Applying the Baldwin Test to Ed-Tech

by Charles Logan

Sometimes when I’m reading an educational technology company’s claims about what its products can do, I begin to hear the robotic voice from Radiohead’s “Fitter Happier” (1997). The promises, the platitudes: the empty language invokes MacinTalk Fred, the synthetic voice created by Apple that Radiohead used to narrate “Fitter Happier”. Automate your grading with the assistance of our AI, boasts an ed-tech company. Fitter, happier, more productive, intones Fred. It’s all so many tinny-sounding voices manipulated by humans.  

I’ve been prompted again in recent months to think about the importance of naming the ways language obscures human design. In March, the Center on Privacy and Technology at Georgetown Law decided to no longer use the terms AI, artificial intelligence, and machine learning. Emily Tucker (2022), the Center’s executive director, explained the new policy by noting, “For institutions, private and public, that have power consolidation as a primary goal, it is useful to have a language for speaking about the technologies that they are using to structure people’s lives and experiences which discourages attempts to understand or engage those technologies” (para. 12). In a May op-ed, linguist Emily Bender (2022) echoed Tucker’s focus on institutions and individuals wielding language to hoard and exert power. Bender writes, “But if we mistake our ability to make sense of language and images generated by computers for the computers being ‘thinking’ entities, we risk ceding power — not to computers, but to those who would hide behind the curtain” (para. 10). Both Tucker and Bender warn against being persuaded by tech companies’ self-mythologizing as well as by the uncritical press coverage these narratives so often receive.

Education scholar Ben Williamson brings this conversation about technology, language, and power into the realm of education. After Google published a blog post in March touting a new AI feature for Google Classroom, a post that includes a teacher sharing that students called the feature “Google magic,” Williamson (2022) critiqued the fanciful term, writing, “AI is of course not magic. The discourse and imaginary of magical AI obscures the complex social, economic, technical and political efforts involved in its development, hides its internal mechanisms, and disguises the wider effects of such systems” (para. 15). Again, we see how powerful institutions deploy language to both rebuff understanding and renew hype about what a technology is capable of doing.

To illuminate these systems, their impacts, and the people responsible for the systems and impacts, Tucker and her colleagues at the Center on Privacy and Technology at Georgetown Law have committed to what they’re calling the Baldwin Test, named after the writer James Baldwin (2010) and his reflections on the English language in his essay “Why I Stopped Hating Shakespeare”. A riff on the Turing Test (2022), the Baldwin Test seeks transparency and something closer to truth by describing artifice. Passing the Baldwin Test when writing about technology is akin to revealing the great and powerful Oz to be an insecure white man hiding behind a curtain; it means dispelling magic, whether it’s conjured by Google or Amazon. Tucker sets out the Baldwin Test’s four commitments:

1) Be as specific as possible about what the technology in question is and how it works.

2) Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency.

3) Name the corporations responsible for creating and spreading the technological product.

4) Attribute agency to the human actors building and using the technology, never to the technology itself.

[Image: Artistic portrait of James Baldwin surrounded by a collage of images. James Baldwin by AK Rockefeller, shared via Creative Commons.]

I want to consider how we might apply and extend the Baldwin Test to educational technology. (Thanks to Ben Williamson for his tweet prompting this post.) Like the Civics of Technology’s five critical questions about technology, the Baldwin Test can help teachers and students name the often implicit or hidden sacrifices, harms, and changes caused by educational technology. Using the Baldwin Test to describe educational technology might look like this:

1) Be as specific as possible about what the technology in question is and how it works.

Before: According to the ed-tech company, its social annotation tool’s engagement score can gamify students’ reading and keep them accountable for the day’s material.

After: So-called engagement scores calculated with data like “time on task, pages read, annotations written and how frequently a document is opened…are misleading because students read in many different ways” (Cohn & Kalir, 2022).

2) Identify any obstacles to our own understanding of a technology that result from failures of corporate or government transparency. 

Before: Google claims students using its products have ownership over their data.

After: Google’s claims about student ownership of data should be questioned since the company makes it difficult to discern the distinction between a student’s data and the information it collects through Google Apps for Education as part of its surveillance-based business model of targeted advertising (Lindh & Nolin, 2016).

3) Name the corporations responsible for creating and spreading the technological product.

Before: Online proctoring software uses facial recognition technology to flag suspicious behavior. 

After: The online proctoring company Honorlock uses Amazon’s facial recognition software Rekognition (Hill, 2022).

4) Attribute agency to the human actors building and using the technology, never to the technology itself.

Before: Online proctoring software uses facial recognition technology to flag suspicious behavior.

After: The online proctoring software made by the company Honorlock uses Amazon’s facial recognition software Rekognition to flag moments that a professor must then review to determine whether they show a student’s possible suspicious behavior or evidence of cheating. Whether a student’s appearance and actions are flagged as suspicious in the first place is the outcome of underpaid workers from around the world labeling training data for Amazon’s proprietary algorithm (Doyle-Burke & Smith, 2021).

In addition to the four commitments proposed by the Center on Privacy and Technology at Georgetown Law, I think a focus on educational technology would benefit from at least three more elements. They are:

5) Name the technology’s theory (or theories) of learning.

No technology is neutral. And when it comes to educational technology, no application or platform comes free of at least one theory of learning. Naming the encoded theory or theories of learning allows people to question how the technology does and doesn’t align with their professed visions of education (Gleason & Heath, 2021). 

Example: The behaviorist learning platform ClassDojo shapes students’ actions through a system of punishments and rewards.

6) Describe the technology’s effects on pedagogy.

Educational technology can affect both students’ learning and teachers’ pedagogy. Describing the possible ways ed-tech influences teachers’ pedagogy can help educators reflect on ways to resist, refuse, and reimagine a technology they may want to use or, as is often the case, may be compelled to use by their administration.

Example: GoGuardian, a company that makes Internet filtering and monitoring software, can affect teachers’ pedagogy by heightening their sense of power over students and their desire to surveil students (Kumar et al., 2019).

7) Highlight the technology’s impacts on the environment.

Education scholar Neil Selwyn (2021) argued that “the continued expansion of digital technology throughout education can in no way be rationalised as somehow off-setting the hugely detrimental nature of the full life-cycle of the digital products and processes that go to make up ‘ed-tech’. Instead, the environmental costs of technology use in education need to be properly acknowledged” (para. 22). Drawing attention to ed-tech’s environmental harms offers teachers and students an alternative to technosolutionism’s refrain of throwing more digital technology at the existential problems we face in an era of irreversible climate change and increasingly frequent extreme weather events.

Example: The reliance of school systems on Google Workspace for Education and Amazon Web Services is problematic in part because of the enormous amount of water required to maintain Google’s and Amazon’s data centers (Solon, 2021).

These additional elements aren’t exhaustive. What other elements would you add to the Baldwin Test of educational technology?

Teachers and students might apply the Baldwin Test in different ways, paying attention to each unique context and what the writer wants to expose about the workings and consequences of an educational technology. I want to share two activities that can help educators and their students attend to the language of educational technology.

Write a Transparent Press Release

Discussing ed-tech agitprop, Audrey Watters (2019) observed that ed-tech companies often attempt to realize their preferred future by issuing press releases. In this activity, students can apply the Baldwin Test’s elements to rewriting a press release published by an ed-tech company, government agency, or school. The revision process may look like going through the press release line by line and revising a word, phrase, or sentence according to the Baldwin Test’s elements. If company press releases and blog posts are means of hawking what often amounts to snake oil–assertions too frequently parroted by other institutions–then this activity deconstructs that oil into its components, reconstituting it into a tonic that might offer a little more truth about the supposed magic.

Annotate an Educational Technology Company’s Press Release or Website

In their book Annotation, Remi Kalir and Antero Garcia (2021) argue that “Annotation…helps author a counternarrative, or an alternative to conventional methods and messages” (p. 111). Using the Baldwin Test’s elements and the social annotation tool Hypothesis, teachers and students can join together to annotate an educational technology company’s press release or website, making observations about the language with the Baldwin Test as a guide. For example, install the Hypothesis web browser extension, open the website from Evolv Security explaining its product Evolv Cortex AI, and join the conversation I’ve started about a technology company selling school districts its problematic scanners (Singer, 2022). (You can also visit a presentation Autumm Caines and I (2021) delivered for more on #AnnotateEdTech.) As scholar Chris Gilliard (2022) regularly says, “Every future imagined by a tech company is worse than the previous iteration.” In this activity, teachers and students can start to reimagine more just futures–and presents–by challenging the often harmful worlds tech companies assert and sell.

The desire to imbue technology with mystical powers is not going anywhere. A Google employee made news recently when he claimed Google’s Language Model for Dialogue Applications (LaMDA) was sentient (Tiku, 2022). Timnit Gebru pushed back against the claim (Johnson, 2022). A former Google employee whom the company’s leadership fired for questioning the ethics of its large language models, Gebru said, “I don't want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans, and that’s where I’d like the conversation to be focused” (Johnson, 2022, para. 8). The Baldwin Test provides us with methods for talking about humans harming other humans with and through technology. Combined with the Civics of Technology’s critical questions, the Baldwin Test and its extended educational technology-specific concerns ask students and teachers to cultivate a technoskeptical approach (Krutka, Heath, & Mason, 2020) to educational technology that, to channel Radiohead again, starts with an incredulous “OK, computer” instead of immediately embracing whatever corporate lines of code and copy we’re told to buy to make ourselves and our students fitter, happier, more productive.

References

Baldwin, J. (2010). The cross of redemption: Uncollected writings. Random House.

Bender, E. (2022, May 11). Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’. Seattle Times. https://www.seattletimes.com/opinion/look-behind-the-curtain-dont-be-dazzled-by-claims-of-artificial-intelligence/

Cohn, J., & Kalir, R. (2022, April 11). Why we need a socially responsible approach to ‘social reading’. The Hechinger Report. https://hechingerreport.org/opinion-why-we-need-a-socially-responsible-approach-to-social-reading/

Doyle-Burke, D., & Smith, J. (Hosts). (2021, April 7). Atlas of AI with Kate Crawford. In The Radical AI Podcast. Radical AI LLC. https://www.radicalai.org/atlas-of-ai

Gilliard, C. [@hypervisible]. (2022, June 21). Yo, who wants one? [Image attached] [Tweet]. Twitter. https://twitter.com/hypervisible/status/1539386924938543104?s=20&t=raaTBTre__SDZs-jiLhl_Q

Gleason, B., & Heath, M. K. (2021). Injustice embedded in Google Classroom and Google Meet: A techno-ethical audit of remote educational technologies. Italian Journal of Educational Technology, 29(2), 26–41.

Hill, K. (2022, May 27). Accused of cheating by an algorithm, and a professor she had never met. New York Times. https://www.nytimes.com/2022/05/27/technology/college-students-cheating-software-honorlock.html

Johnson, K. (2022, June 14). LaMDA and the sentient AI trap. Wired. https://www.wired.com/story/lamda-sentient-ai-bias-google-blake-lemoine

Kalir, R., & Garcia, A. (2021). Annotation. MIT Press.

Krutka, D. G., Heath, M. K., & Mason, L. E. (2020). Editorial: Technology won’t save us – A call for technoskepticism in social studies. Contemporary Issues in Technology and Teacher Education, 20(1). https://citejournal.org/volume-20/issue-1-20/social-studies/editorial-technology-wont-save-us-a-call-for-technoskepticism-in-social-studies

Kumar, P. C., Vitak, J., Chetty, M., & Clegg, T. L. (2019). The platformization of the classroom: Teachers as surveillant consumers. Surveillance & Society, 17(1/2), 145–152.

Lindh, M., & Nolin, J. (2016). Information we collect: Surveillance and privacy in the implementation of Google Apps for Education. European Educational Research Journal, 15(6), 644–663.

Logan, C., & Caines, A. (2021, April 21-22). Creating counter-media texts in the open with #AnnotateEdTech [Conference presentation]. OERxDomains21, Online. https://docs.google.com/presentation/d/18zlOTBLdRni2Ul173Jpa_iwJD6gYO4aEnMFJzyhnjFk/edit?usp=sharing

Radiohead. (1997). Fitter happier [Song]. On OK Computer. Parlophone; Capitol.

Selwyn, N. (2021). Ed-tech within limits: Anticipating educational technology in times of environmental crisis. E-learning and Digital Media, 18(5), 496–510.

Singer, N. (2022, June 26). Schools are spending billions on high-tech defense for mass shootings. New York Times. https://www.nytimes.com/2022/06/26/business/school-safety-technology.html

Solon, O. (2021, June 19). Drought-stricken communities push back against data centers. NBC News. https://www.nbcnews.com/tech/internet/drought-stricken-communities-push-back-against-data-centers-n1271344

Tiku, N. (2022, June 11). The Google engineer who thinks the company’s AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Tucker, E. (2022, March 8). Artifice and intelligence. Center on Privacy and Technology at Georgetown Law. https://medium.com/center-on-privacy-technology/artifice-and-intelligence%C2%B9-f00da128d3cd 

Turing test. (2022, June 28). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Turing_test&oldid=1095441013

Watters, A. (2019, November 28). Ed-tech agitprop. Hack Education. https://hackeducation.com/2019/11/28/ed-tech-agitprop

Williamson, B. (2022, March 17). Google magic. Code Acts in Education. https://codeactsineducation.wordpress.com/2022/03/17/google-magic/
