Conducting a Technoethical Audit of ChatGPT
By Charles Logan and Sepehr Vakil
Announcements
Next Monthly Tech Talk on Tuesday, 3/5/24. Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, March 5th, 2024 at 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. Learn more on our Events page and register to participate.
Spring Book Clubs Announcement! We will hold three book clubs in spring 2024, including two of the books that most influenced our Civics of Tech project, with a new book sandwiched in between them. We often talk about how Neil Postman’s work influenced our ecological perspective and how Ruha Benjamin’s work influenced our critical perspective. Yet we’ve never held book clubs to discuss either. We’re excited to return to those two classics and also dive into Joy Buolamwini’s highly anticipated new book. You can find all our book clubs on our Events page.
Register to join us on March 21st as we discuss Joy Buolamwini’s new book, Unmasking AI: My Mission to Protect What is Human in a World of Machines.
Register to join us on April 25th as we discuss Ruha Benjamin’s instant classic, Race After Technology: Abolitionist Tools for the New Jim Code.
How do you tell a coherent story about AI, equity, and public education in 10 weeks? That’s the challenge we faced as we put together the syllabus for our recently completed undergraduate course Design of Learning Environments: AI, Equity, and Public Education. You can read our syllabus to see the arc we eventually landed on (and our liberal use of optional deeper dives).
As we put together our syllabus, one of our guiding goals was to engage students in narratives about generative AI that they may not have previously encountered. This meant we had to contextualize the inundation of AI hype within a longer history of both AI and educational technology, in order to help students understand that claims about advanced technologies disrupting education and solving equity issues aren’t new. See, for example, Audrey Watters’ book Teaching Machines: The History of Personalized Learning, or the book we read in class, Justin Reich’s Failure to Disrupt: Why Technology Alone Can’t Transform Education. We also sought ways for students to explore the ethical dimensions of AI, and generative AI in particular. Fortunately, we had a pedagogical tool at the ready: the technoethical audit.
Our technoethical audit of ChatGPT owes its origins to two papers: “Foregrounding technoethics: Toward critical perspectives in technology and teacher education” by Dan Krutka, Marie Heath, and Bret Staudt Willet; and “Injustice embedded in Google Classroom and Google Meet: A techno-ethical audit of remote educational technologies” by Benjamin Gleason and Marie Heath. Prior to conducting the audit, students read the two papers. Our discussion of the papers opened new lines of inquiry for students, helping them see how an educational technology introduces certain economic models into schools, comes encoded with learning theories, and can deepen pedagogies of surveillance. Students also applied our expanded analytical toolkit to a New York Times article about Newark, New Jersey schools experimenting with Khan Academy's chatbot Khanmigo. We asked students to weigh, for instance, the affordances of expanded access to advanced technology against the financial burdens placed on the school district in exchange for access to Khanmigo (and its use of GPT-4). Finally, we set up the technoethical audit of ChatGPT. Students selected from a list of eight dimensions and read two articles about their dimension.
Our following class session was one of those classes you know you’ll remember because the energy feels heightened and you see and hear connections being formed in real time. We were especially struck by two small-group conversations. The first group read about the harms endured by Kenyan data annotators as they trained ChatGPT; the group also read about the overall dependence on ghost work that makes AI possible. This group–which included several students who’d made clear their belief in the power of AI–was taken aback by what they’d read, and together they made sense of what they found to be often shocking revelations.
Another group read about the environmental costs of ChatGPT, specifically the amount of water required for cooling data servers and generating electricity. Like their peers in the labor group, this group was surprised by what they discovered. “I had no idea,” said one student.
In the final phase of the audit, we reshuffled the groups again so that each new group contained members from across the eight dimensions. Students identified affordances and constraints of ChatGPT in K-12 and higher education and then answered two final questions taken from the “Foregrounding technoethics” article. First: Is ChatGPT ethical? And second: What guidance would the students offer for how to take informed action with and about ChatGPT?
For us, the technoethical audit served several purposes. From a purely logistical standpoint, the jigsaw approach to the readings ensured we could address a range of issues related to ChatGPT. More importantly, while the audit asks students to identify the platform’s affordances and constraints in teaching and learning contexts, it does not end in ambivalence. Students are explicitly asked to grapple with whether or not ChatGPT is ethical, and though the question is posed as a binary, many students rejected the framing in favor of more nuanced answers that contextualized when, how, and whether ChatGPT might be considered ethical and/or unethical.
We should say, too, that the collective technoethical audit of ChatGPT prepared students to complete a technoethical audit of an educational technology that uses AI, a technology each small group had selected to examine for a quarter-long project. The students incorporated their technoethical audits of software like Kahoot!, Gaggle, and Turnitin into digital zines. These zines–inspired by Shirin Vossoughi, Ruha Benjamin, and their students and collaborators–are public resources, ones we hope members of the Civics of Technology community consider using in their own teaching contexts. With students’ permission, we’re sharing a few of those digital pedagogical zines. You can read about SoapBox, a tool that uses voice recognition; Sherpa, an AI-powered platform for oral exams; and Canva, the popular design platform. You can also access the final project’s documents for use in your own teaching.
As calls for expanding AI into schools continue to dominate headlines and policy discussions, engaging students in learning about the often obscured dimensions of generative AI remains a priority. Notably, many of our students shared in their final reflections that they plan to apply the technoethical audit to other technologies in their lives. We were heartened to read about their intentions, because if the past few years of cryptocurrencies, NFTs, and chatbots have revealed anything, it’s that Big Tech and its techno-optimists demand an informed, critical public.