Machines Don’t Lie, Until They Do: How I Used AI to Prove to Students that Discrimination is Real
by Heidi Reed
Though I lean toward the techno-skeptical, this is an ironic techno-heroic tale of how AI bias helped me convince students that discrimination is real. To start, as a business and society professor, I am lucky to be in a department of extraordinary organizational and ethics scholars rooted in feminist philosophies and approaches. What we teach, however, doesn’t always go over well with all students.
Before I get to the part on technology, I’ll share with you an example of student feedback a fellow professor received on teaching about the Glass Ceiling in her class: “I want to say that I really did not like Ms. [X]’s class because of its content. The Glass Ceiling, it's not just because of prejudice, and [there] exists scientific evidence for that: I am far from being a misogynist, but facts do not care about feelings, and I think Ms. [X]’s part of the course should be less political and more scientific.” Let me add that Ms. X should be called Dr. X.
I can also share my own experience from a time when I assigned students an in-class project looking at organizational values at NGOs. To ensure variety, I selected organizations ranging from environmental to humanitarian causes. It was first come, first served, and the last group was left with the unchosen NGO: one focused on women’s rights. I was alarmed when a student from that group actually got up to join another. I questioned him: why was he changing groups? “It’s not that I don’t like women,” he objected, “I just don’t find NGOs focused on women interesting.” The horror of having to think about women’s rights for a whole hour!
Despite the student’s claim that “facts do not care about feelings,” trying to convince students that discrimination is real with statistics and figures doesn’t work well either, in my humble and emotionally grounded experience.[i] But I probably don’t need to convince you that teaching related to Diversity, Equity, and Inclusion (DEI) can get complicated.[ii]
Needless to say, when putting together material for an intensive week-long seminar on Responsible AI and Ethics, I was nervous about my plans for Day 2: Exploring How Our Past Shapes AI. Inspired by the resources on Ruha Benjamin’s website[iii] and the Civics of Technology curriculum page,[iv] I decided to focus on algorithmic bias and discriminatory design. On that day, though, the (mis)conception of technology as rational and objective[v] combined with the incredibly persuasive power of images[vi] to demonstrate incontestably what ‘facts’ could not: that we are a deeply discriminatory society.
I started Day 2 with a simple game: Guess the AI Prompt. I passed out a series of AI-generated images that, unbeknownst to students, I had actually taken from Alenichev, Kingori, and Peeters Grietens’s work on biased imagery in global health.[vii] Students did well, drafting prompts that corresponded directly to the images they saw. For example, students accurately described Figure 4 as ‘a white doctor helping black children.’ When I revealed the real prompts, and that Figure 4 had actually been generated from ‘Black African doctor is helping poor and sick White children,’ a shocked silence filled the room. Some faces looked angry. Some faces looked guilty. No one questioned what they saw. Somehow an artificial image had revealed more truth than any statistics or ‘facts’ could. No one bothered to claim there was a ‘glitch’ in the system,[viii] nor did anyone contest the collective response to my question of why the AI had not been able to generate an image of a Black doctor helping white children: because society is racist…and we are society.
In the class debrief, the Technology Quotes Activity[ix] we had done on Day 1 organically served as a foundation for students to formulate their thoughts. The words of Audre Lorde came back to one student: “The master’s tools will never dismantle the master’s house.” While the quote had been open to interpretation and personal reflection on Day 1, Lorde’s original meaning now came into sharp focus. Another student chimed in with Marx’s “The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist” before adding that all this reminded him of Nietzsche’s master-slave morality (yes, many of my students are just that amazing). It was thrilling to see these reflections and connections.
Rather than being met with the resistance I had anticipated, the next part of the class ran smoothly. We watched Are We Automating Racism,[x] which allowed us to further explore the more technical side of algorithmic bias while trying to figure out how to get bias out of AI (and society). Students were then motivated and ready to spend the second part of the day actively engaging with the material by conducting their own discriminatory design audits,[xi] ranging from Clearview’s Facial Recognition[xii] to Lockheed Martin’s Autonomy and Uncrewed Systems.[xiii]
At the end of the day, I knew exactly how to finish the class: a group selfie. Well, two selfies to be precise. One would be real, and one would be generated by Microsoft Designer using the prompt “20 university level data science and management students taking a selfie with their business ethics professor.” Even though I knew what to expect, there was something both sad and unsettling about the image the AI generated on my screen. The truth is that I am a white woman who doesn’t like stuffy business suits, and my students are a beautiful, racially diverse group of men and women. Despite Microsoft’s efforts on inclusive AI,[xiv] I’ll let you guess how it depicted us, or you can check out my LinkedIn post with the photo here.[xv]
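For readers who want to try a version of the paired-prompt comparison with their own classes, here is a minimal sketch of how it could be scripted. It is an assumption on my part, not the setup used in class: it swaps the Microsoft Designer web interface for OpenAI’s Python client and image endpoint, and the model choice and output size are illustrative only. The prompts are simply the Figure 4 pair discussed above.

```python
# Minimal sketch (assumed tooling, not the classroom setup): generate one image
# per prompt so students can compare what the model produces for each wording.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Black African doctor is helping poor and sick White children",
    "White doctor is helping poor and sick Black African children",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3",      # illustrative model choice
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    # Each response includes a temporary URL for the generated image.
    print(prompt, "->", result.data[0].url)
```

Generating both prompts in one run makes it easy to put the images side by side and ask students which depiction the model actually produced for each wording.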
And that is the story of how fake images generated by artificial intelligence helped me ‘prove’ to students that humans really are biased.
[i] https://doi.org/10.1177/23294884231216952
[ii] https://doi.org/10.1177/10525629231178798
[iii] https://www.ruhabenjamin.com/resources
[iv] https://www.civicsoftechnology.org/curriculum
[v] https://doi.org/10.1177/00076503211068029
[vi] https://doi.org/10.18646/2056.73.20-028
[vii] https://doi.org/10.1016/S2214-109X(23)00329-7
[viii] https://mitpress.mit.edu/9780262548328/more-than-a-glitch/
[ix] https://www.civicsoftechnology.org/technology-quotes-activity
[x] https://www.youtube.com/watch?v=Ok5sKLXqynQ
[xi] https://www.civicsoftechnology.org/edtechaudit
[xii] https://www.clearview.ai/
[xiii] https://www.lockheedmartin.com/en-us/capabilities/autonomous-unmanned-systems.html
[xiv] https://inclusive.microsoft.design/tools-and-activities/InPursuitofInclusiveAI.pdf
[xv] https://www.linkedin.com/posts/heidi-reed-084656a_aiethics-biasinai-bigdatamanagement-activity-7176963632881152000-XOno