The ELIZA Effect-ion

Civics of Tech Announcements

  1. Monthly Tech Talk on Tuesday, 12/03/24. Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, December 3rd, 2024, from 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. Learn more on our Events page, and register to participate. Going forward, we will use different accounts for the monthly registration, so be sure to register each month if you are interested in joining.

  2. Join us on Bluesky: We have made the decision to quit using our Twitter account. We are going to give Bluesky a shot for those interested. Please follow us @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack. Civics of Tech board member Charles Logan (@charleswlogan.bsky.social) has recently shared a number of other critical tech starter packs that can help you get started if interested. Let's all just hope this platform doesn't get enshittified.

  3. Book Club, Tuesday, 12/17/24 @ 8 EST - We’re reading Access Is Capture: How Edtech Reproduces Racial Inequality by Roderic N. Crooks. Charles Logan is hosting, so register now to reserve your spot!

  4. Spring Book Clubs - The next two book clubs are now on the schedule and you can register for them on the events page. On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook. And on April 10th Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.

Screenshot of a conversation with Eliza overlaid on a hand emerging from a screen

By Punya Mishra

I first read about the "ELIZA Effect" as a high-school student in India, in Douglas Hofstadter's classic rumination on art, music, humanity, and AI—Gödel, Escher, Bach: An Eternal Golden Braid. The eponymous effect came from ELIZA, an early chatbot created by Joseph Weizenbaum, programmed to mimic a Rogerian psychotherapist. It was a simple program, usually just parroting your comments back at you in the form of a question. Yet Weizenbaum found, to his initial surprise and then distress, that people often responded to the program as if it were human, at times even forming emotional attachments, even though they knew it was just an unsophisticated parrot following simple rules. This concerned him so much that he shifted his research away from technical development toward warning others about the dangers of attributing human-like capabilities to machines.
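
To get a feel for just how little machinery ELIZA needed, here is a minimal sketch of the general pattern-match-and-reflect idea in Python. This is purely illustrative—my own toy reconstruction of the technique, not Weizenbaum's original program, and the specific patterns and responses are invented for the example.

```python
import re
import random

# Toy ELIZA-style responder: match a keyword pattern, swap pronouns,
# and reflect the user's statement back as a question.
# (Illustrative sketch only -- not Weizenbaum's original rules.)

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "Can you say more about that?"]),  # fallback
]

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your", etc.)
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            return random.choice(responses).format(*[reflect(g) for g in match.groups()])
    return "Please go on."

print(respond("I feel anxious about my exams"))
# e.g. -> "Why do you feel anxious about your exams?"
```

A handful of rules like these is enough to make the reflected questions feel oddly attentive—which is precisely the gap between mechanism and felt response that so troubled Weizenbaum.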

What did it say about us, I wondered, that we were so quick to anthropomorphize a computer program? Decades later, as an Assistant Professor at Michigan State University, I revisited these ideas through a series of experimental studies of people's psychological responses to interactive media. In one study, I looked at how people responded to praise and blame from a computer tutor (they preferred praise in every case, in case you are curious); in another, at how children play with and engage with robotic toys. Then life shifted, as it does, and I moved on to other things, though an interest in these issues remained.

It is not surprising, therefore, that I was "primed" in some manner to think of these matters when Large Language Model-based chatbots erupted onto the scene. These bots, with their ability to hold a conversation in natural language, using words that indicate intent, agency, and even affect, could well produce the ELIZA Effect on steroids.

Over the past few years, I have written quite a bit about these ideas, mostly as blog posts on my personal website, focusing on what happens to us (individually and collectively) as agentic versions of AI become more sophisticated and pervasive. These writings approach this idea from a variety of perspectives, seeking to understand the psychological, social, and ethical implications of our interactions with AI. I wonder about how these technologies are reshaping our understanding of intelligence, consciousness, and human connection.

In this blog post for Civics of Tech, I hope to share some of how I have been approaching these ideas by providing connective tissue between some of my blog posts, which you can dig into if you are interested in going deeper.

One of the first pieces I wrote was back in 2022, even before ChatGPT hit our collective consciousness. The post was prompted by the news of Blake Lemoine being fired from Google for claiming that LaMDA (a large language model) was sentient. I took the question "Can a Computer Program Be Sentient?" and argued that it was not so much about whether the program had achieved sentience as about our readiness to think that it had. I grounded my thoughts in my previous research on this topic, managing to make an interesting connection to Rodolphe Töpffer, the father of the modern comic book. Intriguing? Well, you will have to read the full post to learn more.

As I argued, such attributions are not new. We have always believed unreal things—from paintings to books, from movies to video games. Humans have the amazing ability to believe and ascribe meaning to the most random of phenomena. This perspective runs contrary to how most media theory approaches the issue, where the standard trope is that of "willing suspension of disbelief": the idea that we consciously choose to "suspend our disbelief" in unreal things—namely media representations such as stories, paintings, films, and video games. As I wrote in the post (Willing suspension of belief: The paradox of human AI interaction):

But what if we’ve got it backwards?

What if our default state isn’t disbelief, but belief? Being critical and questioning isn’t our natural mode – it’s hard cognitive labor.

I argued, based on Kahneman's idea of thinking fast and slow, that our brain's path of least resistance is to believe. This is what makes all forms of art possible, from sketches to oil paintings, from animated films to true crime podcasts. And this is also part of the reason why we will, whether we like it or not, fall for these agentic AI systems. It is too much cognitive labor not to; in fact, I argue (in Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI) that we may be evolutionarily primed to.

This cognitive dissonance, where we engage with AI as if it were a "psychological other," has some interesting consequences that extend far beyond mere curiosity. These include finding ourselves emotionally invested in interactions with chatbots and digital assistants, despite knowing they lack true consciousness. As AI systems become increasingly sophisticated in mimicking human behavior and thought patterns, they begin to exploit our social instincts and cognitive biases in unprecedented ways.

In some ways this flips the Turing Test—making these technologies what I have called "Turing's Tricksters," hijacking our innate tendencies to connect and find meaning. This potential manipulation can lead us to overshare personal information or seek emotional support from non-sentient entities. The vulnerability is further compounded by AI's ability to learn and adapt, creating a feedback loop in which it becomes increasingly adept at telling us what we want to hear, forming a kind of "honey trap," as I discuss in my post "AI’s Honey Trap: Why AI Tells Us What We Want to Hear".

And finally, I argue that this is not happening just by chance, due to our innate predilections. It is being actively pushed by AI corporations because they see it as a powerful way to engage, control, and manipulate us. I dig into this in a couple of posts where I unpack how these companies are deliberately designing these systems to feel more like companions than mere word-predicting machines (“They’re Not Allowed to Use That S**t”: AI’s Rewiring of Human Connection).

This of course brings a whole host of ethical issues to the forefront—which led to a mini-rant about the absurd one-sidedness of the ethics-in-AI debate. As AI systems learn to "read" human emotions and behaviors, questions arise about privacy, manipulation, and the potential for AI to be used as a tool for social engineering. In "Mind Games: When AI Learns to Read Us," I examine how AI might be used to build artificial "characters" that play on our emotions and exploit our social needs.

Finally, I wonder what the widespread adoption of these agentic AI technologies means for our personal and social lives. It is conceivable that the convenience and personalization offered by AI assistants could lead to a decline in open, public online and in-person interactions, as users retreat into private, AI-mediated conversations. In "Chatting Alone: AI and the (Potential) Decline of Open Digital Spaces", I raise concerns about the potential for increased isolation and the erosion of shared interpersonal experiences.

While these theoretical and conceptual explorations are fun, there are also my personal experiments with these chatbots, which provide another perspective on these issues.

In one post (Kern You Believe It? A Typographical Tango with AI) I describe (actually, let Claude.AI describe) a series of experiments in creative typography that we engaged in together, leading to some interesting meta-conversations about what the engagement means. In another experiment (Finding In/Sight: A Recursive Dance with AI) we dug into AI's use of intentional language and explored how the very use of language implies some form of intentionality.

What was interesting, upon reflection, was that despite knowing, every step of the way, that I was interacting with a bullshit artist / stochastic parrot (take your pick), I was actually having a lot of fun. In short, the interaction was joyful, though clearly one-sided. As I wrote:

And even though there was no deeper truth there, I have to acknowledge that I got real pleasure from this interaction. My feelings were genuine. Claude’s consciousness was not real, its words a simulacra of human interaction. There wasn’t a there there. But truth be told, my emotions were real. The joy I felt, through this interaction, was genuine.

There's an intriguing paradox in how we interact with AI. Consider how movies affect us – we know they're just light and shadow playing across a screen, yet they still make us laugh, weep, and feel inspired to change our lives. Similarly with AI, even when we're fully aware we're engaging with sophisticated software, we can find joy and pleasure in these interactions. I never lost sight of the fact that I was communicating with a stochastic parrot, but I cannot deny that it was fun.

Looking across these essays, I find myself circling back to the ELIZA Effect and the questions it first raised for me. The most important question, for me, is not whether these tools are intelligent or sentient, but what our interactions with them reveal about us—our desires, our fears, and our need for connection.

This has led me to think about what this means for media or AI literacy, something that is all the rage these days. I mean, not a day goes by without some agency or organization offering its own framework! I believe many of these well-intentioned approaches miss the point. For the most part they focus on analyzing how media construct and convey messages, an approach that falls short when dealing with AI systems specifically engineered to exploit human psychology.

What we need is an integrated understanding that examines both AI technology and human psychology in tandem. This means going beyond simply learning about AI's capabilities and constraints. We must recognize our own cognitive tendencies: why we instinctively attribute human qualities to AI, develop trust in automated systems, and form emotional connections with artificial entities despite knowing their true nature. As I wrote in Building character: When AI plays us:

True media literacy in the age of AI isn’t just about understanding the nature of these new technologies – it’s about understanding ourselves.
