What can we learn from AI Chatbot answers to our technoskeptical questions?

by Jacob Pleasants and Dan Krutka

Read the following answers to our five critical, technoskeptical questions (adapted from Postman, 1998) and consider how well they align with how you might respond:

ChatGPT Responses.

The tech world has been afire with people explaining, critiquing, bemoaning, and playing with ChatGPT, an artificial intelligence chatbot developed by OpenAI that can respond to questions with results that are, at least technically, impressive.

What do we make of this?

We admit this is not the optimal use for these questions - they are better posed about a specific technology rather than “technology” in general. Nevertheless, ChatGPT was able to put together cogent responses. The word that keeps coming to mind for these responses is unobjectionable. They are pretty “stock” or “basic” answers; they are not particularly detailed, and they largely leave us wanting more. But at the same time, there isn’t much here with which we would strongly disagree.

This, we think, is largely a product of the machine learning that forms the foundation of ChatGPT. Surely, this AI system was fed a gargantuan amount of text - presumably texts that represent our contemporary cultural conversations. And if we think about the “standard” conversations that exist in our society about technology, ChatGPT has replicated that discourse pretty well. 

The real innovation here, of course, is not so much the quality of the ideas that ChatGPT produces but its skillful use of language. Not only did it grasp the meaning of what are admittedly abstract questions, but it also produced text that is eerily conversational. If we posed these questions to a student and got these responses in return, what would we make of them? We certainly would not suspect that the student was secretly an AI.

On closer inspection, though, there are assumptions in these answers that ought to be questioned. In question 1, the Chatbot assumes that making tasks “easier” and more “efficient” always improves our lives. Of course, numerous technology critics have challenged this assumption (e.g., Borgmann, 1984; Carr, 2014). Nicholas Carr (2014) outlined arguments and provided examples for how increased automation (think Google Maps) can make things “easier” and more “efficient” all while degrading our ability to navigate, wayfind, and know our communities (see our Unfolding a Smartphone curriculum). The answer also fails to account for how technologies change the flow of our lives in unpredictable ways. In fact, the Chatbot treats technologies as neutral tools that just “depend on how it is used or implemented,” an idea that entire fields of study, such as media ecology, reject.

We could go on: the question 2 response fails to account for discriminatory design (Benjamin, 2019) or algorithmic oppression (Noble, 2018); question 3 is answered in a technical way that does not account for how humans are often nudged toward the ideologies of technologies (e.g., cars require massive amounts of space, which sprawls communities outward); question 4 focused on obvious downsides rather than the “unintended” or “unexpected” ones that are difficult to name; and question 5 failed to imagine how removing technologies, or technofasting, can actually bring clarity to their downsides.

Our experiment with the AI Chatbot does show that it can offer generic and (mostly) unobjectionable responses, but if our students offered these responses we might ask them to reference readings, share experiences, and ask their own questions. If anything, this exercise reminded us of how we, as educators, should treat our five critical questions.

References

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.

Borgmann, A. (1984). Technology and the character of contemporary life: A philosophical inquiry. University of Chicago Press.

Carr, N. (2014). The glass cage: Automation and us. W. W. Norton & Company.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Postman, N. (1998, March 28). Five things we need to know about technological change [Address]. Denver, Colorado.
