Here are “101 Creative Uses of AI in Education.” Are They Truly Creative?
CoT Announcements
Next Book Club on Thursday, 09/21/23: We are discussing Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil on September 21, 2023 @ 8:00 p.m. EDT. Register for this book club event if you’d like to participate.
Next Monthly Tech Talk on Tuesday, 10/03/23. Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, October 3rd, 2023 at 8-9pm EDT/7-8pm CDT/6-7pm MDT/5-6pm PDT. Learn more on our Events page and register to participate.
Critical Tech Study: If you self-identify as critical of technology, please consider participating in our study. We are seeking participants who hold critical views toward technology to share their stories by answering the following questions: To you, what does it mean to take a critical perspective toward technology? How have you come to take on that critical perspective? You can participate via OUR SURVEY, and you are welcome to share it with others. Thank you!
by Jacob Pleasants
My university recently pushed out a set of modules about generative AI in an attempt to provide some guidance on its use in university teaching. In those modules (which were, overall, pretty reasonable), I came across a link to a resource titled “101 Creative Uses of AI in Education,” and I was intrigued.
It is, as it says, a “crowdsourced” collection of ideas about using AI in education. The editors state:
This collection captures where we are at this moment in time with our collective thinking about potential alternative uses and applications of AI that could make a real difference and potentially create new learning, development and opportunities for our students and educators, for all of us.
The editors provided a general template for contributions and seemed to welcome any and all ideas. It was not heavily “curated” as far as I can tell.
I was tempted to write up a review of the text, but then I had a different idea. If the volume truly does capture collective thinking of some kind, it is perhaps an intriguing window into the minds of educators, most of whom are in higher education. Presumably, these are educators who have given more than a passing amount of thought to how they might approach AI. In this collection are their ideas about what counts as “creative” applications within teaching and learning.
And so, what if we think of their contributions as “data?” What might be gleaned from analyzing those data? Thus began what might be a very ill-conceived research project. It might be a terrible idea. CoT community, please talk sense into me!
How far have I taken this? Well, I read through and started coding all 101 of the entries in this collection (many of them, fortunately, are pretty short). My guiding questions: What kinds of ideas were being put forward? How distinct were these 101 contributions? Which ones are mostly in line with the intended uses of these technologies? Which ideas could be considered “creative?”
A General Note: The ideas range quite a bit in terms of how extensively they are developed. Some give a clear outline while leaving open the specifics of implementation. A few give a very step-by-step description of an AI-based task. Some others are extremely vague to the point where the idea is more of a “hope” (e.g., AI will make things great) than a concrete suggestion for education.1
What Did I Find?
As I read through the ideas, it became apparent that many were very similar to one another. One that came up repeatedly was the idea to use ChatGPT to generate scenarios for Project Based Learning (PBL) or case studies or to establish contexts for an assessment question (ideas 7, 12, 16, 58, 86). Another was for students to evaluate AI output by comparing it against human-generated responses, different AI systems, established facts, etc. (25, 28, 29, 43, 78, 82, 84, 93). I was, of course, looking for similarities and patterns as I was trying to categorize the ideas, but I did not expect such tight convergence. It was rather peculiar to read what amounted to the same idea multiple times. Does this indicate a lack of creativity? Does it indicate a sort of consensus on what uses of AI are reasonable?
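As a side note on method: below is a minimal, hypothetical sketch of how a tally like this could be produced once each idea number has been assigned to a category label. The two groupings shown are only the examples mentioned above, not the full coding of all 101 entries.

```python
from collections import Counter

# Hypothetical sketch: map each qualitative code (category label) to the
# idea numbers assigned to it, then tally how many ideas fall under each code.
# Only the two example groupings mentioned above are filled in here.
codes = {
    "Generate scenarios for PBL / case studies": [7, 12, 16, 58, 86],
    "Students evaluate AI output": [25, 28, 29, 43, 78, 82, 84, 93],
    # ...the remaining entries would be coded the same way
}

tally = Counter({category: len(ideas) for category, ideas in codes.items()})
for category, count in tally.most_common():
    print(f"{category}: {count} ideas")
```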
Below are the categories that I ended up forming to capture the different ideas in the text.
So, what to make of this tabulation? Three categories are noteworthy in that they contain ideas that take a potentially critical stance toward AI: Evaluating AI Output, Explore Implications of AI, and Interrogate AI Systems. Together, these represent 27 ideas, and even if they are not all equally skeptical of AI (they aren’t), it still indicates a broad interest in helping students engage in technology criticism. I’m all for it.
On the other hand, many of these ideas are not exactly creative. In fact, many of the ideas describe the intended use cases of these technologies. Take, for instance, the ideas about using AI to offload some of the work that students and teachers would otherwise do, which together account for 26 of the ideas. Even if a reader has not necessarily thought of all of those offloading strategies, are they really all that creative? Some are no doubt clever, but they all amount to identifying things that students and teachers currently have to do themselves, then suggesting that AI do it instead. Labeling such ideas “creative” can give them a veneer that they do not necessarily deserve, and it also obscures the many unintended and problematic consequences of such “offloading.”
A curious set of ideas are those in which Students Use AI to Create a Product. I will admit that I am still unsure what to call this category. At one point, it was simply called “Use AI” because the ideas are essentially ones in which students simply generate some output using AI. On the one hand, I had not necessarily thought about having students use something like Midjourney to generate images of fantastical things. But then again, isn’t that exactly what something like Midjourney is supposed to be able to do? Perhaps more importantly, while the ideas around how students can use different AI systems might very well be creative, they also wholly miss the problematic nature of the systems themselves. Midjourney is a case in point: the system was trained on a data set built on the theft of the work of countless artists. Encouraging students to use Midjourney means asking them to share in the fruits of that exploitation. Creative? Maybe. Ethical? Maybe not.
On the whole, my analysis is heartening in some ways (there’s lots of interest in getting students to critique these AIs!) and pretty disappointing in others (there is also quite a bit of uncritical use here). But what I really have at this point are questions, and…
CoT Community, I Need Your Help!
Is this just a bunch of meaningless analytical wheel-spinning? Have I uncovered anything interesting here? Does this say anything important about how people are thinking about AI in education?
End Note
Many ideas that involve AI text generators include some example output (e.g., exchanges with ChatGPT). Unfortunately, the images are at too low a resolution to read those examples easily. If you really work at it, you might be able to figure out the text, but it’s not easy. This seems like something that should have been addressed!