What AI Literacy do we need?
Civics of Tech Announcements
Civics of Tech Parent Testimonials, by 11/1/24 - Read Allie’s blog post and click here by November 1, 2024 to submit your testimonial about how educational technologies are manifesting in your child(ren)’s schooling.
Monthly Tech Talk on Tuesday, 11/12/24 (Note the updated date!). Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, November 12th, 2024, from 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. Learn more on our Events page and register to participate.
Book Club, Tuesday, 12/17/24 @ 8pm EST - We’re reading Access Is Capture: How Edtech Reproduces Racial Inequality by Roderic N. Crooks. Charles Logan is hosting, so register now to reserve your spot!
Spring Book Clubs - The next two book clubs are now on the schedule and you can register for them on the events page. On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook. And on April 10th Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.
by Dan Krutka
This week is U.S. Media Literacy Week, and the National Association for Media Literacy Education (NAMLE) has a host of events and resources. Media literacy education has never been more critical as our shifting media landscape requires both old and new knowledge, skills, and dispositions for informed decision-making. For example, new(ish) approaches such as Michael Caulfield’s SIFT moves offer a framework that is responsive to our fast-paced internet environment. The steps are Stop, Investigate the source, Find better coverage, Trace the original context. As a New York Times story explained, what makes this framework responsive is that “The goal of SIFT isn’t to be the arbiter of truth but to instill a reflex that asks if something is worth one’s time and attention and to turn away if not.” In short, it’s designed to help us make quick decisions online, recognizing that we can’t—and shouldn’t try to—investigate every post we see. When you’re viewing hundreds, even thousands, of posts in a day, one of the most important skills you can learn is recognizing when you can’t quickly verify information and simply deciding not to interact with it… much less share it.
Of course, the rise of generative AI (GenAI) as a consumer product further complicates this picture. Evaluating sources has long been central to many literacy approaches, but what happens when GenAI products obscure their sources of information in corporate black boxes of proprietary training data? What old approaches still work when sourcing is hard or impossible? What new approaches are needed?
I was thus excited when, a couple of months ago, Marc Watkins wrote about how educators might address GenAI and mis- and disinformation on his Rhetorica blog. Toward the end of the post, he offered some educational suggestions that I’ll share here:
Last fall, I created an assignment to explore misinformation and disinformation using the Bad News game. The game was developed by the Cambridge Social Decision-Making Lab at Cambridge University and puts the user in the role of a social media monger, whose goal is to gain as many followers as possible by spreading conspiracy theories and false information. Along the way, the game shows students how algorithms that power social media reward their choices. The assignment I put together gives students a chance to process and reflect on how our choices impact the media landscape.
Bad News is about our socials before AI came on the scene, but it remains a powerful activity to help students. Luckily, there are a number of other games out there that can help students understand the nature of AI deep fakes and help them spot them:
Simple: Calling BS’s Which Face is Real?
A straightforward game that asks a user to guess if a face is real or generated by AI. It does a good job of introducing deep fake images using an older type of generative technology.
Scored: Microsoft’s Real or Not
A 15-question image guessing game that goes beyond shapes, using a series of sophisticated real and AI images that pose a challenge for a user. Unlike Which Face is Real, this game is scored and lets you know how you fared against others.
Timed: Google’s Odd One Out
A really challenging game that is timed to mirror the short attention a user gives an image they run across on social media. Not only is the ticking clock stressful, a user also only has a certain amount of guesses before they lose.
Using games can help young learners navigate a complex topic like AI-generated disinformation and deep fakes. But to teach students about generative AI’s impact, we have to first move beyond AI as cheating and ask young learners to analyze it as a tool, a process, one in desperate need of serious inquiry. If we don’t find more moments where we can slow things down and allow students the time to critically explore the rhetorical messages behind the information that slides effortlessly across their screens each day, then we’re missing an opportune chance for them to learn.
I appreciate his recommendations, and you should read his full post and consider subscribing to his Substack for thoughtful posts and updates on GenAI and education.
While I understand how the Bad News game can raise students’ consciousness about how social media algorithms and incentives work, I have been less sure about what is needed for the two AI “real or fake image” games to be educational. When I play these games, they do raise my consciousness about AI-generated images, but they also leave me feeling a bit helpless. I come away worrying: how can I tell if any image is real? One theme of media literacy education is to encourage students to be skeptical, not cynical, about media. How do we ensure such games empower students? What curriculum should accompany them?
I think we start by determining what media literacy approaches are still relevant, and what new approaches are needed, in the face of an AI-shifted information environment. There have been thoughtful AI Literacy curricula proposed such as Digital Promise’s AI Literacy Framework and AI4K12’s Five Big Ideas in Artificial Intelligence. Each framework emphasizes different aspects of AI education. How do these new AI Literacy frameworks further or modify media literacy approaches such as NAMLE’s core principles?
These questions are simple in neither theory nor practice, but they are topics educators must broach with students across subject areas, grade levels, and institutions. As recent news shows, we now have to respond to an online environment where AI-generated misinformation mingles with social media algorithms to affect emergency responses to hurricanes and, of course, the upcoming elections. I hope we can all use Media Literacy Week to start taking up this work in and outside of schools. Otherwise, we’ll be left helpless to determine what’s generated by humans and what by machines. And there are, unfortunately, bad actors hoping to capitalize on such truth decay.