Narayanan, A., & Kapoor, S. (2024). AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Princeton University Press. ISBN-13: 9780691249131
Review by Jacob Pleasants
AI Snake Oil aims to provide a foundational set of concepts and tools that we can use to critically appraise AI technologies and the rhetoric that surrounds them. This is an extremely timely goal, especially for those of us watching the aggressive marketing of AI in educational spaces. When AI hype is so abundant, how can we tell which of the claims (if any) have merit, and which are bullshit?
Two quotes from the introductory chapter of AI Snake Oil give a pretty good flavor of what Narayanan and Kapoor are up to in this book:
“AI snake oil is AI that does not and cannot work as advertised. Since AI refers to a vast array of technologies and applications, most people cannot yet fluently distinguish which types of AI are actually capable of functioning as promised and which types are simply snake oil. This is a major societal problem: we need to be able to separate the wheat from the chaff if we are to make full use of what AI has to offer while protecting ourselves from its possible harms, harms which in many cases are already occurring.” (pp. 2-3)
…
“The goal of this book is to identify AI snake oil—and to distinguish it from AI that can work well if used in the right ways. While some cases of snake oil are clear cut, the boundaries are a bit fuzzy. In many cases, AI works to some extent but is accompanied by exaggerated claims by the companies selling it. That hype leads to overreliance, such as using AI as a replacement for human expertise instead of as a way to augment it.
Just as important: even when AI works well, it can be harmful, as we saw in the example of facial recognition technology being abused for mass surveillance. To identify what the harm is and how to remedy it, it is vital to understand whether the problem has arisen due to AI failing to work, or being overhyped, or in fact working exactly as intended.” (p. 28)
A few points in these quotes are worth highlighting. Importantly, Narayanan and Kapoor are not categorically against AI technologies; in fact, they often point out that they find generative AI rather useful in their own work. What they are against are the bogus claims that permeate discussions about AI. This begins with the very term itself: “AI” is woefully imprecise and is used to refer to technologies that are, in fact, quite different. They draw a distinction between generative and predictive AI technologies in particular, while noting that other “types” of AI exist as well. Another point to highlight is that they argue for a complex understanding of what it means for an AI system to “work.” Even if the technology functions as designed in a technical sense, it may not “work” in terms of bringing about desirable outcomes.
So with all that said, how much of what is out there in AI land is, in fact, nothing more than snake oil?
Narayanan and Kapoor come down quite hard on predictive AI technologies. These are technologies that claim to make evidence-based predictions about future events (e.g., the probability that someone will commit a crime while on bail, the likelihood that a job seeker will do well in a position) so that humans can make better decisions about them (e.g., how to set bail, whether to offer a job). They argue that the current performance of these technologies is often far poorer (and more biased) than their developers claim. But more importantly, those limitations are not temporary glitches that can be fixed with further technological development. In many cases, the advertised predictive goal is simply unattainable, for a variety of reasons. For instance, once people know that decisions are being made by predictive models, they will alter their behavior to account for that - thus invalidating the model’s predictions. Given all these problems, it is perhaps not surprising that developers of predictive AI tend not to be very transparent or to allow external audits and evaluations of their technology.
They are more optimistic about the capabilities of generative AI, although there are plenty of false claims in this space as well. While there is much that generative AI can do, exaggerations and unwarranted extrapolations are abundant. For example, when a chatbot passes a medical or legal exam, we quickly hear claims about how AI can now do the work of a doctor or lawyer - as if the ability to answer questions on an exam is even remotely similar to doing the professional work. They are especially leery of the wild claims made about “artificial general intelligence” and existential threats to humanity.
To make their case and advance their critiques of these AI technologies, Narayanan and Kapoor have to get into some technical weeds. Yet the book remains accessible to a lay audience throughout. You don’t need any background in computer science to follow their arguments, and I found their technical explanations to be an essential part of the book. More broadly, the book is very well researched and referenced, and I often found myself diving into their sources for further exploration of the topics they discuss. The reference list in this book is full of great reading.
If you’ve already spent a lot of time reading critical work about AI, you’ll see plenty of familiar examples and arguments. But you’ll see some new perspectives and examples as well. And you will walk away with some new tools to think with.
So, What About AI Snake Oil in Schools?
AI Snake Oil covers a wide range of sectors, and education contexts come up a few times in the book. For instance, they begin their chapter on predictive AI with an example from higher education. Specifically, a technology called “EAB Navigate” aims to help college administrators predict which students are likely to drop out. Even if EAB Navigate were to work as advertised (highly unlikely, according to Narayanan and Kapoor), there would still be myriad problems:
“In its marketing pitch to schools, EAB claimed: “The model will provide your school and its advisors with invaluable and otherwise unobtainable insight into your students’ likelihood of academic success.” Even if some schools might use this insight to pressure students to leave, others could conceivably use it to design interventions that might help students stay in school. But interventions that seem helpful could also be questionable. For example, the tool helps by recommending alternative majors in which a student would be more likely to succeed. This might have the effect of driving out poorer and Black students—whom the tool is more likely to flag—from more lucrative but more challenging STEM majors. And throughout this process, students may have no idea that they are being evaluated using AI.” (p. 37)
Naturally, I would love to see further discussion of AI snake oil in education, as it is a space that is rife with hype and marketing. Narayanan and Kapoor actually speak to this a bit further on their website, which includes an excellent dissection of hype-ridden journalism for EdTech. Yet even if they don’t address educational AI technologies extensively, their ideas are pretty readily applicable.
Their critique of predictive AI systems is especially relevant for educational contexts. Predictive AI is all about making inferences based on quantitative data - which is exactly what “learning analytics” technologies intend to do. As long as we gather plenty of student data, we are promised, we can identify students who are at risk of failure and deliver “personalized” instruction. Or, at the very least, we can use that ubiquitous data collection as assessment data to measure performance. The conceit here is that these data-driven systems will make far better judgments and decisions than human teachers - just like the predictive AI examples Narayanan and Kapoor dissect in AI Snake Oil. Generative AI, too, is worthy of our scrutiny. For instance, just because an AI chatbot (e.g., Khanmigo) can impersonate a tutor does not mean that it can actually do all of the things that a competent tutor does.
AI Snake Oil, it turns out, is pretty good medicine for our current moment.