Beane, M. (2024). The Skill Code: How to Save Human Ability in an Age of Intelligent Machines. Harper Collins. ISBN-13 9780063337794

Review by Jacob Pleasants

A decade ago, in The Glass Cage, Nicholas Carr raised concerns about how automated systems were eroding our skills as they made work and life tasks easier to complete. His illustrative example was the airline pilot, whose work had been deeply transformed by the increasing capabilities of automated copilot technologies. Pilots rarely had to undertake even basic operating maneuvers (e.g., during takeoff and landing), to the point where those skills had atrophied. But what happens when the automated systems fail and the pilots have to take over the plane? The loss of those skills has consequences.

Carr’s concern was about the atrophy of expertise in an age of automation. In The Skill Code, Beane’s concern is related, but comes at the issue from a different direction: in the age of automation, what happens to the learning processes of novices? Rather than worry about what happens to the skills of the expert, Beane looks to the ways that technologies have undermined long-established systems that allow people to become experts in the first place.

The central problem examined in The Skill Code is the unintended impact of technology in our places of work. This is a familiar enough theme in our Civics of Technology space, but Beane comes at things from a bit of a different perspective than you might be used to. His research uses workplace ethnography to understand how and why technologies play out on the ground and influence workplace dynamics - specifically those that affect workplace learning. As he examines technological issues, he does not dwell much on ethical problems, societal power structures, or broader social impacts. His is a decidedly narrower lens. There is nothing necessarily wrong with choosing a narrow focus, and I found his ideas to be interesting and informative. But at the same time, as I read his analyses and conclusions, I found myself asking, “But what about…?”

One salient example: Beane discusses the case of predictive policing (i.e., the algorithmic deployment of police officers in a city). His discussion of this case does not include harmful impacts on overpoliced communities or the biased nature of the algorithms. His interest lies in how algorithmic policing makes it far more difficult for rookie officers to learn how to get better. To that point, he makes a pretty compelling case. The algorithms that control and constrain rookie officers’ work make it quite difficult for those novices to ever gain much expertise. The inattention to the broader contours of this example was jarring, but I reminded myself that Beane’s goal is not to offer a broad critique of predictive policing or any other specific technology. He is putting forth a particular thesis about the effects of technology on the development of expertise — and that more modest objective is nevertheless one worth considering.

Beane’s focus also differs from what we often see in our Civics of Technology space in that it is concerned with the informal learning processes that occur “on the job” rather than with what occurs in formal schooling environments. He very briefly touches on education as a workplace, but it’s not one of the examples he examines at length. All the same, there are insights and provocations in this book that are relevant to formal education.

In short: in The Skill Code, Beane advances some provocative ideas about technology and workplace learning that are very much worthy of our consideration. At the same time, there are some missing elements that could make his analyses and insights more robust. To see how, let’s get into some of the details. I’ll summarize the main arguments and ideas that Beane puts forth in his book. From there, I will explore how we might use those points to better understand technologies in schooling.

The Argument

The central example that Beane returns to throughout the book is the medical training process. He specifically focuses on surgical training, the context in which he conducted the ethnographic field research for his dissertation, and it forms the backbone of his ideas. Like other medical specialties, surgery uses the “residency” model of training for medical school graduates. New residents are the novices, and “attendings” are the experts. The training process follows an apprenticeship model, one similar to those that have existed for thousands of years. Residents work alongside attendings, who give residents opportunities to do increasingly complex parts of the work (while providing plenty of coaching and guidance). This model may not always work perfectly (some attendings are better mentors than others!) but it is one that has stood the test of time for some good reasons (which Beane describes - more on that shortly).

Enter: robotic surgery technology. This is a technology of augmentation, not automation. A skilled surgeon operates the robotic components to carry out extremely precise procedures in confined spaces where their hands would not be able to reach. In this sense, it vastly augments what a surgeon is able to do in the operating theater. Surgery, of course, has always been a highly technological practice. And yet, Beane found that this particular technology badly disrupts the training process on which surgical practice has always relied. Why might that be?

In the traditional arrangement, residents aren’t just dead weight in the operating theater. True, they are learners, but they are also assistants. Attendings need the assistance of residents, and the teaching processes are embedded in collaborative work processes. Robotic surgery technology upends that relationship. The expert surgeon can now do the whole operation without their residents’ assistance. And so residents are relegated to mere onlookers, cut off from opportunities to actually learn the work processes for which they are supposedly being trained. Unlike in the traditional arrangement, if an attending wanted to actually help a resident learn the new technology (say, by handing over the controls for a period of time), that pedagogical goal would conflict with the goal of performing the surgery. Not surprisingly, attendings are not likely to want to hand over the controls to a novice and jeopardize the outcome for the patient.

Beane argues that this situation is not isolated to surgical training. Drawing upon ethnographic fieldwork from many different work environments (from warehouses to investment banks), he shows how “intelligent” technologies often dismantle the systems of workplace learning, even as they increase productivity in the short term. Unlike Carr, he finds that these technologies are often pretty good for the experts, in that they augment their capabilities. The problem, though, is that they also often separate the novices from the experts. The new work processes are also often confining and constraining for novices, preventing them from gathering skill-building experiences (e.g., the algorithmically-controlled police beat). The extreme case would be the Amazon warehouse worker whose work is completely controlled by an algorithm, and who thus has little opportunity to learn much of anything (besides how to be a good servant to the machine).

This, of course, is not a technological inevitability. Human work has always been technological, and many high-tech workplaces are also places that foster learning and growth. The key components that Beane identifies as necessary for workplace learning are Challenge, Complexity, and Connection. Here, he draws heavily from learning theorists who will be familiar to most educators: Vygotsky, Lave and Wenger, Deci and Ryan. The short version is that people learn when they are engaged in tasks that are appropriately Challenging (especially when they have the support and guidance of experts). They also learn when they can engage with the Complexities of their work environments (e.g., seeing how the parts of an organization are interconnected rather than focusing on their one small part within it). Finally, people need human Connections, not only because they learn from each other but also to maintain motivation and community. Technologies such as robotic surgery systems corrode all three of these components.

Beane does not think we need to reject these new systems, but rather change the way we think about how they are designed and deployed. All too often, the decision-makers around these technologies narrowly prioritize productivity, even if it sacrifices opportunities for human learning. Beane argues that we need to strike a better balance. Robotic surgery technology, for instance, could be designed and deployed such that we still see the benefits of the technology (better surgical outcomes) while also supporting the training of future surgeons. It’s not an either-or. But it does require that learning be prioritized in ways that it currently is not.

So, the message of The Skill Code is ultimately an optimistic one. And indeed, the final chapter of the book very much runs with that heady optimism… far more than I was expecting. Beane goes in for some pretty serious tech hype in the final part of the book, and I think the book would have been stronger without that final chapter. So, let’s just set that part aside. Instead, let’s take his insights and consider how they might help us think about technology and formal education.

What Does This Mean for Formal Education?

There are two ways we might apply Beane’s insights to education. The more obvious connection is to the learning processes of teachers: how might technology be changing the work environment such that novice teachers are less able to develop expertise? Two immediate observations come to mind:

  1. For traditionally certified teachers (an increasingly uncommon way to become a teacher, but that’s a different issue), we still utilize an apprenticeship model during the student teaching experience. It’s far from ideal (for one, it’s too brief), but it largely works well enough.

  2. On the other hand, once teachers enter the profession, it is not an environment that is very conducive to ongoing professional skill development - at least in the United States. While schools often have “mentorship” programs for novice teachers, the apprenticeship model that we see in student teaching is largely abandoned. 

As a point of departure, then, we might consider how technology affects novice teacher learning during student teaching or as part of ongoing inservice development. On the whole, I am less worried about student teaching. While there is plenty of pressure on teachers to use various technologies and platforms, I haven’t yet seen this erode the mentorship model in the ways that Beane observed.

I am much more concerned about inservice teacher learning, which is already rickety at best. Let’s consider for a moment how generative AI (e.g., MagicSchool, Khanmigo) might further undermine the already flimsy structures that support the development of teacher expertise. If you are a novice teacher and you need to, say, come up with a lesson or create a rubric or craft an email to a parent, these technologies promise to assist you with that. That seems great, as it meets your immediate needs. But suppose that you did not have access to technological assistance for these needs (and that includes things like TeachersPayTeachers and other non-generative AI tech). What would you do?

When I was a novice teacher, I sought the guidance of a more expert colleague. To be sure, this is far from perfect. Some highly experienced colleagues will give pretty lousy advice and guidance. But the point here is that these are the situations that bring experts and novices together. MagicSchool may supply novice teachers with (debatably) serviceable lessons in their moments of need, but at the cost of connection.

There are probably other technological systems that similarly impede expert-novice relationships in schools. This area is worthy of closer examination!

But so far, we’ve just looked at teacher learning. What about students? Beane’s building blocks of Challenge, Complexity, and Connection are certainly relevant for the formal learning environment of the classroom. We can thus ask how different education technologies influence these qualities. Personalized tutoring technologies are an interesting case. On the one hand, their advocates would emphasize their ability to provide appropriate Challenge by adapting to the abilities of a student. At least within certain domains, this might be true, but what of Complexity and Connection? And if the foundation of a classroom is the novice-expert relationship between student and teacher, how does inserting a tutoring technology disrupt that relationship?

I am reminded of the promises made by the creators of a recent AI system that claims to do twice the teaching in half the time: 

“Instead of having to spend time doing lesson planning and grading homework and lecturing, what our adults in the classroom are able to do is focus on making a positive impact on students by concentrating on motivational and emotional support.” 

If the teacher has been relegated to being a motivational and emotional support coach, where’s the apprenticeship relationship?

Here, though, I’d like to look at Beane’s underlying argument about the apprenticeship model a little more critically. While there are certainly examples where apprenticeships do indeed lead to the growth of novices into experts, Beane tends to describe apprenticeships in an abstract and idealized form. In the real world, it’s not just technology that disrupts the idealized expert-novice relationship. What is sold to novices as an apprenticeship arrangement can often turn out to be, in reality, one of servitude. That is not accidental. From a management perspective, there are many benefits to having servants rather than apprentices within an organization. Under a Taylorist ethos, the goal is for workers to be efficient, but there is little reason for them to expand their expertise and skill sets in ways that allow them to take on greater responsibility in the organization. The fact that the Amazon warehouse worker is utterly constrained in their learning is a feature, not a bug. Beane is right to point out how technologies have undermined what are intended to be genuine apprenticeship arrangements, such as those in medicine. Yet such apprenticeships are not the norm.

What is missing from Beane’s account is attention to the power dynamics that exist within workplaces. Put another way, in focusing on the psycho-social dimensions of technology, he misses the political dimensions. I raise this concern because there seems to be renewed rhetoric about pursuing more apprenticeship-like approaches to formal education — rhetoric tied to the notion of more deeply connecting schools to places of work. The proponents like to invoke the idealized image of the apprenticeship. But we ought to be wary. As Beane shows us, the new technologies of management and control can all but eliminate the learning opportunities that novices have in the workplace. But the managers might not see that as a problem at all.