Governance and the Civics of Artificial Intelligence

Civics of Tech Announcements

  1. Next Book Club this Tuesday, 12/19/23: We are discussing Blood in the Machine: The Origins of the Rebellion Against Big Tech by Brian Merchant, led by Dan Krutka. Register for this book club event if you’d like to participate.

  2. Next Monthly Tech Talk on Tuesday, 01/09/24: Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be Tuesday, January 9th, 2024, at 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. Learn more on our Events page and register to participate.

  3. Critical Tech Study Participation: If you self-identify as critical of technology, please consider participating in our study. We are seeking participants who hold critical views toward technology to share their stories by answering the following questions: To you, what does it mean to take a critical perspective toward technology? How have you come to take on that critical perspective? You can participate via OUR SURVEY, and you are welcome to share it with others. Thank you!

by Scott Alan Metzger, Penn State University

Until fairly recently, most people probably considered artificial intelligence (AI), when they thought about it at all, to be something out of science fiction. This changed with the sudden emergence of generative AI widely available to the public, most notably ChatGPT. In what felt like short order, AI went from imaginary sci-fi bogeyman to something real with unknown potential to radically change human society. Within its first year of public availability, millions of Americans experimented with using AI. American corporations adopted it, too, and began replacing a growing number of formerly human jobs.

Of course, the suddenness is just a matter of perception. Research into making computers capable of learning and of intelligence equaling (or exceeding) the human mind has been going on for decades, since the days of Alan Turing and his famous test (in which machine intelligence was judged by how long it took a human being to determine that its messages were not coming from another person). In 1996, IBM’s supercomputer “Deep Blue” won a game against the reigning world chess champion for the first time. In 2011, IBM’s “Watson” soundly beat two human all-time champions on the TV game show Jeopardy! Perhaps people shouldn’t have been so shocked when the deep-learning text-to-image model DALL-E and the chatbot ChatGPT (Generative Pre-trained Transformer) were made available to the public in 2022. On the other hand, such generative AI models represented an incredible level of improvement in only a decade.

It is already clear to many people that artificial intelligence is one of the most important and potentially disruptive new technologies ever to emerge. Education will, sooner rather than later, have to deal with it. Those who follow the “technoskepticism” of the Civics of Technology community know that human history is replete with disruptive technological changes that transformed human life and society in ways that can be seen as both beneficial and harmful, at least for some. It would be an educational mistake to view AI as completely different. At the same time, few (if any) previous technological changes have had so much global impact so quickly.

Prior technologies have augmented human abilities and replaced particular human roles or functions, but for the first time a technology has the potential to supersede human thinking, the species’ one edge over all other animals that allowed it to dominate the planet. Or, for those who prefer a spiritual metaphor, true artificial general intelligence may represent a mind more like God than humankind. History education can help us understand how humans got to this point but not where we may go from here. The future of human society under AI is a crucial topic for civics.

Background: Rise of OpenAI and Struggle for Control

Both DALL-E and ChatGPT are products of OpenAI, an initially non-profit research organization funded by donations, with a founding board of technology luminaries that included Sam Altman and Elon Musk. Its mandate was to support development of “safe and beneficial” artificial general intelligence: autonomous machine thinking that can outperform humans in economically valuable work. After disagreements over leadership whose details remain unclear, Musk departed the board in 2018. In 2019, Altman was named CEO of the company as it spun off a for-profit subsidiary (OpenAI Global) to attract outside venture investment and reward employees with a stake in the company through stock options. Altman and his supporters argued that a non-profit organization alone couldn’t compete with the deep pockets of tech corporations such as Google, Facebook (now Meta), and Microsoft. Critics responded that for-profit incentives ran counter to the goal of democratic access and put at risk the commitment to safe development of AI.

OpenAI did make initial versions of DALL-E and ChatGPT freely available to the public, but in limited amounts. By late 2023, new sign-ups were temporarily suspended to cope with demand. At the same time, OpenAI Global under Altman’s leadership entered into a multi-billion-dollar investment deal that was rumored to give Microsoft potentially a 49% stake in the company. These funds enabled OpenAI Global to begin acquiring other tech startups that would improve its products. Fears arose that the overlap with Microsoft would push OpenAI’s products toward rapid commercialization, and some board members resigned to avoid what they described as conflicts of interest with Microsoft. Clearly OpenAI was changing, and not in a way everyone approved of.

Then in November 2023, OpenAI became embroiled in a highly public but mysterious internal power struggle. The organization’s board, which is charged with ultimate oversight of its leadership and direction, suddenly fired CEO Sam Altman. The surprise announcement vaguely cited a deliberative review into deceptive communication, along with claims of abusive behavior from anonymous former employees. OpenAI’s chief technology officer was immediately named interim CEO, but it soon became clear she did not support the leadership change, and she left the position within days. With Altman’s supporters on the board resigning, the remaining members tried to appoint yet another CEO. The next day, more than 90% of OpenAI’s employees signed a letter threatening to resign and follow Altman to a new venture he was planning with Microsoft if the rest of OpenAI’s board didn’t step down.

It seemed there would be virtually no OpenAI left if the rump board persisted. Behind the scenes, Altman’s supporters (or surrogates, or allies?) negotiated for his return. One board member who had advocated for the firing recanted and defected to Altman’s side. Altman appeared to have the upper hand and required reconstitution of the board as a condition of his return. He was formally reinstated as CEO after only five days. The chief technology officer who supported him returned to her position. The new board consisted mostly of members approved by Altman.

Civics: Who Controls Technological Change?

The confusing events of November 17–22, 2023, at OpenAI are interesting not only for their byzantine opaqueness, akin to a power struggle in a historical royal court. What actually happened in this supposedly non-profit organization matters. Was this a quashed effort by the board to stop a CEO deviating from the organization’s mission, a failed “coup” by Altman’s rivals, or political jujitsu by Altman to remove a board standing in the way of his plans? It is proving impossible to know for certain, since there is no real transparency. The board gave little public explanation for firing Altman and virtually no details. Most of what the public knows played out on Twitter (or X, now owned by former OpenAI board member Elon Musk), as various participants tweeted cryptic or hardly disinterested statements.

The conflict inside OpenAI may reflect what multiple observers have called the two camps of artificial intelligence. “Accelerationists” believe in the positive transformative (even transhumanist) value of AI and hold that it is best to accelerate development toward artificial general intelligence as efficiently as possible. “Doomers” (so pejoratively named by their opponents) believe in the risks that inadequately controlled AI poses to humanity and hold that artificial general intelligence must be developed cautiously. One possible interpretation of the battle for OpenAI is that those who controlled the old board believed Altman and his followers had become dangerous accelerationists. Altman and his supporters, then, purged the board of hostile doomers. Again, how can the public know for certain when there is so little transparency?

This is what turns the OpenAI conflict and the future of artificial intelligence into a crucial topic for civics education. With something as important and potentially disruptive as AI, who should regulate its capacity and the pace at which it is developed? This leads to a difficult tension. If regulation is too lax, development may outpace safety; if regulation is too locally restrictive, bad-faith actors in other regions could accelerate development there in hopes of acquiring first-mover advantages. Given what happened at OpenAI, it seems unsatisfying to a democratic society that reconciling such tensions should be left entirely in the hands of a small number of actors within a global capitalist industry.

Here it is important to point out that even accelerationists can recognize the need for regulation. Sam Altman was one of a group of leaders in AI development who called for the creation of an international regulatory agency for AI along the lines of the International Atomic Energy Agency (IAEA). The IAEA helped limit the proliferation of nuclear weapons for decades, though moves in recent years by North Korea and Iran may cast some doubt on its continued effectiveness. Is the UN today capable of creating a new agency that could provide effective oversight of a technological change as potentially powerful as AI in a world so divided between competing blocs of interests?

Finally, the conflict inside OpenAI can be viewed through the lens of global capitalism. The neoliberal world system championed by the US in the decades after World War II gave rise to transnational corporate organizations and flows of capital larger than the economies of most of the planet’s nation-states. CEOs of such corporations wield a kind of power greater than that of most political leaders. Legally, it is the corporation’s board that is supposed to exercise oversight of all executive officers and their activities for the good of the company and its investors. For a non-profit, oversight is supposed to protect an even broader public good. At OpenAI, the board, whether right or wrong, was prevented from exercising its discretion by a CEO who very clearly had consolidated almost total support within the company. The vast majority of employees would have left and shut down OpenAI rather than lose Altman. Perhaps they truly loved Altman and felt his sudden ouster was unfair, but it should at least be kept in mind that these employees benefited financially from Altman’s leadership of a for-profit OpenAI Global.

This leads to a compelling question for civics education: If a corporate board isn’t allowed to exercise control over a transnational corporation involved in powerful technology, who can?
