In Defense of That Which Moves Slowly: AI Regulation (or Lack Thereof) and its Implication for EdTech

Civics of Tech Announcements

  1. Next Tech Talk on March 4th: Join us for our monthly tech talk on Tuesday, March 4th from 8:00-9:00 PM EST (GMT-5). Join an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.

  2. Upcoming Book Clubs: You can register for these upcoming book clubs on our events page.

    1. On March 13th, Jacob Pleasants and Dan Krutka will be leading a discussion of Nicholas Carr’s new book, Superbloom: How Technologies of Connection Tear Us Apart.

    2. On April 10th, Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.

  3. Need More Books? Check out our most recent book reviews.

  4. Be sure to join us on Bluesky @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack

By Erin Anderson

Recently, I watched September 5, an Oscar-nominated 2024 film about ABC’s coverage of the 1972 Munich Olympics, where ABC’s sports journalists found themselves in the middle of the crisis as Israeli athletes were taken hostage. A key moment in the film comes when Geoffrey Mason, a young producer played by John Magaro, must decide whether to broadcast incoming reports that the hostages have been freed. Some journalists urge Mason to wait for multiple confirmations. However, Mason, desperate for ABC to be the first network to broadcast the scoop, directs Peter Jennings to report the good news. As the scale of the hostage tragedy becomes clear, though, ABC has to retract the report.

This tension between being first and moving more slowly through a process grounded in trust and verification is taking on newfound relevance for me. I’m thinking of it as thousands of federal workers are laid off, including those whose jobs focus on verifying the work in their fields so the general public can trust it, from food safety experts at the Food and Drug Administration to financial regulators at the Consumer Financial Protection Bureau. The next regulatory workers rumored to be on the chopping block are employees of the US AI Safety Institute (AISI). This relatively new institute, housed within the National Institute of Standards and Technology, was created following Biden’s 2023 Executive Order 14110, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Biden’s AI EO highlighted the need for AI safety standards, civil rights protections to prevent AI discrimination, privacy safeguards to protect personal data, and guidelines for federal agencies to ensure the equitable use of AI.

It all sounds good to the average consumer of AI-integrated products, right? Especially to those of us with children in schools using generative AI-powered educational technology like Khan Academy’s Khanmigo, one of the most popular children’s educational tools built on OpenAI’s GPT models.

However, one of Trump’s first executive orders was to revoke Biden’s AI EO. Instead of creating guardrails for AI development and deployment, Trump’s own AI EO called those safety standards “barriers” to innovation and those civil rights protections “ideological bias or engineered social agendas.” The AI experts at the newly created US AI Safety Institute were then excluded from the US delegation to the 2025 AI Action Summit in Paris. Now, those AI regulators await termination. At the summit itself, where 57 countries signed a declaration prioritizing ethical and trustworthy AI, the United States and the UK were the only two attending countries that refused to sign it. AI experts at the University of Oxford tied America’s refusal to JD Vance’s speech, which argued against prioritizing AI safety and insisted on prioritizing the “opportunities afforded by AI.”

Again, I think about the tension between slow-moving processes and the rush to be first. Then I think about our children and their exposure to technology. What could be more important than ensuring children's products are trusted and verified for safety? 

There are organizations dedicated to reviewing children's technology, such as Common Sense Media. Until recently, major producers of generative AI, like OpenAI, had also partnered with the US AI Safety Institute to research and test their products. While Common Sense gave Khanmigo a four-star review, it noted that Khanmigo sometimes amplified or reinforced biases and stereotypes. The major producers of generative AI themselves, including OpenAI, Google DeepMind, and Elon Musk’s xAI, received low, even failing, grades.

With these foundational generative AI models receiving such terrible scores, is now the time to start deprioritizing trust and safety in the name of progress, especially as these models become integrated into children’s technology? Rather than devaluing safety, some AI experts at the 2025 Paris AI Summit called for a moratorium on developing generative AI. At the very least, they recommended creating an AI safety regulation process similar to the drug approval process. While a moratorium on generative AI development is unlikely in the US, stricter regulations on models integrated into children’s products would likely garner more support, especially given the harm generative AI has already caused.

For example, Character.AI, an innocent-enough-seeming generative AI app, lets people converse with AI-created pop culture characters. The company is now being sued for allegedly contributing to the suicide of a fourteen-year-old. Sewell Setzer, from Florida, reportedly fell in love with an AI chatbot modeled on Daenerys Targaryen, a main character from Game of Thrones, after having “highly sexualized conversations” with her, before taking his life to be closer to her. While Character.AI is not educational technology, this tragedy highlights the need for further investigation into generative AI safety, especially in products involving children. If the companies making generative AI further deprioritize safety, as current indicators suggest, a moratorium on integrating their models into educational technology could become necessary. Consider Meta, which has already received an F safety rating and is rolling back its content moderation policies: how might those changes affect the training of its generative AI models and, in turn, the educational products integrated with its AI? Even if big tech is willing to set aside the concerns of AI experts, it may be less willing to do so under organized pressure from concerned educators and parents.

Trusting and verifying things may take time, but these processes are crucial to making products safe for American consumers. This is especially significant when it involves children. Sure, tech operates with a “move fast and break things” ethos. However, we cannot afford to let our children be broken in this mad dash to be first.
