Navigating AI in K-12 Education with a Critical Lens
Civics of Tech Announcements
Monthly Tech Talk on Tuesday, 12/03/24. Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, December 3rd, 2024, from 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. Learn more on our Events page, and register to participate. Going forward, we will use different accounts for the monthly registration, so be sure to register each month if you are interested in joining.
Join us on Bluesky: We have made the decision to quit using our Twitter account. We are going to give Bluesky a shot for those interested. Please follow us @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack. Civics of Tech board member Charles Logan (@charleswlogan.bsky.social) has recently shared a number of other critical tech starter packs that can help you get started if interested. Let's all just hope this platform doesn't get enshittified.
Book Club, Tuesday, 12/17/24 @ 8 EST - We’re reading Access Is Capture: How Edtech Reproduces Racial Inequality by Roderic N. Crooks. Charles Logan is hosting, so register now to reserve your spot!
Spring Book Clubs - The next two book clubs are now on the schedule, and you can register for them on the events page. On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook. And on April 10th, Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.
Call for Chapters: Civics of Tech contributor Dr. Cathryn van Kessel asked us to share the following call for chapters: "Rewiring for Artificial Intelligence: Philosophies, Contemporary Issues, and Educational Futurities." This edited collection seeks to explore how we can critically rewire our approaches to AI, ensuring that it not only integrates "seamlessly" into and across different educational contexts but also raises "unseemly" philosophical and ethical questions. This volume will focus, in part, on the rapidly evolving field of Artificial Intelligence and its implications for higher education, teacher education, and/or K-12 public schooling systems. Submit proposals here by November 29.
by Richard Zapien & Jennifer Elemen
Since the 2022-2023 school year, we've witnessed the rapid evolution of artificial intelligence (AI) from a technology rarely discussed outside computer science classrooms to a frequent topic of conversation in professional, social, and education circles. People have been captivated by the perceived "magic" of Generative AI, finding they could create fully developed essays, screenplays, lifelike images, songs, and poems in an incredibly short time. While undoubtedly exciting, these new technologies are accompanied by serious legal and ethical questions that critically conscious educators need to be prepared to address.
While acknowledging these concerns, teachers have leveraged chatbot technology to streamline their work. Numerous articles, webinars, and digital marketing campaigns have stressed the importance of using AI-driven tools to save teachers time by automating tasks such as developing lesson plans, evaluating student learning, and adjusting reading levels; in short, chatbots promise to lighten the teaching workload (Slagg, 2023). However, as professional learning facilitators, we wondered whether automating administrative tasks should be the sole objective for district, school, and teacher leaders grappling with how to integrate these new Generative AI tools in education. We also wondered whether using AI to provide targeted support to our neediest students is sound pedagogy. We posit that approaching the exploration and use of these technologies through a justice-centered framework gives us a more complete way to consider the use of AI in schools.
AI Policy Landscape
To provide needed TK-12 AI education policy guidance, the U.S. Department of Education released Artificial Intelligence and the Future of Teaching and Learning, calling upon education leaders "to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment" (U.S. Department of Education, Office of Educational Technology, 2023), and followed it with Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration (2024), which provides additional resources. Many states have subsequently issued their own AI guidance to support local school districts in crafting AI-use policies. Common recommendations include learning about AI to prepare students for future workforce needs, ensuring the ethical, safe, and responsible use of AI, and providing educators with professional development on AI literacy that stresses how AI works, how it may be used, and how its concepts connect to foundational computer science concepts (Roschelle et al., 2024). These documents also emphasize access and equity related to AI use and pose questions about data privacy, security, and safety while continuing to advocate for federal, state, and local ethical guidelines (Roschelle et al., 2024).
Applying a Critical Lens Toward AI in Education
Professor and researcher Amy J. Ko adds a critically conscious perspective to the Computer Science Teachers Association's (CSTA) learning standards, which govern what is taught within computer science, including AI as a subfield, and offers a way to conceptualize how we might apply a critical lens to analyze and explore the social impacts of computing technologies. While the CSTA standards guide educators to "Evaluate the ways computing impacts personal, ethical, social, economic, and cultural practices" (Computer Science Teachers Association, 2017), Professor Ko suggests that we "Critique how computing amplifies, centralizes, privatizes, and automates social processes in society, impacting individuals, communities, and culture" (Ko et al., 2022). Additionally, where the standards prompt educators to teach students how to "describe how artificial intelligence drives many software and physical systems" (Computer Science Teachers Association, 2017), Ko recommends that education leaders identify opportunities for teachers and students to "describe how artificial intelligence can automate complex human decisions while also encoding and amplifying bias" (Ko et al., 2022).
Researchers, scholars, and advocates for responsible AI have recently contributed to a Kapor Foundation guide designed to support educators in critically examining the impact AI technology has on individuals, their communities, and the world (White, Scott, & Koshy, 2023). The guide encourages us to consider the social impacts of AI, effectively developing a critical AI framework. It prompts us to think about how the rise of AI and other emerging technologies contributes to furthering the ‘digital divide’ for marginalized communities. Further, it asks “What is algorithmic bias, and what are its causes and implications?” and “How can individuals and communities address the harms of AI technologies?” (White, Scott, & Koshy, 2023). When posed by critically conscious education leaders, these questions may guide the creation of AI-use policies that safeguard student privacy, manage harms, and raise awareness of the ethical implications spurred by AI technologies.
During our sessions with education leaders over the past year, we engaged in thoughtful discussions about the potential use of AI in education. Our goal was to create a space for shared learning, dialogue, and critical analysis of AI tools, drawing from the guidance and resources cited above. Approaching these discussions with a techno-skeptical lens, we challenged the prevailing AI hype that often overshadows ethical considerations. We delved into structural concerns, including environmental racism, energy consumption (Bloomberg, 2024), data worker exploitation (Williams et al., 2022), and the "representational harm" stemming from biased AI datasets (Buolamwini, 2023). We also explored the algorithmic discrimination research of Safiya Noble, who once "Googled" the phrase "Black girls" and found that the top suggested sites were pornographic (Noble, 2018), demonstrating the exploitative and extractive dangers that must be countered. Deeper learning with critical GenAI literacy (Elemen, 2024) and Critical Race Algorithmic Literacies (Tanksley, 2024) prepares learners to analyze and counter the ways that anti-Blackness is programmed into AI.
To support leaders in developing their own critical AI literacy, we critically examined assertions made by tech startups, educational AI companies, and digital media regarding AI's potential to revolutionize, reimagine, or transform education. These claims often highlight educator benefits, including assistance with content development, differentiated instruction, assessment design, timely student feedback, personalized learning support, creativity enhancement, and administrative operational efficiency (Code.org et al., 2023). We discussed AI-focused learning opportunities offered by universities and online learning platforms that promise AI micro-credentials and certificates for launching new AI careers, enhancing project management skills, and learning basic AI concepts. Yet we wondered how many of these courses would include ethical debates, considering many were sponsored or co-sponsored by major tech companies.
Lessons from the Field: Criticality in Action
The following stories come from real education leaders serving students in TK-12 schools who tried and critically evaluated AI tools together in a community of practice.
Saúl, a STEM teacher leader, implemented a project that integrated Native American knowledge systems. Through this project, he aimed to combat the erasure of Native peoples in the United States and encouraged his students to explore the assets and achievements of this minoritized community. Using digital storyboards, students created narratives noting the strength and resilience of Indigenous tribes who relied on the land's natural resources to thrive. Once complete, students transcribed the content of their storyboards into AI chatbots and prompted the chatbots to "improve their texts," a feature in many AI-powered tools. Saúl was interested in critically analyzing chatbot outputs for their "capacity to reshape attitudes, beliefs, and knowledge" (Hobbs, 2021). While the chatbots revised the original texts, "westernized" frames became apparent: Native women were repeatedly described as subservient, and words like "canoes" were changed to "boats." As Saúl noted, Native children face a lack of belonging and cultural references in STEM education. How are they supposed to use AI tools when those tools threaten further erasure?
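Educators who want to try a version of Saúl's audit themselves could compare a student's original text with a chatbot's "improved" version and flag what was erased. Below is a minimal sketch in Python using only the standard library; the sample sentences and the watch list of culturally specific terms are hypothetical stand-ins, not Saúl's actual classroom materials.

```python
# A minimal sketch of Saúl's audit: compare a student's original storyboard
# text with a chatbot's "improved" version and surface word substitutions.
# The sample sentences and watch list below are hypothetical examples.
import difflib

original = "The tribe traveled the river in canoes built from cedar."
revised = "The tribe traveled the river in boats built from cedar."  # chatbot output

# Show word-level differences between the two versions.
diff = list(difflib.ndiff(original.split(), revised.split()))
substitutions = [token for token in diff if token.startswith(("- ", "+ "))]
print(substitutions)  # ['- canoes', '+ boats']

# Flag culturally specific terms the chatbot removed.
watch_list = {"canoes", "cedar", "tribe"}
removed = {t[2:].strip(".,") for t in diff if t.startswith("- ")}
print("Erased terms:", watch_list & removed)  # {'canoes'}
```

The point of a sketch like this is not automation for its own sake; it simply makes the chatbot's substitutions visible so that students and teachers can discuss why a model replaced culturally specific language with "westernized" terms.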
Suzanne, an elementary school teacher in the Bay Area, California, develops and teaches a media/AI literacy curriculum to her students. Despite the challenge of finding time for these discussions, she feels compelled to educate her students about the persuasive potential of deepfake videos and other media. For Suzanne, the ability to distinguish between factual and false content is essential for fostering critical media consumers. In her classroom, students engage in critical analysis of stories, images, and videos. She introduces concepts such as "fake," "copy," "propriety," and "privacy" while examining both print and digital artifacts. These artifacts include the MIT AI-created video of President Nixon delivering a eulogy in the hypothetical scenario that the Apollo 11 moon landing ended tragically. During class discussions, students debate what constitutes fakery and when it is appropriate to copy others' work, whether in digital or physical form. Despite her students' elementary age, Suzanne emphasizes teaching about AI-generated content because of the proliferation of cell phone apps and websites that make it easy to create deepfake videos. As a leader in her school, Suzanne looks forward to reading district guidance to help her and other teachers empower students with greater digital awareness.
Exploring the potential impact of AI on teaching and learning for her lower-performing students, Anne, a middle school principal, set out to assess how an AI assistant could enhance academic growth at her school. Given the historical underachievement of some students, she aimed to determine whether AI tools could genuinely address systemic inequities and personalize learning, as is often advertised. Anne also worried that students and families with greater resources might gain advantages in using and understanding AI unless she could bring the technology to all of her students.
Before presenting her ideas to her teachers, Anne tested an AI assistant herself. She was impressed by its lesson planning potential, which promised to create “quality, structured lesson plans for any subject, lesson, or concept.” While building a sample Civil Rights unit plan, Anne found it remarkably easy to create lessons, envisioning it as a valuable tool for teachers exploring new and unfamiliar content. However, she was disappointed that not all subjects were available, and she worried about the quality of the linked civil rights resources. Anne wanted the unit to incorporate primary sources and historical photos to encourage student reflection, but the lessons lacked the critical discussions she had anticipated. Despite her interest in using AI technology to support her students, Anne realized that her best teachers were essential for supporting the highest-needs students. She couldn’t entrust that responsibility solely to a digital companion. She knew that warm and caring human rapport was crucial for creating a learning environment for all students to thrive.
Following the release of OpenAI's ChatGPT, a small school district in California organized a Generative AI task force. Its goal was to share guidance and resources, bringing the district community together to learn what Generative AI is and how it works, and to explore ways the district might support staff, students, and families in understanding, using, and learning about Generative AI's potential and its limitations. We collaborated with the superintendent and district leaders during a board study session "to deepen the (school) Board and community's understanding of the current and ever-evolving state of artificial intelligence (AI) tools and platforms and some of their current uses by our students and schools" (Booker, 2024).
During the event, board members and district leaders engaged in discussions about five reasons why the district needed to stay informed about AI technologies. These reasons included Generative AI's potential to personalize learning opportunities, support administrative tasks, aid curriculum development, and provide valuable insights into teaching and learning. These considerations would inform the creation and adoption of a responsible AI-use policy addressing important questions related to student privacy, algorithmic biases, and ethical concerns associated with the social and environmental impacts of Generative AI.
Recognizing the growing climate anxiety among middle and high school students (Will, 2022), particularly related to global warming, district leaders also raised climate-related questions. They focused on the water and energy demands of the cloud-based data centers used to power and cool AI systems. They believe it is essential to facilitate discussions with students about how AI technologies intersect with environmental concerns that students are already passionate about. District leaders also expressed interest in engaging students to explore how AI-powered data analysis can enhance our understanding of climate trends and how AI could contribute to creating devices that mitigate the impacts of a warming planet. These analyses of AI allow students to delve into topics that genuinely interest them and approach technology from a designer's perspective. By researching ways to mitigate climate impacts, students can become stewards of climate action.
A Leader’s Role
Fostering critical thinking doesn't depend on the AI tool itself; it hinges on how education leaders create safe learning spaces for exploring and discussing AI models. Leaders should carefully review federal, state, and local Generative AI guidelines, considering alignment with their values. Assessing students' current access to AI literacy and computer science education within their school contexts is crucial. Leaders may also wish to leverage the knowledge and skills of computer science, digital media, and technology teachers to share the history of AI, where AI is present in our daily lives, and how an understanding of AI can potentially lead to college and career success.
Simultaneously, leaders must ground explorations and discussions of AI and Generative AI models in critically conscious frameworks. These frameworks allow us to “interrogate the ethical and equitable development, deployment, and impacts of AI, while simultaneously challenging, disrupting, and remedying the harms that these technologies can produce within individuals’ lives, communities, and society at large” (White et al., 2023). Whether leaders consider themselves techno-optimists or technophobes, they must actively facilitate critical discussions about the role AI should play in schools, communities, and society.
References
Antoniak, M. (2023, June 22). Using large language models with care. AI2 Blog, Medium. https://blog.allenai.org/using-large-language-models-with-care-eeb17b0aed27
Bloomberg. (2024). AI is already wreaking havoc on global power systems. https://www.bloomberg.com/graphics/2024-ai-data-centers-power-grids/
Booker, R. (2024, May 16). Board study session: Artificial intelligence.
Broussard, M. (2023). More than a glitch: Confronting race, gender, and ability bias in tech. MIT Press.
Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House.
Code.org, CoSN, Digital Promise, European EdTech Alliance, Larimore, J., & PACE. (2023). AI Guidance for Schools Toolkit. teachai.org/toolkit
Computer Science Teachers Association. (2017). CSTA K–12 Computer Science Standards, Revised 2017. https://csteachers.org/k12standards/
Elemen, J. E. (2024, October-November-December). Teaching critical GenAI literacy: Empowering students for a digital democracy. Literacy Today. International Literacy Association. https://publuu.com/flip-book/24429/1497535/page/62
Finley, T. (2023). 6 ways to use ChatGPT to save time. Edutopia. https://www.edutopia.org/article/6-ways-chatgpt-save-teachers-time/
Furze, L. (2023). Teaching AI ethics. https://leonfurze.com/2023/01/26/teaching-ai-ethics/
Furze, L. (2023). AI Q&A: Anna Mills on balancing the critical and creative aspects of generative AI. https://leonfurze.com/2023/11/13/ai-qa-anna-mills-on-balancing-the-critical-and-creative-aspects-of-generative-ai/
Furze, L. (2023). Hands on with AI audio generation: GAI voice, music, and sound effects. https://leonfurze.com/2023/09/25/hands-on-with-ai-audio-generation-gai-voice-music-and-sound-effects/
Hobbs, R. (2021). “A most mischievous word”: Neil Postman’s approach to propaganda education. Harvard Kennedy School Misinformation Review. https://misinforeview.hks.harvard.edu/article/a-most-mischievous-word-neil-postmans-approach-to-propaganda-education/
Ko, A. J., Beitlers, A., Wortzman, B., Davidson, M., Oleson, A., Kirdani-Ryan, M., & Druga, S. (2022). Critically Conscious Computing: Methods for Secondary Education. https://criticallyconsciouscomputing.org/
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://www.cell.com/patterns/fulltext/S2666-3899(23)00130-7
Muhammad, M. (2023). Unearthing joy: A guide to culturally and historically responsive teaching and learning. Scholastic.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
Roschelle, J., Fusco, J., & Ruiz, P. (2024). Review of Guidance from Seven States on AI in Education. Digital Promise. https://doi.org/10.51388/20.500.12265/204
Slagg, A. (2023). AI for teachers: Defeating burnout and boosting productivity. EdTech. https://edtechmagazine.com/k12/article/2023/11/ai-for-teachers-defeating-burnout-boosting-productivity-perfcon
Tanksley, T. C. (2024). “We’re changing the system with this one”: Black students using critical race algorithmic literacies to subvert and survive AI-mediated racism in school. English Teaching: Practice & Critique, 23(1), 36–56. https://doi.org/10.1108/ETPC-08-2023-0102
U.S. Department of Education, Office of Educational Technology. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Washington, D.C. https://www2.ed.gov/documents/ai-report/ai-report.pdf
U.S. Department of Education, Office of Educational Technology. (2024). Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration, Washington, D.C. https://tech.ed.gov/education-leaders-ai-toolkit/
White, S. V., Scott, A., & Koshy, S. (2023). Responsible AI and Tech Justice Guide. Kapor Foundation. https://kaporfoundation.org/wp-content/uploads/2024/01/Responsible-AI-Guide-Kapor-Foundation.pdf
Will, M. (2022). Teens are struggling with climate anxiety. Schools haven’t caught up yet. Education Week. https://www.edweek.org/leadership/teens-are-struggling-with-climate-anxiety-schools-havent-caught-up-yet/2022/12
Williams, A., Miceli, M., & Gebru, T. (2022). The exploited labor behind artificial intelligence. Noema Magazine. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
Authors
Richard Zapien is a justice-minded education leader who facilitates professional learning in the Bay Area and throughout California. He has served as a classroom teacher, instructional coach, principal, and district administrator. Most recently, he led the California Computer Science Project, which supports education leaders in creating the systems and structures necessary to launch equitable CS in schools and districts throughout California.
Dr. Jennifer Elemen is an award-winning education leader who designs and facilitates professional learning. She has served in roles from teacher to director at the classroom, school, district, county, region, state, and higher education levels, as well as president of the California Council for the Social Studies, CLEAR AI steering committee member, and BeGLAD certified agency trainer.
The views and opinions expressed in this article are those of the writers and do not necessarily reflect the views or positions of any entities they represent. Names of professional learning participants have been changed to protect their privacy.