Children Writing with LLMs: A Problematization

Civics of Tech Announcements

  1. Monthly Tech Talk on Tuesday, 06/04/24: Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, June 4th, 2024 at 8-9pm EDT/7-8pm CDT/6-7pm MDT/5-6pm PDT. Learn more on our Events page and register to participate.

  2. 3rd Annual Conference Announcement: We are excited to announce that our third annual Civics of Technology conference will be held online on August 1st, 2024, from 11am-4pm EDT, and on August 2nd, 2024, from 11am-3pm EDT! Our featured keynotes will be Dr. Tiera Tanksley and Mr. Brian Merchant. You can register, submit proposals, and learn more on our 2024 conference page. Proposals are due by June 14th!

  3. June Book Club on Tuesday, 06/18/24: For our next book club, we will read Mary Shelley’s Frankenstein! The book club will be led by Dr. Marie Heath. We will meet at 8pm EDT on Tuesday, June 18th, 2024. You can register on our events page. Another summer book club is coming too.

by Mary Rice (maryrice@unm.edu), Associate Professor of Literacy, University of New Mexico

When Generative AI (GenAI) tools such as Large Language Model programs (LLMs) became available for public use in late 2022, schools started to grapple with their capabilities (Kasneci et al., 2023). For example, LLMs generate content such as short stories and dialogues based on brief instructions given by users (Topsakal & Topsakal, 2022). LLMs may be deemed to have educational potential because they are believed to support children in generating or improving text. For children writing in home or school settings, the appeal might be strong because children need specific writing instruction to develop writing skills, yet little such instruction is available in school settings (Barrett et al., 2020; Harris & McKeown, 2022).

Research about LLMs for school-aged children, including elementary-aged students, has focused on how these technologies might be used to enhance and personalize learning (Chen et al., 2020; Hwang et al., 2020; Zhang & Aslan, 2021). According to Boninger et al. (2019), personalized learning within educational technology draws on principles from the scientific management movement of the 1920s, with some refinement from current technology entrepreneurs. Historically, these so-called personalized pathways have resulted in limited learner choices. For example, learners might be able to choose the order in which to study certain skills, select from a narrow range of topics for a required exercise (e.g., would you like to do math problems about trains or pizza?), or pick from a menu of assessment options (e.g., write a response or record an oral speech). Even in these scenarios, children must follow a prescribed learning course. Boninger et al. (2019) warned that educational technology vendors may become more interested in designing products that prioritize assigning children as much screen time as possible over any other factors in their learning and well-being.

Another problem with engaging with GenAI solely or mostly to do prescribed tasks to learn specific skills from curricular lists is that it might work against students’ agencies with technologies. Such engagement also has the potential to exploit already underserved populations (Rice & Dunn, 2023; Selwyn, 2022). Thus, educators planning the future of GenAI in education must attend carefully to acknowledging and supporting the agencies of learners in classrooms. This is especially important as young children develop identities as writers. In this blog post, I discuss the research base around LLMs for younger children alongside the conception of agencies that might be limited or afforded to young children. I end with some thoughts about what might be done to support children’s identities and agencies as writers.

Small Research Base for Young Children and LLMs

Most previous literature about GenAI use with LLMs has focused on their potential for use in higher education, especially in disciplines like medicine and law (Lo, 2023). When LLMs are used in settings with school children, high school students are the most common participants (Lo, 2023). This is because many LLMs are recommended for children 13 and older. However, as these technologies increasingly come into schools, these age recommendations may be relaxed. This has already happened: the recommended age for ChatGPT use dropped from 18 to 13 (Miller, 2023).

The research that exists about elementary-aged school children has focused on the feasibility of using LLMs with this age group. For example, Murgia, Abbasiantaeb, et al. (2023) conducted a preliminary investigation of the feasibility of adapting LLMs to support children in discovering information. The researchers found that the LLM could adapt its responses to search prompts to address audiences of children in grade 4. In another study, by Murgia, Pera, et al. (2023), children in grade 4 responded to LLM output. The researchers determined that while the text was technically at a grade 4 level, readability issues lingered for some children. Both studies used an LLM for reading rather than writing.

Other studies focused on elementary schools have centered on teacher perceptions of the technologies and their roles. For example, Jeon and Lee (2023) collected data from 11 teachers and identified four roles for LLMs: (1) interlocutor, (2) content provider, (3) teaching assistant, and (4) evaluator. In addition, there were three teacher roles: (1) orchestrating resources with pedagogical decisions, (2) making students active investigators, and (3) raising critical AI awareness. The researchers suggested that these more complex roles for teachers would require special kinds of professional learning and preparation.

In another study, by Luo et al. (2023), six expert early childhood professors from the United States and China speculated on the roles and challenges of using LLMs with young children. The experts determined that LLMs could serve as a resource for this population, but that broader concerns like accessibility, affordability, accountability, sustainability, and social justice should take precedence for younger children. Finally, Wu and Yu (2023) conducted a meta-analysis of AI chatbot technology, a feature available in LLMs. They found that AI chatbot interventions had the strongest effects in higher education and in short-term interventions.

LLMs and Children’s Potential for Agencies

“Agencies are only distinct in relation to their mutual entanglement; they don’t exist as individual elements” (Barad, 2007, p. 33). LLMs are not merely non-human machines. Instead, ChatGPT draws from language produced by humans, which is then reordered according to statistical probabilities and patterns. LLMs contain information from many sources and recombine it. As an apparatus, ChatGPT functions as a specific material (re)configuration of the world through which bodies of text are intra-actively materialized (Barad, 2007). In this type of apparatus, matter is created through intelligibility and materiality as a differential becoming. LLMs seem to exemplify ongoing being and becoming because when text is input into them—as in a prompt—they can draw on that text in future iterations of ChatGPT (Kasneci et al., 2023). In addition, LLMs adapt and respond to queries from users through a statistically mediated assemblage of discourse. In this way, ChatGPT forms an intra-active entanglement with users in which each can engage in ongoing being and becoming. The use of ‘intra’ in ‘intra’-action “signifies the mutual constitution of entangled agencies” (Barad, 2007, p. 33, italics in original). Further, Barad (2007) clarified that whereas interaction presumes separate agencies that precede their encounter, intra-action highlights that those agencies become distinct only as they emerge and entangle with one another.

Discourse produced by LLMs is not perfectly rendered. LLMs hallucinate, or generate inaccurate or mistaken information based on false decoding of input data, in a manner that sounds feasible or credible (Alkaissi & McFarlane, 2023; Hanna & Levic, 2023). The term hallucinate evokes Jameson’s (1983) work on unreality—where readers are oriented toward words that have been taken out of space and time and repositioned with respect to one another.

Another term to consider is ventriloquizing, from the work of Bakhtin (1981); this is where ideas are pulled together from various sources—although Bakhtin may not have imagined a technological actor as the ventriloquizing entity. A third concept, from Baudrillard (1994), is simulacra: images that do not necessarily reflect reality but nevertheless appeal to individuals’ sense of a desired reality. Because information is so readily available through the internet, and because it seems available through tools like ChatGPT, “more and more information has less and less meaning” (Baudrillard, 1994, p. 74). Under such conditions, individuals—including children—can make their own realities. What LLMs produce may be accepted as a reality. The sentience an LLM seems to exude may be no more than a simulacrum of reality (Brassington et al., 2024). How do children understand these concepts? What role does children’s understanding of ChatGPT’s capabilities play in how they decide whether or not to use it for writing tasks?

I am not arguing that LLMs are human; I am describing how they are intra-actively materialized from human discourse in a constant pattern of (re)configuration that lets the text at hand—the one a child might intend to create—shape and be reshaped. The agencies of children are also multiple and overlapping. For example, children might generate a text for the purpose of pleasing a teacher while simultaneously wanting to avoid an assigned task. Children might also make texts using LLMs so that they can engage in play, interacting with what they consider a toy, novelty, or curiosity. Some children, with little understanding of or ability to conceptualize what LLMs are, might be genuinely worried or anxious about using LLMs to generate text. Some children might also consider the use of LLMs to generate text an ethical dilemma.

Importantly, children are not unlimited in the agencies they have in their lives, their educations, or even in using LLMs. For example, children cannot sign up for LLM services on their own. Without adult permission, children may not be allowed to enter media spaces—social media, television, or print—where they would hear and read various information and opinions about LLMs.

Finally, children spend varying amounts of time in emotional and psychological spaces where they believe and trust adult opinions about LLMs (Harris, 2012; Mertol & Gunduz, 2020). In fact, emerging research suggests that children learn to trust robots in the same ways that they develop trust in humans—that is, when they perceive that the information being provided is accurate over time (Brink & Wellman, 2020; Geiskkovitch et al., 2019). Thus, children might have a variety of experiences deciding how to use an LLM and learning how much to trust it.

Writing and Reading Entangled

A second important conceptual principle regarding young children as writers is the entanglement between reading and writing. Reading cannot be without writing, and writing cannot be without reading (Truman, 2016). It was Kristeva (1986) who wrote that “text is constructed as a mosaic of quotations; any text is the absorption and transformation of another” (p. 37). Accordingly, reading and writing operate as a form of cultural politics in which privilege and empowerment are possible for some, but not all (Freire & Macedo, 1987). Thus, even if one could separate writing and reading meaningfully, it would not be socially just to do so. To read only, but not to write—to speak back, to contribute—would be to surrender opportunities for being and becoming.

In practical terms, LLMs require both reading and writing: to input prompts and to process and make sense of what is generated from the prompt. In these spaces, children might have a range of options for what to do next. Do children accept what is given to them as text? Do children use part of the text and reshape it using words lifted and shaped in their minds from previous experiences (Deleuze & Guattari, 1987)? Do children give another prompt? Do children accept the given text and use it to make another prompt? Or perhaps children do nothing and walk away from the LLM. These decisions might rely on any of the previously discussed orientations toward LLMs.

Applying these logics, deep considerations emerge. On one hand, children may be positioned to write more than they might be able to on their own. On the other hand, they might also be positioned in a state of contradiction where they are less able to take a critical attitude toward what they produce. For example, children might think that because the LLM produced the text, it is better than a text they could produce themselves, and then doubt their own abilities. The long-term outcome might be children who struggle to understand themselves as readers and writers outside the push to produce constant streams of text. At this point, who/what is the machine?

What can we do? Well, educators, parents, and policymakers who work with and/or have influence over what happens to children might redirect writing and its instruction toward agencies. Instead of promoting LLMs alone, questions such as “What do you think about the feedback you received? Which of these suggestions do you want to use? At what point in your writing do you think it is helpful to put your writing into an LLM?” are places to start if children are going to use LLMs at all. If using an LLM for feedback is supposed to free up teachers, but teachers spend large amounts of time feeding LLMs and sorting feedback for children, we should ask questions about teachers’ time priorities. Letting children know how LLMs operate and are trained, and what it means environmentally and ethically to use them, is also part of supporting their agencies. Finally, writing and compositional activities in general should never have been, and should not now be, mere exercises in text production and staying busy. They are identity-making moments. They are consciousness-raising activities. They are the work of life making for young people. There is much at stake.

References

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), 1-4.

Bakhtin, M. (1981). The dialogic imagination. University of Texas Press.

Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.

Barrett, C. A., Truckenmiller, A. J., & Eckert, T. L. (2020). Performance feedback during writing instruction: A cost-effectiveness analysis. School Psychology, 35(3), 193.

Baudrillard, J. (1994). Simulacra and simulation. University of Michigan Press.

Boninger, F., Molnar, A., & Saldaña, C.M. (2019). Personalized learning and the digital privatization of curriculum and teaching. National Education Policy Center. http://nepc.colorado.edu/publication/personalized-learning

Brassington, L., Traylor, A., & Rice, M. (2024). Using the task of supporting struggling writers to consider broader issues of composition with generative AI in English Language Arts education. In C. Moran (Ed.), Revolutionizing English education: The power of AI in the classroom (pp. 125-139). Lexington Books.

Brink, K., & Wellman, H. M. (2020). Robot teachers for children? Young children trust robots depending on their perceived accuracy and agency. Developmental Psychology, 56(7), 1268. https://doi.org/10.1037/dev0000884

Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510

Deleuze, G. & Guattari, F. (1987). A thousand plateaus. University of Minnesota Press.

Freire, P., & Macedo, D. (1987). Literacy: Reading the word and the world. Routledge.

Geiskkovitch, D., Thiessen, R., Young, J., & Glenwright, M. R. (2019, March). What? That's not a chair!: How robot informational errors affect children's trust towards robots. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 48-56). IEEE.

Hanna, E., & Levic, A. (2023). Comparative analysis of language models: Hallucinations in ChatGPT: Prompt study. urn:nbn:se:lnu:diva-121267

Harris, K. R., & McKeown, D. (2022). Overcoming barriers and paradigm wars: Powerful evidence-based writing instruction. Theory Into Practice, 61(4), 429-442.

Harris, P. (2012). Trusting what you're told: How children learn from others. Harvard University Press.

Hwang, G., Xie, H., Wah, B., & Gašević, D. (2020). Vision, challenges, roles and research issues of Artificial Intelligence in Education. Computers and Education: Artificial Intelligence, 1, 100001. https://doi.org/10.1016/j.caeai.2020.100001

Jeon, J., & Lee, S. (2023). Large language models in education: A focus on the complementary relationship between human teachers and ChatGPT. Education and Information Technologies, 1-20. https://link.springer.com/article/10.1007/s10639-023-11834-1

Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274

Kristeva, J. (1986). The Kristeva reader. Columbia University Press.

Lo, C. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410-425.

Luo, W., He, H., Liu, J., Berson, I. R., Berson, M. J., Zhou, Y., & Li, H. (2023). Aladdin’s genie or Pandora’s box for early childhood education? Experts chat on the roles, challenges, and developments of ChatGPT. Early Education and Development, 1-18. https://doi.org/10.1080/10409289.2023.2214181

Mertol, H., & Gunduz, M. (2020). Trust perception from the eyes of children. International Journal of Educational Methodology, 6(2), 447-454.

Miller, J. (2023). ChatGPT reduces age limit. https://jakemiller.net/chatgpt-reduces-age-limit/

Murgia, E., Abbasiantaeb, Z., Aliannejadi, M., Huibers, T., Landoni, M., & Pera, M. S. (2023, June). ChatGPT in the classroom: A preliminary exploration on the feasibility of adapting ChatGPT to support children’s information discovery. In Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (pp. 22-27).

Murgia, E., Pera, M., Landoni, M., & Huibers, T. (2023, June). Children on ChatGPT readability in an educational context: Myth or opportunity? In Adjunct Proceedings of the 31st ACM Conference on User Modeling, Adaptation and Personalization (pp. 311-316).

Rice, M., & Dunn, S. (2023). The use of artificial intelligence with students with identified disabilities: A systematic review with critique. Computers in the Schools. https://doi.org/10.1080/07380569.2023.2244935

Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631. https://doi.org/10.1111/ejed.12532

Topsakal, O., & Topsakal, E. (2022). Framework for a foreign language teaching software for children utilizing AR, voicebots and ChatGPT (Large Language Models). The Journal of Cognitive Systems, 7(2), 33-38.

Truman, S. (2016). Chapter seven: Intratextual entanglements: Emergent pedagogies and the productive potential of texts. Counterpoints, 501, 91-107.

Wu, R., & Yu, Z. (2023). Do AI chatbots improve students learning outcomes? Evidence from a meta‐analysis. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13334

Zhang, K., & Aslan, A. (2021). AI technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2, 100025. https://doi.org/10.1016/j.caeai.2021.100025
