Writing Assistants and the AI Wasteland

Civics of Tech Announcements

  1. Monthly Tech Talk on Tuesday, 09/03/24. Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, September 3rd, 2024 at 8-9pm EDT/7-8pm CDT/6-7pm MDT/5-6pm PDT. Learn more on our Events page and register to participate.

  2. September Book Club: For our next book club we will read Ashley Shew’s 2023 book, Against Technoableism: Rethinking Who Needs Improvement. We will meet at 8pm EDT on Thursday, September 12th, 2024. You can register on our Events page.

By Jacob Pleasants

Author Note: If you would prefer to read an “improved” version of this essay courtesy of Grammarly.com, you can find it immediately below this post. If you would prefer to struggle to interpret my error-ridden, meandering, wordy, and wholly inadequate version, then read on.

Now that we are about two years into the Generative AI chatbot era, I think it's safe to say that the most dire worries about what those technologies would mean for education have not been borne out. To be sure, they have created plenty of problems (I'll get to those a little later), but classroom instruction has largely continued apace, albeit with some disruptions. That is to say that the likes of ChatGPT or Claude have neither completely undermined nor "revolutionized" our educational efforts. Educational institutions, as well as teachers and students, are slowly and steadily adjusting to the new landscape. Yet while ChatGPT has not been the death of education as we know it (fortunately), we cannot rest easy when it comes to the influence of generative AI in the classroom. The chatbots may have stolen the spotlight, but I would like to argue that there is a different AI technology that is far more consequential for the classroom: the "AI writing assistant."

Your students are using AI writing assistants. Maybe you are using them as well. They are a slow-moving educational nightmare.

Unlike AI chatbots such as ChatGPT or Claude, AI writing assistants are specialized pieces of software designed for the specific task of writing. The most well-known at the moment is Grammarly (it has practically plastered the internet with advertising), but there are plenty of competitors, and the big players in word processing are rapidly adding more "assistive" features. This technology, of course, is not new. For years, word processing software has provided writing suggestions in the form of spell and grammar checks. Those of us who have used software like Microsoft Word over the decades have seen those automated editing tools improve dramatically. Where once there were simply red or blue squiggly lines, most programs can now accurately guess the word we were trying to type and quickly help us correct the spelling. The grammatical suggestions they provide are more accurate and, again, most programs will offer some suggested revisions. While there are distinct drawbacks even to these relatively mundane editorial assists, on balance I find them to be pretty useful, if occasionally annoying.

In the current AI discourse, the quaint editorial suggestions of Word hardly seem like AI at all. The current crop of "AI-powered" writing assistants claim to do much more than just check for spelling errors or basic grammatical issues. They offer to rephrase and even reorganize your writing. On top of that, many now have integrations with Generative AI foundation models so that they can provide you with outlines, suggest citations, and all but write the first draft for you. Where once our word processing software was, at most, an editor, the situation has changed. There remains a very clear boundary between asking a computer program to edit your work and asking a Generative AI model to write something for you. However, the multi-purpose AI writing assistants that now exist are rapidly eroding that boundary and challenging our notions of authorship.

This is the situation that has now arrived in our classrooms. What are the consequences?

1. "But it's just Grammarly!"

When ChatGPT came onto the scene, it was pretty obvious that a student could use it to write their essays, answer homework questions, and similarly do their work for them. It was also pretty obvious to everyone involved that doing this was not okay. To be sure, plenty of students did it anyway, but there was no illusion that what they were doing was permissible. There were plenty of questions and conversations about how to detect AI-generated writing (which are still ongoing) and what the consequences of submitting AI-generated work should be (also still ongoing). But nobody is arguing that it's totally fine to have ChatGPT or any other chatbot do your work for you.

The situation with AI writing assistants is distinctly different. The term itself implies that these technologies are not doing the work "for" the student - they are simply "assisting." And if our mental model of that assistance is what programs like Microsoft Word do (editing), then this position seems very reasonable. But these programs are no longer merely editing a student's work. Depending on how heavily a student uses certain features (which usually requires paying for a premium subscription), software like Grammarly can go far beyond merely "assisting" the writing process and all but take it over. When the student becomes the assistant to the technology, we have a problem.

But this is not a problem that is easily recognized as such. For teachers and students alike, the use of something like Grammarly is treated as wholly permissible, beneficial, and perhaps even encouraged. After all, who wouldn't want students' writing to be free from mechanical errors, to be more readable, logical, and clear?

We have viewed AI writing assistants, unlike the chatbots, as benign, if not benevolent. Their use is now ubiquitous.

2. "Let’s all write like machines!"

If your experience is anything like mine, you’ve now become pretty good at spotting AI-generated writing. It tends to be full of bulleted lists (always lists!), bookended by summary paragraphs that look like what you’d expect in a 5-paragraph essay. Chatbots have a default “tone” and degree of formality that is very recognizable. Unless you prompt them to do otherwise, they will pretty much always respond with that particular style. When a student submits a piece of writing that has those characteristics, that’s when the alarm bells go off.

A question: Why do chatbots produce text that looks like that?

An answer: Their default way of writing is what is generally assumed to be “high quality” (or at least “competent”)[1]. If it weren’t, I would assume it would be modified so that it was. Who would intentionally design a chatbot to generate sub-par text?

An implication: If whatever the AI chatbots are creating is regarded as “high quality,” what kind of writing do you suppose an AI writing assistant is going to push you toward?

A consequence: Writing that has been “assisted” by the likes of Grammarly is made more and more similar to what the AI chatbots produce.

This is a concerning situation for a whole lot of reasons. For one, there’s the obvious issue of what is being regarded as “good writing.” What these chatbots produce is not culturally neutral – it reflects linguistic hegemonies, cultural hegemonies, colonialist logics, etc. Those are big issues, but let’s set them aside for the moment. Consider this more mundane but very practical problem: if your students’ work is being “revised” to sound more and more like it has been written by a machine, then it’s going to set off all your “this was written by a chatbot” alarms.

But it’s just Grammarly, though, so it’s okay? [2]

3. “Welcome to the AI wasteland”

On a regular basis, I find myself reading student writing that has clearly been shaped by the hand of AI. Was it wholly written by a chatbot? Or did this student “only” use Grammarly or something similar? If so, how much did they use it? These are now the kinds of questions I have to regularly wrestle with. And it’s miserable. I don’t want to have to wonder just how much of my students’ writing was actually written by them. I don’t want to have to read text that was written by a machine. I don’t want to make accusations of cheating.

This is the AI wasteland. ChatGPT wasn’t (and won’t be) the death of education. But it and its AI ilk have polluted and degraded the learning environment, and it’s not getting any better. Rather than being inundated by wholly AI-written text, we find ourselves in a sea of increasingly AI-shaped writing. It feels terrible to read machine-generated content when you are expecting something created by a human. It feels even worse not to know whether what you’re reading was created by a human. That ambiguity and uncertainty is the wasteland.

Put into place all the generative AI policies you want. If we don’t get serious about the AI writing assistants, the wasteland will only become further degraded and continue to expand. It’s not going to be the death of education, but it’s going to make the experience a mostly shittier one for everyone involved.

—————————————————————

[1] I will admit that this is actually more complicated than I’m making it sound. The default way of writing is essentially a composite of what the foundation models were fed as training data, which is largely the scraped content of the internet. Consider that much of that content is the product of Search Engine Optimization (SEO). So, you could make a pretty strong case that the quintessential chatbot “style” is just replicating the contemporary style of SEO. Which is terrifying.

[2] It’s worth keeping in mind that there are now a bunch of “AI detectors” in place that will not distinguish between something wholly written by a chatbot and something “cleaned up” by a writing assistant. A colleague of mine ran afoul of this recently. He wasn’t actually using Grammarly himself. He sent a portion of text for a manuscript to the lead author, who then piped it through Grammarly before inserting it. The portion kept getting flagged by an automated system in the submission portal. My colleague was baffled, because he had no idea what the lead author was doing. Only after much back-and-forth did it finally come to light.

The Grammarly “Improved” Version

As we look back on the two-year mark of the Generative AI chatbot era, it's evident that the initial concerns about the impact of these technologies on education have not come to fruition. Despite presenting their fair share of challenges (which I'll address shortly), traditional classroom instruction has, for the most part, persisted, albeit with some disruptions. In essence, chatbots such as ChatGPT or Claude have neither completely upended nor revolutionized our educational practices. Educational institutions, educators, and students are steadily adapting to this new landscape. Although ChatGPT hasn't marked the end of traditional education (thankfully), we cannot afford to ignore the influence of generative AI in the classroom. While chatbots have received a lot of attention, I firmly believe that there's another AI technology that holds far greater significance for the classroom: the "AI writing assistant."

It's important to note that your students and even yourself might be utilizing AI writing assistants. Unfortunately, these tools are causing a slow-moving educational nightmare.

AI writing assistants, unlike chatbots such as ChatGPT or Claude, are specialized software designed specifically for the task of writing. Grammarly, the most well-known one at the moment, has heavily marketed itself across the internet, but there are numerous competitors, and major word processing players are quickly adding more "assistive" features. Although this technology is not new, word processing software has long offered writing suggestions in the form of spell and grammar checks. Users of software like Microsoft Word have witnessed significant improvements in these automated editing tools over the years. What was once represented by simple red or blue squiggly lines can now accurately predict the word being typed and provide quick assistance in correcting spelling. Furthermore, the grammatical suggestions are more precise, and most programs offer suggested revisions. Despite the inherent limitations of these relatively basic editorial aids, I generally find them quite helpful, if occasionally bothersome.

In the current discourse on artificial intelligence (AI), the rudimentary editorial capabilities of Word hardly resemble AI at all. The present cohort of "AI-powered" writing assistants purport to offer much more than mere spell-checking or basic grammatical corrections. They profess to rephrase and even restructure written content. Additionally, many now feature integrations with Generative AI foundation models, enabling them to furnish outlines, suggest citations, and effectively compose initial drafts. Formerly confined to an editorial role at best, word processing software has undergone a transformation. A distinct demarcation once existed between tasking a computer program with editing one's work and enlisting a Generative AI model to compose content. Nevertheless, the multifaceted AI writing assistants in existence today are rapidly blurring this boundary and challenging prevalent conceptions of authorship.

The current situation has reached our classrooms. What repercussions will this yield?

1. "But it's just Grammarly!"

When ChatGPT emerged, it became apparent that students could exploit it to complete essays, solve homework problems, and essentially substitute their own work. However, it was widely acknowledged that this practice was unethical. Although many students still engaged in it, there was no misconception about its acceptability. Discussions ensued on methods to identify AI-generated content (which continue), as well as the repercussions of submitting such work (also ongoing). Nonetheless, there was a unanimous consensus that utilizing ChatGPT or similar tools to complete one's work was unacceptable.

The landscape changes with AI writing aids. The term itself suggests that these technologies are not assuming the role of the student but rather offering "assistance." If we conceive of this assistance akin to what programs like Microsoft Word provide (editing), the notion seems rational. Nevertheless, these programs now do more than just edit a student's work. Depending on the extent of their usage (typically necessitating a premium subscription), tools like Grammarly can surpass the boundaries of mere "assistance" and almost entirely take over the writing process. When the student becomes subordinate to the technology, an issue arises.

Perceiving this as problematic is not straightforward. For educators and students, the use of tools like Grammarly is perceived as entirely permissible, advantageous, and perhaps even encouraged. Naturally, one would desire the writing of students to be devoid of mechanical errors and to be more coherent, logical, and transparent.

Contrary to chatbots, we have regarded AI writing aids as benign, if not beneficial. Their usage has now become widespread.

2. "Let's all write like machines!"

As an expert audience, you understand the nuances of AI-generated writing. There are observable patterns in AI-generated text such as bulleted lists and structured summary paragraphs akin to a standardized essay format. Chatbots often default to a specific tone and formality, making their output easily recognizable. This default writing style is typically assumed to be of high quality or at least competent.

The underlying question is why chatbots produce text that conforms to this particular style. One plausible explanation is that the default writing style is considered to be of high quality, leading developers to retain it [1]. Intentionally designing chatbots to generate subpar text seems unlikely. Consequently, one could infer that AI writing assistants may push users toward a similar writing style assumed to be of high quality.

This situation raises concerns regarding the definition of "good writing." AI-generated text is not culturally neutral and reflects linguistic and cultural hegemonies as well as colonialist logic. Moreover, from a practical standpoint, if students' work is revised to sound machine-generated, it will raise doubts about its authenticity.

Despite these concerns, a prevalent tool like Grammarly could potentially exacerbate this issue by making the revised writing more akin to AI-generated content.

3. "Welcome to the AI wasteland"

I frequently come across student writing that appears to have been heavily influenced by AI. I find myself wondering if the text was entirely composed by a chatbot or if the student relied on tools like Grammarly. It's troubling to think about how much of the writing is truly the student's own work. I'm disheartened by the idea of having to assess text that might have been written by a machine. Accusing students of cheating is not something I want to do.

This is the reality we're facing with AI. ChatGPT didn't cause the downfall of education, but its presence and that of similar AI have certainly tainted the learning environment. No matter how many AI regulations we implement, if we don't address the issue of AI writing assistants seriously, the problem will only worsen and spread. It won't spell the end of education, but it will certainly make the experience much poorer for everyone involved. 

[1]  I acknowledge that the issue is more complex than it may seem. The standard writing style is essentially a blend of the data used to train the foundation models, which consists largely of scraped internet content. It's important to consider that much of this content is a result of Search Engine Optimization (SEO). Therefore, one could argue that the typical chatbot "style" is simply a replication of the current SEO style, which is quite disconcerting.
