Beyond Algos: GenAI and Educational Trust

Civics of Tech Announcements

  1. Join us on Bluesky: We have made the decision to quit using our Twitter account. We are going to give Bluesky a shot for those interested. Please follow us @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack. Civics of Tech board member Charles Logan (@charleswlogan.bsky.social) has recently shared a number of other critical tech starter packs that can help you get started if interested. Let's all just hope this platform doesn't get enshittified.

  2. Spring Book Clubs - The next two book clubs are now on the schedule, and you can register for them on the events page. On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook. And on April 10th, Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.

By Autumm Caines

Last month, I had the opportunity to lead a 3-hour pre-conference workshop at the POD Network conference in Chicago titled “Beyond Algorithms: GenAI and Educational Trust”. If you are unfamiliar – the POD Network is a professional organization with a yearly conference for faculty developers in higher education, often focused on those who work in teaching centers.

The impacts on trust resulting from the widespread availability of large language model chatbots loom large in my head. I’ve seen them firsthand, and I imagined that others working in faculty development had as well: frustration and anger from many faculty over obviously AI-generated text in assignments and discussion board posts – and the increased labor of sifting through it all. Other faculty were of course excited about these tools and encouraging their use, but then there was fear and frustration from students who were unsure about how, when, or if they were supposed to use these tools.

Going into this workshop I had done some reading and thinking on this matter of educational trust, I had created a slide deck with examples of genAI’s impacts on trust, and I had planned some structured activities and discussions, but I really didn’t have a lot of answers. My goal for the workshop was simply to create some space where we could start to examine how these tools are affecting multiple dimensions of trust in education. Rather than advocating for or against genAI or trying to come up with solutions myself, I wanted to let the collective knowledge of the room grapple with some of these complexities and see what emerged.

The workshop looked at broad areas of educational trust: the faculty-student relationship, particularly around assessment and academic integrity; our ability to verify and trust information; and privacy issues with genAI systems and how they affect our ability to trust those systems. I think perhaps I went too broad, because a kind of context collapse developed in the workshop. Trying to take on all of these elements of trust at one time was maybe too much. But I’m hopeful that reflecting on the workshop will prove to be fruitful.

In this post I’m going to narrow a bit. I’m hoping to review some of what I presented in the workshop, and some of my thinking since, around how genAI has impacted the trust relationship between faculty and students, especially concerning academic integrity. Perhaps in future posts I’ll take on the broader concerns, because I do think that the problem of genAI and trust is much, much bigger than just academic integrity.

I’ve also invited two of the workshop participants to share their reflections in the following weeks, so look for those posts coming out soon.

Faculty/Student Trust Relationships and Academic Integrity 

I’ve never been super interested in the “cheating” conversation in education. There is no action that universally constitutes cheating – what is cheating in one class is collaboration or innovation in another. From my perspective, cheating is more about power than it is about anything else, and short of having a Ph.D. in moral philosophy, it basically just comes down to trust (or the lack thereof) between students and faculty that they are not trying to stick it to one another. Generative AI has directly upset this aspect of trust.

In the workshop I brought in Beth McMurtrie’s recent piece in The Chronicle, “Cheating Has Become Normal,” which explores stories from faculty and students about pervasive cheating, much of which seems tied to genAI. She points to an annual survey at Middlebury College that asks students to anonymously admit to violating the school’s honor code. In 2019, before the release of ChatGPT, about 35% of students admitted to this; in 2024 the number rose to 65%.

GenAI further complicates the trust relationship around cheating by upsetting the mechanisms some faculty have relied on to detect plagiarism. While it is true that LLMs can spit out verbatim text from their training data (which has its own copyright implications), that is not how they are supposed to work. More often, LLMs create text that would not be picked up by a traditional plagiarism detector, which is just looking for exact text matches.

Enter AI detection software, which tries to find patterns of writing that LLMs are more prone to use – that is, if they are not prompted otherwise. I found early on that the prompt “rewrite this so that it cannot be detected by AI detection software” did the trick to take AI-generated content from high scores to low ones. But the biggest problem with these kinds of software is that they can, in many cases, accuse students of submitting AI-generated work when in fact it is not, and they tend to do so more often with students who are learning English as a second language.

To understand faculty/student trust relationships broadly, I’ve been looking at Felten, Forsyth, and Sutherland’s (2023) work in this area. They outline four key dimensions of classroom trust, identified by asking university faculty from four different countries what “trust moves” they used to build and maintain trust with students:

  • cognition (demonstrating knowledge and competence)

  • affect (showing care and concern)

  • identity (recognizing and being sensitive to identities)

  • values (acting on principle). 

This is an important lens, though, as with all studies, the authors point to limitations:

We focus on the specific “moves” (actions or behaviours) teachers use to try to build trust with and among their students (we are adapting the concept of “moves” from the scholarly work on rhetorical moves in writing studies; e.g. Graff, Birkenstein, and Maxwell [2014]). We are not trying to demonstrate whether these moves are effective, nor are we considering student trust moves. For now, we are concentrating on teachers’ goals and actions. What moves do teachers make in the classroom related to trust?

Hoping to find student perspectives, as well as searching for a genAI-specific framing, brought me to new research by Luo (2024), which more directly considers the student perspective on trust, specifically around genAI. Though the study is small, Luo reveals what she calls an absence of "two-way transparency" in how genAI is being handled in higher education. She found that while students were often required to declare their genAI use and even submit chat records, they don't see the same level of transparency from faculty about how this disclosure affects grading, how AI detection tools are being used, or how faculty themselves might be using genAI tools in course design. Luo found that this imbalance reinforces power inequities. Citing Selwyn et al. from 2021, she states that: "As transparency is mostly required from the student’s side, students may be ‘placed in disempowered surveilled positions’ and feel constantly monitored."

This tension around transparency highlights something important about trust – it needs to be reciprocal. Sutherland, Forsyth, and Felten came back in 2024 to build on their previous research and seemed to find something similar. In this analysis they found that successful trust building involves both "trust me" moves from faculty (demonstrating expertise and care) as well as "I trust you" moves (showing faith in students' judgment and capabilities). With genAI, we seem to be struggling to find this balance, often defaulting to surveillance and verification rather than mutual trust building. 

To bring all of this into the workshop, I presented the models and findings from these studies and gave participants time to sit with them and discuss at their tables. I asked them to think about how they might integrate them, whether they might need to make their own models, or whether they had other resources they were currently using. There were other things too - we worked through scenarios for faculty, students, and faculty developers, for instance. If I were to do it again, I think I might tighten it up and focus only on academic integrity, but it is hard for me to frame academic integrity only under the umbrella of trust – because the trust issue is just so much bigger.

In the workshop we collaborated in this messy notes Google Doc, but there was also a lot of me running around with a mic, surfacing different thoughts throughout the room. I can’t really bring that part into a blog post, but what I can do is ask some of the participants to reflect on their thoughts around these matters now that the workshop is over. I made this a broad offer at the workshop, and Bonni Stachowiak and Mike Goudzwaard took me up on it. In the following weeks you will see posts from them following up on this one.
