Beyond Algos Part 2: A Problem of Trust, Not Just Literacy

Civics of Tech Announcements

  1. Upcoming Tech Talk on Jan 7: Join us for our monthly tech talk on Tuesday, January 7 from 8:00-9:00 PM EST (GMT-5). Come join an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.

  2. Join us on Bluesky: We have made the decision to quit using our Twitter account. We are going to give Bluesky a shot for those interested. Please follow us @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack. Civics of Tech board member Charles Logan (@charleswlogan.bsky.social) has recently shared a number of other critical tech starter packs that can help you get started if interested. Let's all just hope this platform doesn't get enshittified.

  3. Spring Book Clubs: The next two book clubs are now on the schedule, and you can register for them on the events page. On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook. And on April 10th, Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.

 

By Bonni Stachowiak

This post follows up on Autumm Caines’ post from last week: Beyond Algos: GenAI and Educational Trust. This week, I reflect on Autumm’s POD Conference Pre-Conference workshop. The next post will conclude this series with another participant’s reflections. 

 

The moment I saw the description of the pre-conference workshop and that Autumm Caines was leading it, I knew that I wanted to participate. The discourse around AI's role in education and workplaces has placed far too much emphasis on hacks or, at the other end of the spectrum, on catching students "cheating." I was intrigued by Autumm's focus on educational trust. My already-high expectations were exceeded. Not only did the workshop expand my knowledge, but it also helped me form and deepen relationships with people who are trustworthy in both character and competence. [Note: I share one of many inspirations from my experience at POD in the "Deck of Spaces" card deck unboxing and more POD 2024 reflections video]

I have empathy for faculty who focus more on catching students "cheating" by using AI than on altering their approaches. A global pandemic was more than enough to heighten burnout levels in higher education (Coffey, 2024). Then, along came ChatGPT in November of 2022, a chatbot that made accessing and interacting with a large language model almost as easy as texting a friend. The pressure on faculty to swiftly acquire a new set of digital fluencies related to AI was palpable throughout higher education. I appreciated the clarity and targeted aspirational learning brought forth by academic and technology leaders from Barnard College with their Framework for AI Literacy (2024). BenMessaoud (2024) echoed the need for new competencies, coupled with new curricula, writing: 

This is a call to action: to embrace the convergence of human insight and AI, to nurture a future where technology amplifies humanity. The moment is now, to commit to an overhaul that will echo through the annals of our time, shaping a legacy of learning that is truly fit for the future we envision.

But adding an entirely new body of knowledge to faculty members' overflowing plates seemed unrealistic and cruel. Fortunately, there are those suggesting that we are not required to drop all other priorities in our lives and instantly become "experts" at AI. Instead, they invite us to slow-walk our use of generative AI (Lang, 2024) and to let our pedagogical values guide us. Lang writes: 

Slowness in teaching and learning has many benefits and can take many forms. But we live in a world that values speed, and plenty of administrators and students want results delivered as quickly and effortlessly as possible. Those of us who teach should not hesitate to remind them of the virtues of slowing down.

In talking with faculty at my institution, as well as interviewing over thirty people for my podcast on the topic of AI in higher education, I have been surprised by an often-missing piece of the conversation. Many educators are highly comfortable using AI to reduce their grading workload or to produce lesson plans and syllabi. However, they simultaneously require students either not to use it at all or to provide meticulously documented accounts of each step of their process involving AI. AI abstinence or detailed transparency is required of students, but most faculty members do not impose those same expectations on themselves. 

In the POD pre-conference workshop, Autumm described the "arms race" between technology "improvements" designed to combat unauthorized AI use and tools designed to obscure it. Students' keystrokes can now be monitored via "educational" technology to ensure they are performing the required labor; Excelsoft's proctoring software is one such tool, used to monitor typing patterns and detect potential unauthorized AI assistance. Keystroke logs can also be used in attempts to distinguish human authorship from transcribed essay writing (Crossley et al., 2024). Meanwhile, students push back with services that simulate typing on a keyboard, virtual keyboards, or anti-keylogger software. Autumm also shared how, with a large percentage of faculty using AI content detection tools, even the smallest false positive rates can compound at a rapid pace (Davalos & Yin, 2024).
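To make that compounding concrete, here is a minimal back-of-the-envelope sketch in Python. The 1 percent false positive rate, the twenty checked submissions per student per term, and the 100-student course are assumed illustrative numbers, not figures reported by Davalos and Yin (2024).

    # A minimal illustrative sketch of how detector false positives compound.
    # The rate and counts below are assumptions for illustration only.
    fp_rate = 0.01          # assumed false positive rate per checked submission
    checks_per_term = 20    # assumed AI-checked submissions per student per term
    students = 100          # assumed course size

    # Probability that one honest student is falsely flagged at least once in a term
    p_flagged_once = 1 - (1 - fp_rate) ** checks_per_term
    print(f"Chance of at least one false accusation per student: {p_flagged_once:.0%}")  # ~18%

    # Expected number of false accusations across the whole course
    print(f"Expected false accusations in the course: {fp_rate * checks_per_term * students:.0f}")  # 20

Even under these modest assumptions, roughly one in five honest students would face at least one false accusation over the term.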

We clearly do not solely have a competence problem. We also have a trust problem. 

As we continue to seek to develop our AI fluencies, let us also work on cultivating trust within the learning communities we facilitate. We cannot get every micro decision regarding learning and teaching “right.” When in doubt, let’s trust students and nurture their innate curiosity and capacity for growth.

 

Bonni Stachowiak hosts the Teaching in Higher Ed podcast, publishing weekly episodes since June 2014. She serves as dean of teaching and learning and professor of business and management at Vanguard University of Southern California.
