Beyond Algos Part 3: Creating Space to Think and Talk About Trust

Civics of Tech Announcements

  1. Upcoming Tech Talk on Jan 7: Join us for our monthly tech talk on Tuesday, January 7 from 8:00-9:00 PM EST (GMT-5). Come join an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.

  2. Join us on Bluesky: We have made the decision to quit using our Twitter account. We are going to give Bluesky a shot for those interested. Please follow us @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack. Civics of Tech board member Charles Logan (@charleswlogan.bsky.social) has recently shared a number of other critical tech starter packs that can help you get started if interested. Let's all just hope this platform doesn't get enshittified.

  3. Spring Book Clubs - The next two book clubs are now on the schedule and you can register for them on the events page. On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook. And on April 10th Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.

 

This post follows up on Autumm Caines’ post from two weeks ago: Beyond Algos: GenAI and Educational Trust. It is the third and final post in the series, following Bonni Stachowiak’s reflection posted last week on Autumm’s POD Conference Pre-Conference workshop.

 

By Mike Goudzwaard

Like Bonni, when I read the workshop description of “Beyond Algorithms: GenAI and Educational Trust” in POD Network’s annual conference schedule, I knew that was where I needed to be on Sunday morning in Chicago.

The topic was timely, as my institution was preparing to launch both a GenAI institute and a grant program offering instructors support and funding to incorporate GenAI into their courses. I had participated in community forums and talked with instructors and students, and I knew that many of the fears and hopes of teaching and learning in an era of GenAI were indeed linked to the existence of trust, or lack thereof.

Applying SIFT

I was enticed by Autumm’s inclusion of the SIFT framework from the recently published book Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online (2023) by Mike Caulfield and Sam Wineburg.

Mike and I both taught in a quantitative literacy program at a state college a few years ago. I was familiar with his work that frames how to think critically and verify information. I was eager to spend some time with other educational developers, designers, and instructors digging in and applying the SIFT framework to teaching and learning with GenAI.

I had also recently seen Matthew Rascoff’s interview with Sam Wineburg about Verified for the “Academic Innovation for the Public Good” author series (10/9/24). I highly recommend watching the recorded interview. As Matthew says in his introduction: 

“Sam and Mike are concerned with the onslaught of dubious information that confronts all of us, who are negotiating our news, our political judgments, and our social relationships on the internet. Together, they’ve developed a set of practices that can be used by anybody of any age to gauge the reliability of that information.” 

In the POD workshop slide deck (slides 22-26), Autumm explained the four moves Mike and Sam outlined in their book:

  • Stop

  • Investigate the source

  • Find better coverage

  • Trace claims, quotes, and media to the original context

The library guide SIFT and Generative AI Content from University of Michigan-Dearborn provides a further introduction into incorporating SIFT into a teaching context. 

Mike has recently been writing about experiments with LLMs for media literacy and conspiracy analysis. In Critical Reasoning with AI: Initial Analysis of a Conspiracy Claim, he uses prompting to ask ChatGPT (o4) to perform a Toulmin-style analysis of a conspiracy argument. The resulting response breaks down the claim, including its assumptions, evidence, rebuttals, and outside references, and suggests what additional evidence might be most useful in analyzing the claim. Caulfield concludes, “Again: LLMs are bad at facts, but surprisingly good at fact-checking. We should think more about this.” Yes, we should think more about this, and deep-dive workshops like Autumm’s are a great place to start.

Hacking FOMO

This pre-conference workshop was a chance to hack the typical FOMO (fear of missing out) dizziness of many concurrent sessions spread across multiple buildings that typifies most conferences. I wanted a deep dive: time to unpack ideas and to have second and third conversations with newly met colleagues. Spending three hours on Sunday morning with a cohort of people sounded like the depth and breadth I was looking for.

Harvesting Gifts

Those were the reasons I signed up, and here are my takeaways from participating in the Beyond Algos workshop: 

The Gift of Collective Knowledge: 

This workshop exceeded my expectations. I had the good fortune to meet and work with great colleagues at the back table for three hours, but that didn’t happen by accident. Autumm brought the right framing and created the space for these connections to happen. In her previous post, Autumm points out that one of the ways she served us was by “running around with a mic surfacing different thoughts throughout the room.” This is not only an act of service, but also a means of allowing the collective knowledge in the room to emerge.

The Gift of Round Tables:

One of the people at my table was Bonni Stachowiak, whom I know from her prolific Teaching in Higher Ed podcast. In fact, Bonni had a great interview with Mike Caulfield around the release of Verified in 2023. Honestly, I was a bit starstruck at first; however, Bonni is so open with her questions, learnings, and reflections that she invites everyone into the experience. I was inspired to see her multiple modes of engaging with and sharing this workshop. Bonni produced a great sketchnote of the session, recorded a video reflection (in which she discusses our mutual interest in card decks like the Deck of Spaces), and followed up with a reflective piece last week.

The Gift of Time: 

It is absolutely worth investing time in a pre-conference workshop, especially one offered by Autumm! I appreciated the time to read recent research, contribute to shared notes, and discuss the challenges to educational trust in an age of GenAI. We need to make time to think about AI and trust, and as Autumm points out, this doesn’t happen on its own. We must create these spaces. This is an important counterbalance to the larger AI narrative and its associated tech talks and sales pitches.

The topic of trust and GenAI does not affect everyone and every institution in the same way. I found benefit in engaging away from the daily context of my own institution and broadening the conversation to the experiences of colleagues in the room.

Trust will not be (re)built by one of us, but by all of us.

Mike Goudzwaard serves as the Associate Director of Learning Innovation at the Dartmouth Center for the Advancement of Learning. He is a contributing author to the edited volume, Recentering Learning: Complexity, Resilience, and Adaptability in Higher Education (2024).
