Yes, we need an AI policy for our courses. But developing one is just the beginning.
Civics of Tech Announcements
Next Tech Talk on Feb 4th: Join us for our monthly tech talk on Tuesday, February 4th from 8:00-9:00 PM EST (GMT-5) for an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.
Join us on Bluesky: We have made the decision to quit using our Twitter account. We are going to give Bluesky a shot for those interested. Please follow us @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack. Civics of Tech board member Charles Logan (@charleswlogan.bsky.social) has recently shared a number of other critical tech starter packs that can help you get started if interested. Let's all just hope this platform doesn't get enshittified.
Spring Book Clubs: We now have three spring book clubs on the schedule that you can register to attend on the events page.
On February 18th, Allie Thrall will be leading a discussion of The Propagandists' Playbook.
On March 13th, Jacob Pleasants and Dan Krutka will be leading a discussion of Nicholas Carr’s new book, Superbloom: How Technologies of Connection Tear Us Apart. This book club is a new addition!
On April 10th, Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.
Today’s blog post originally appeared on April 16, 2024 on The Hub for Teaching and Learning Resources at the University of Michigan Dearborn. We continue to find the ideas in this piece to be generative and thought-provoking and so are happy to share it with the Civics of Tech community. This is the first in a series and we will be posting the next installment in a few weeks.
By Shelly Jarenski
Like many other educators, I feel that Generative AI has upended my world. As a literature professor who teaches most of my classes in the online asynchronous format, my entire pedagogy for the last 20 years has been built on student writing. Now, a technology exists that can generate often good, sometimes great, student writing very quickly. This tumult comes just on the heels of the last time my pedagogy was fundamentally upended: it is no exaggeration to say the pandemic changed me as an educator. Changing again is overwhelming, but here we are! And we need to speed up, but not as much as we need to slow down!
In this blog post, the first in a series of entries on “Responding to Generative AI with an Ethics of Care,” I recount my experience developing an AI policy, an experience that I thought would be about finding the right language, but that instead forced me to reflect on the fundamental goals and purposes of my courses. And I encourage fellow educators to use the experience of developing a policy on AI as a starting point for rethinking what learning should and could be with the ultimate goal of talking to students more openly about the reason we are all here—their own learning.
The initial advice for navigating AI that I, and I am sure many of you, found in consulting the Hub resources and blog was to respond to AI with the teaching strategies that we already know work: “authentic assessments, humanized pedagogies, and the avoidance of punitive approaches” (for example, in this early document the Hub put together in February 2023, and in this blog by Autumm Caines). This is essential advice, but I did not really understand it when I first tried to follow it. At first, I implemented this advice through design. But design alone is not enough. It has been more important to figure out how to align my pedagogy with my deepest values, to communicate these values to students, and to ask them what values they want to bring to bear on their education.
Take developing an AI policy as an example. When I first heard this advice, it was a simple directive. Another syllabus addition. It seemed so simple, in fact, that I just figured my existing plagiarism policy already covered Generative AI, since I state, along with information about penalties: “You will always perform better in my classes if you give a task your best try. I want to hear your thoughts, and your original thoughts are always stronger than the things you find online (these are usually written for high school and even middle school audiences, so they’re usually just not that strong–it is never A work even when I don’t catch you doing it).” Add “find online or generate through AI software” and voilà, I have a policy. Simple.
But when you really start learning about what generative AI can do, you need to start asking more fundamental questions. Developing a policy isn’t about adding a line to your syllabus; it is about uncovering your fundamental values and articulating your core expectations for student work. And what brought me to this foundational place? An AI tool itself pushed me to rethink who I am and what I want as an educator.
Well, not really: it was an AI tool powered by thoughtful, empathetic human intelligence that brought me to this place. The GenAI Class Policy/Teaching Philosophy Generator is a custom GPT developed by Autumm Caines that can do more than help you find the best wording for your syllabus (it can do this too, but I encourage you to go further!). The simple but enlightening prompts in the GPT ask you to dig deep and reflect seriously about teaching and learning. For example, the AI begins with the question “please rate your stance on AI use in the classroom on a scale from 1 to 10, with 1 being very restrictive and 10 being very permissive?” and builds to questions that ask you both to reflect on your concerns about genAI and to imagine what opportunities it might present, and to questions about the specific methodologies you value.
One caveat about Autumm’s tool is that you can only access it through ChatGPT Plus, which is a paid subscription. If you don’t subscribe to ChatGPT Plus, you will need to schedule an appointment with an Instructional Designer from the Hub to access the policy generator. There is substantial benefit to doing this anyway. I personally found the experience of using the generator akin to a pedagogical therapy session, wherein I had to confront the very foundations of my philosophy and practice, so it was useful to have Autumm help me process those reflections and turn them into policy and practice. (More about the potential toll of this level of care, this emotional labor, in a later entry.)
The policy I developed just three months ago is already not entirely the policy I would develop today. Confronting AI has been a growth process unparalleled in my career. It has made me tired! But it is also an opportunity to get myself, and ultimately my students, to look at the core of what we want from one another in the classroom. After I developed my working policy, I used it as a starting point for the real responsibility I found I had: talking to students about AI. I will describe that process in my next entry.
Shelly Jarenski is an Associate Professor of English literature at the University of Michigan-Dearborn. Her research interests include American literature and culture, critical race theory, women and gender studies, and the scholarship of teaching and learning. She is the author of Immersive Words: Mass Media, Visuality, and National Literature, 1839-1893 (University of Alabama Press, 2015) as well as essays in MELUS, American Quarterly, Resilience: A Journal of the Environmental Humanities, Western American Literature, and in edited collections.