Practices of Care

Civics of Tech Announcements

  1. Next Tech Talk on April 1st: Join us for our monthly tech talk on Tuesday, April 1st, from 8:00-9:00 PM EST (GMT-5) for an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.

  2. Next Book Club: On April 10th, Dan Krutka will be leading a discussion of Building the Innovation School, written by our very own CoT Board Member Phil Nichols.

  3. The Latest Book Reviews:

  4. Be sure to join us on Bluesky @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack.


This blog was originally posted at the Hub for Teaching and Learning Resources at the University of Michigan Dearborn on February 18, 2025. It is cross-posted with permission.

by Shelly Jarenski


This is my third and final blog post in my series, “Responding to Generative AI with an Ethics of Care.” You can read my previous entries on gen AI at the Hub Blog linked here. In this entry, I want to talk about some pedagogical moves I have made in my teaching in response to generative AI. Are they the right moves? I don’t know. Are they working? So far, they seem to be, but it is too early, both in my implementation of these methods and in students’ familiarity with generative AI, to know for sure. I can say that these are evidence-based practices with a long history, and they’ve reinvigorated my pedagogy and helped me to connect with students in new ways. The three practices I have been incorporating into my teaching in the last year that I would like to focus on in this blog are: essay-free English classes, ungrading, and Curate and SIFT.

Recently, at ISSOTL and MLA, I have been giving presentations based on the idea of crisis moments as opportunity moments. You can read the abstract for the more conceptual of the two presentations, the ISSOTL presentation, at this link. I view crisis moments as potential opportunity moments because crises necessitate the implementation of new practices. This presentation was inspired by Laura Cruz and Eileen Grodziak’s essay, linked here, “SOTL under Stress: Rethinking Teaching and Learning Scholarship during a Global Pandemic,” in which they argue, among other claims, that the Covid-19 pandemic, and the abrupt and profound changes it brought to educational practices, might allow for new questions to be asked in the field of the scholarship of teaching and learning, and that these shifts created opportunities as well as challenges. One of the opportunities they argue emerged in the Covid-19 crisis moment was the centering of compassion: “Rather than despair of our current situation, we should perhaps be proud of the fact that, through the darkest hours of modern academic history, we have sustained, and been sustained by, a love of teaching, care for our students, and the belief that higher education matters.”

Some may see generative AI as a crisis. Certainly I have felt that way, as an instructor who has always relied primarily on student writing as a pedagogical tool. But, as I have argued in this series, care, compassion, and, as Cruz and Grodziak put it, “a love of teaching,” are still present, if not more present, in this moment. And this moment can also be seen as an opportunity. It is an opportunity to reconsider what we are doing and why, and to ask ourselves if what we have been doing has really been working as well as we think it has been all along. An opportunity to reconsider engagement and trust as they exist in the classroom. And it is an opportunity to humanize ourselves and our students and learning itself in the radical ways that gen AI doesn’t allow for (yet, or maybe never).

Essay-Free English Classes and Authentic Assessments

Dave Cormier developed an AI teaching checklist in 2023, and in it he bluntly advises professors to ask themselves: “Have I considered why I was using the take home assessments affected by generative AI (e.g. right answer questions, long form text like essays)? Can I replace them with new approaches that serve the same purpose?” This was hard for me to ask myself, but it has been so beneficial! It was hard, in part, because I have been stuck on how one might do English studies in the “real world” for so long. English studies is a fairly abstract practice. We’re not advocating for a specific social issue, doing an experiment, or writing an instruction manual. We are reading texts and thinking about them. To be clear, I feel that literature has monumental consequences for the “real world.” Through studying literature students develop empathy, critical thinking, political consciousness, and more. But the reading and thinking part has been hard for me to conceptualize as something they “do” outside of the classroom.

But by asking myself hard questions–why is writing the best way for students to “do” English? Is writing essays really helping them to accomplish what I think it is helping them accomplish?–I began to understand how I might do things differently. As Cruz and Grodziak propose in their new list of questions for SoTL in crisis moments, I asked myself “How is it working?” and then I asked myself again, “No, how is it really working?” These questions allowed me to distill my learning objectives down to their essence: I want students to learn how to close read literary texts and other works of art, think deeply about them, and then express their ideas about the texts they’ve read. Once I saw that objective clearly, it was easier to see that writing isn’t the only way to accomplish it. Writing might not even be the best way to accomplish it. Peer annotation helps a great deal with the reading portion of this goal. In terms of expressing, I have found lots of modes through which students can express their ideas that aren’t essays. Creating. Teaching. Curating. Authentic assessment has come more easily to me since AI emerged because I had to confront my own practices and assumptions more urgently than I had before.

Of course, AI can do some of the things I am asking students to do, so I am not claiming that these alternatives to essay writing are in any way “AI proof.” However, because the assignments are more engaging, students tend to be more motivated to do them on their own, or at least with AI assistance, rather than just generating them wholesale. Also, I have tried to include stages so that students at least have to engage with content even if there is an AI hand in producing it. For example, students don’t just write a lesson plan; they write a lesson plan and then teach that lesson plan. Or they will curate an imagined museum exhibit based on the content of the course and then present their plans to a classmate. These kinds of activities encourage engagement and thinking even if AI is part of the process.

How is authentic assessment related to care? For the 2022-23 academic year, the Hub for Teaching and Learning at the University of Michigan-Dearborn hosted Bryan Dewsbury as a scholar in residence for his expertise in inclusive teaching. A Hub Blog recap of his first workshop and a video are linked here. During his residency, he explained some central tenets of inclusive teaching, such as allowing students to draw on their own diverse backgrounds and experiences as they complete their assignments, providing flexibility in the kinds of work students can do to demonstrate their learning, and allowing students to engage in activities that connect to their experiences and interests outside of the classroom. These tenets of inclusive teaching promote a sense of belonging for students, an experience that is often explicitly withheld in traditional approaches to teaching and learning. Traditional teaching and learning can feel like an elite club that students must earn membership in, while practices that promote belonging invite them in instead. Practices of authentic assessment draw explicitly on these principles of inclusive teaching: they allow students some choice in how they express their learning, they allow them to bring their prior experiences and knowledge into the classroom, and they ask them to relate what they are learning in class to those prior experiences and knowledge. Authentic assessment is inclusive, and inclusivity is care.

Ungrading

I’ve long been curious about ungrading. When I have tried to incorporate this practice in the past, I have come up against some obstacles. One obstacle was just not knowing how to do it. Another was not having any kind of campus norm to tap into. At a time when I wasn’t aware of anyone else at my university doing this kind of work, I was concerned it would frustrate students rather than help them. The teaching cohorts formed during the pandemic, largely facilitated by our Hub for Teaching and Learning, brought me into contact with lots of folks on my campus doing this work. This contact provided me with models, support, and the confidence that students might encounter these practices in other classrooms on my campus. Finally, the emergence of generative AI gave me the opportunity, I would even say permission, to finally implement ungrading. 

The ungrading practice I use is essentially a form of what is known as “specs grading,” which is explained in this Inside Higher Ed essay written by Linda Nilson, who is also the author of the book Specifications Grading: Restoring Rigor, Motivating Students, and Saving Faculty Time. In my case, it works like this: when I grade an assignment, I focus on whether or not a student has met the learning objectives for that particular assignment and has attempted to do so in good faith. Once the student and I have together decided that the objectives have been met, they earn all available credit for that assignment. In order to reflect on the learning objectives of the assignment and on whether or not they believe they have met those objectives, students complete a self-evaluation for every assignment they turn in. I provide ample feedback on their work at multiple stages, explaining to students why they have or have not met the learning objectives. If they have not, then I ask them to redo the assignment until they have. It is in this loop of feedback, reflection, and (sometimes) revision where I have found ungrading to be more rigorous, in many ways, than traditional grading. Instead of receiving a low grade for their work, students have to keep working with the material until they have achieved a certain level of facility. In my literature courses, that usually means being specific when talking about the literary work, using well-selected quotes, explaining the significance of those quotes, and telling me what ideas they have developed in response to the literature we are studying. If a student turns something in that is too vague or too broad, they are asked to redo the assignment.

Nothing is AI proof, but this strategy addresses AI in a couple of ways. The goal is to motivate students to engage with the literary texts we are studying, as deeply as possible, even if their engagement is AI-assisted. In my experience with applying restorative justice to plagiarism cases, I have found that students often plagiarize out of fear of getting a bad grade or a lack of confidence, rather than a simple desire to cheat. Removing the fear that traditional grading evokes can be motivating to many students, and can encourage them to take risks with their own work, rather than leaning into the comfort of AI-generated material that, to them, might “sound better” than what they feel they can produce. Another way in which this practice addresses AI is that a lot of the qualities that AI-generated material currently possesses are not qualities that allow a student to receive full credit on an assignment. AI-generated papers on literature tend to be broad overviews of the topic rather than specific engagement with the language and details of the work. Certainly, strong prompt engineering can get around these vague generalities, but students who are over-relying on AI are often not putting in that level of prompt engineering. Therefore, when a student submission is vague, broad, or addresses a work by the author we are studying but not the work we have read in my class, I can ask students to redo their submissions until they are being specific and addressing the more complex themes of the literature in more detail, rather than just giving them a bad grade and leaving it at that.

Ungrading is care because it has had a transformative effect on my relationships with students. By removing the fear of “doing it wrong,” my strongest students have thrived. Students who have had less college preparation are similarly free to express themselves more authentically, rather than spending too much time trying to figure out how their work is “supposed to sound.” I am able to have more meaningful conversations with my students about the material and what it means to them, rather than spending time on formulas like how to write a strong thesis. It levels the playing field to a certain extent; students who haven’t learned how to play the game of university life or speak the language in the exact way an instructor wants them to speak it can earn an A by doing the core aspects of the work instead: reading the literature, thinking about the parts of the literary text that are meaningful to them, and telling me why. Since I started using ungrading, I’ve had students write to the authors we are studying, share pieces of literature we are reading with their families, and relate their own lives to characters’ lives in ways I seldom used to see. Ungrading helps me focus on their ability to choose meaningful evidence from a text and tell me what they have found significant about it, rather than deciding if they’re telling me in a way that fits an accepted academic formula.

Curate and SIFT

Curate and SIFT is a specific assignment that was created for a specific context at U of M–Dearborn, but I want to talk about it here because it is highly relevant for our gen AI era and for an ethics of care. I am writing this blog post at the dawn of the second Trump presidency, when misinformation and disinformation are more rampant than ever and, at times, seem very much to be winning. Curate and SIFT is a direct intervention in this media landscape. And perhaps more importantly, students really love this assignment. We have created a full website for Curate and SIFT, linked here, with an overview of the assignment, sample materials, and updates on a SoTL project about it.

Curate and SIFT was conceptualized and designed by Autumm Caines, lead instructional designer at Dearborn’s Hub for Teaching and Learning, and Anne Dempsey Moussa, the first-year experience librarian at Dearborn’s Mardigian Library. It was created specifically for Dearborn’s Foundations program, linked here, a program for first-year students (FTIACs and transfers) in our College of Arts, Sciences, and Letters. Curate and SIFT is an iterative assignment that uses the SIFT method, a “web literacy” method developed by Mike Caulfield and Sam Wineburg, the authors of Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, linked here, as a starting point. The SIFT method is a break from information literacy methods of the past, which encouraged students to spend extended time with a source and read it closely for signs of reliability and relevance, signs such as an author’s bio or footnotes. In a digital information age, such things as authority and credibility can be easily faked, even more so when information is AI generated and sources are sometimes invented by a bot in their entirety. In such a landscape, Caulfield and Wineburg argue, it is more effective to fact-check sources rather than mine them for clues. SIFT, an acronym for Stop, Investigate the source, Find better coverage, and Trace claims back to their original context, encourages students to get off the page quickly and use trusted resources to verify credibility and veracity. This link will take you to our LibGuide about SIFT, created by Anne Dempsey Moussa.

With the Curate and SIFT assignment, students are introduced to the SIFT method. Then, they collect examples of questionable claims and news stories that they encounter in their everyday lives. They bring these collected examples into the classroom space to share with their peers and explain why these examples caught their eye as problematic in the first place. Then, students practice SIFT on one another’s examples. They repeat this process multiple times throughout the term with guidance and feedback from the instructor. The goal is to make verifying the credibility of a source or claim a habit that is simply ingrained into their daily web/gen AI usage.

Curate and SIFT is care because of the level of engagement it promotes in the classroom. Students are so motivated by being asked to bring in their own examples, and by working on material that is meaningful to them and to others. They love interacting with one another via material they know matters to another person. Going back to the principles of inclusive teaching I attributed to Bryan Dewsbury earlier in this blog post, Curate and SIFT is another example of students bringing their diverse experiences, knowledge, concerns, and expertise into the classroom, which we know promotes a sense of belonging. Curate and SIFT is care because it builds on students’ inherent curiosity about the world. It is care because it is helping them navigate a media landscape that is omnipresent and often scary, overwhelming, disheartening, and confusing. We have been using Curate and SIFT for two semesters so far in the Foundations program. Students are already reporting it as their favorite assignment in a given course and explaining the ways they plan to use it in their lives, learning, and careers outside of our classrooms. Curate and SIFT is already having an impact.

A Word About Trust

I would like to end this blog post with a word about trust in the context of care. Loss of trust is one of the biggest threats AI poses to our classrooms, as these blog posts by Autumm Caines (linked here) and Seth Bruggeman (linked here) articulate. It tempts us to no longer trust our students and the work they do. It tempts them to rely on AI-generated output rather than our lecture notes, reading assignments, and other materials. This threat to trust is causing disengagement on both sides. If we can’t trust our students to do their own work, why bother grading it? Why bother assigning it at all? If they think we don’t care or don’t trust them to try, why wouldn’t they just stop working? AI can do their reading for them and can produce most assignments, so if their professors don’t trust them anyway, why bother? Because loss of trust is such a threat in an AI era, investing in trust is a radical act at this moment. I believe that each of the practices discussed in this blog post constitutes a radical act of trust. If I give you options for demonstrating your knowledge and engagement, I am telling you that I trust you to do this work. If I am giving you full points for completing an assignment in good faith, I am trusting your good faith efforts. If I am trusting you to bring content into the classroom that we will treat seriously in a semester-long assignment, I am trusting you to be thoughtful and respectful in collecting this content. Trust may be the most effective expression of care we have access to right now. We should deploy it as often as we can, in as many ways as we can.
