Quick Bites - Critical Tech Articles, Pods, and Blogs

By Marie K. Heath

I value our Civics of Tech book clubs and a long, deep dive into an idea or theory. But I also appreciate more byte-sized (sorry, couldn’t resist) dips into critical technology. Today’s blog offers appetizer summaries and links to what we’ve been reading and listening to recently around critical technology, education, machine learning, and more. Please share what you have been reading with us in the comments or on Twitter @civicsoftech! 

At This School, Computer Science Class Now Includes Critiquing Chatbots

By Natasha Singer

Across the United States, universities and school districts are scrambling to get a handle on new chatbots that can generate humanlike texts and images. But while many are rushing to ban ChatGPT to try to prevent its use as a cheating aid, teachers like Ms. Shuman are leveraging the innovations to spur more critical classroom thinking. They are encouraging their students to question the hype around rapidly evolving artificial intelligence tools and consider the technologies’ potential side effects.

The aim, these educators say, is to train the next generation of technology creators and consumers in “critical computing.” That is an analytical approach in which understanding how to critique computer algorithms is as important as — or more important than — knowing how to program computers.

Why We Must Resist AI with Dan McQuillan

Tech Won’t Save Us Podcast

Paris Marx is joined by Dan McQuillan to discuss how AI systems encourage ranking populations and austerity policies, and why understanding their politics is essential to opposing them. Dan McQuillan is a Lecturer in Creative and Social Computing at Goldsmiths, University of London. He’s also the author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence. You can follow Dan on Twitter at @danmcquillan.

“We ‘Said Her Name’ and Got Zucked”: Black Women Calling-Out the Carceral Logics of Digital Platforms

By Kishonna L. Gray and Krysten Stein

Scholars have grown concerned about the increasing carceral logics embedded in social media practices. In this essay, we explore the process of getting “zucked” as a trend within digital platforms that disproportionately punishes minoritized digital users. Specifically, Black women report that with the advent of increased safety measures and policies to secure users on digital platforms, they become subject to the harms of those institutional practices. By extending the conversation on carcerality beyond the confines of prisons, jails, and other forms of criminal justice supervision, we argue that structures and institutions expand the lines of surveillance and that those traditionally subject to such harm continue to be affected. Although the concept of getting “zucked” might seem like an innocent response to individuals who violate terms of service, Black women suggest that this practice disparately targets them for speaking about racist and sexist incidents on- and offline. Such surveillance is misogynoir in public spaces, as Black women are punished for organizing on social media.

Here are the Stadiums That are Keeping Track of Your Face

By Georgia Gee

Facial recognition technology is in high demand by sports teams. A 2021 study of 40 venue directors representing teams from Major League Baseball, Major League Soccer, the National Basketball Association, the National Football League, and the National Hockey League indicated that the software was at the top of venues’ wish lists.

Christian Lau, chief technology officer of the Los Angeles Football Club and BMO Stadium, told the Wall Street Journal in 2020: “Our plan is to move everything to face.” The following year, BMO began using facial recognition technology from California-based Alcatraz AI. The company’s “Rock” system can also be used to ascertain whether certain people should be allowed to enter specific spaces, including medical facilities.

A More Perfect Human

Throughline Podcast

Today, AI is everywhere. Breakthrough technologies like ChatGPT make news, while less glamorous but more ubiquitous programs are woven into every part of our lives, from dating apps to medical care. In many ways, AI is the invisible architecture of modern life. It's a reality that's both mundane and terrifying. And it's accelerating at a rapid rate, even as we still grapple with some of the most fundamental questions it raises about what, if anything, is uniquely human. In this episode, we explore the tension between our love of AI and our fear of it — and try to decode the humans behind the machines.

SCOTUS on the Internet: It’s Complicated

Slate Amicus Podcast

Everybody talks about Section 230 and nobody quite knows what it does. And Section 230, this is a juggernaut. It’s been a long time coming here at the Supreme Court. It’s described as, quote, the 26 words that created the Internet. Can you just walk us through, please, as though we’re seven, what Section 230 was designed to do, why it became the bête noire of particularly conservatives, but across the spectrum what the purpose was, because I think that now we talk about it as though it was just designed to immunize Internet companies from everything. And that’s not right.

Racial Justice Amidst the Dangers of Computing Creep: A Dialogue 

By Niral Shah and Aman Yadav

Why do we actually want people to learn computing? Will more people learning computing actually advance racial justice? If not, why are we calling for more computing education? We hope that our dialogue here provokes a broader conversation in the field around these questions.
