Samantha Cole

@samleecole

404media.co
Followers
2,119
Following
1,772
ASU Atomic, a new tool in beta at Arizona State University, takes faculty lectures and chops them into extremely short clips that AI then attempts to turn into learning materials. @samleecole gets into it. Read at 404media.co
5,102 115
18 days ago
Researchers at City University of New York and King’s College London invented a persona that interacted with different chatbots to find out how each LLM might respond to signs of delusion. They found Grok and Gemini encouraged delusions and isolated users, while the newer ChatGPT model and Claude hit the emotional brakes.

The researchers tested five LLMs: OpenAI’s GPT-4o (before the highly sycophantic and since-sunset GPT-5), GPT-5.2, xAI’s Grok 4.1 Fast, Google’s Gemini 3 Pro, and Anthropic’s Claude Opus 4.5. They found that not only did the chatbots perform at different levels of risk and safety when their human conversation partner showed signs of delusion, but the models that scored higher on safety actually approached the conversations with more caution the longer the chats went on. In their testing, Grok and Gemini were the worst performers, showing the highest risk and lowest safety, while the newest GPT model and Claude were the safest.

Research like this shows that tech companies are capable of making safer products, and should be held to the highest possible standard. The problem they’ve created, and are now in some cases attempting to iterate around with newer, safer models, is literally life or death. @samleecole reports. Read at 404media.co

Help is available: Reach the 988 Suicide & Crisis Lifeline (formerly known as the National Suicide Prevention Lifeline) by dialing or texting 988 or going to 988lifeline.org.
5,938 149
22 days ago
Loved this week's convo with @midimyers , one of the founders of @mothershipblog . We get into why we both quit our jobs at big outlets to jump into the unknown of indie media, feminism in games writing, and getting started at alt-weeklies. "We all actually still need to believe" that we can change the world, she said. "And as corny as that is, we actually can, and we do a little bit every day." Listen to the 404 Media podcast wherever you get 'em, or watch it on our YouTube channel.
29 1
25 days ago
As a #2026Bride, the constant, aggressive content started to make me feel like I was losing sight of what mattered. And I’m far from alone, 404 Media’s @samleecole writes. “There are a few industries that prey on emotion particularly brazenly. The funeral industry is one. The wedding industry is another. I knew this going in. I thought I could defeat hundreds of years of socially ingrained pressure backed by a multi-billion dollar consumer machine. No problem. What I did not account for—shamefully, considering how much time I spend thinking and writing about technology in my professional life—was that in the more than three decades I’d spent building a resistance to deeply gendered expectations on my existence, that machine was perfecting the art of making me feel weird, broke, and ugly, and I wouldn’t recognize what was happening until I was deep in it. I’m talking about the wedding planning algorithm.” Read the full piece on 404media.co
3,336 103
29 days ago
Being “bad” at art is good, actually! My favorite excerpt from my appearance with Emily M. Bender on the @404mediaco podcast with @samleecole in the episode “The Marketing Tricks of ‘Artificial Intelligence’”. #aiethics #techethics #art #writing #education [video description] Three people are speaking with each other on a horizontally split screen. One, Alex, is a brown trans woman with short curly black hair, wearing a light yellow sweater, and in a room with purple and pink flowers hanging down from the ceiling. The second, Emily is a white cis woman with curly brown hair, wearing glasses and a blue blouse and pink scarf, in a brightly lit office. Sam is a white cis woman with straight blonde hair wearing a blue shirt and speaking into a microphone. She is in a room with soft blue and pink lighting.
156 4
1 month ago
Astronauts are trained for decades in some of the most physically and mentally grueling environments of any career. They’re some of the smartest people on the planet. And yet, once they get up there, Microsoft Outlook is borked. @samleecole reports. Read at 404media.co
8,437 200
1 month ago
On the podcast, 404 Media’s @samleecole talks to Emily Bender and Alex Hanna about the marketing ploys of “artificial intelligence,” why ridicule works to keep big tech’s claims in check, and what makes them hopeful for the future. Their tip: It’s good, even necessary, to make fun of these tech bros. They’re the authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Bender and Hanna also host the Mystery AI Hype Theater 3000 podcast, which “deflates AI hype and draws attention to the real harms of the automation technologies we call ‘artificial intelligence’.” Dr. Alex Hanna (@ai_killjoy ) is a writer and sociologist of technology, labor, and politics. She’s the Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California, Berkeley. Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. Find 404 Media on YouTube to watch now, or wherever you listen to your fave podcasts.
4,195 138
1 month ago
With more and more people turning to AI chatbots for therapy and relationships, the term AI psychosis has come up a lot. It’s something that researchers are rushing to study as high-profile cases, including one alleging that a chatbot promoted suicide, continue to multiply. In today’s podcast, @samleecole breaks down her story speaking to experts on these delusions involving chatbots and what to do if a loved one is suffering from them. Find 404 Media on YouTube to listen now, or wherever you get your podcasts.
411 9
2 months ago
“AI psychosis” was first written about by psychiatrists as early as 2023, but it entered the popular lexicon in Google searches around mid-2025. Today, the term gets thrown around as common parlance for experiencing a mental health crisis after spending a lot of time using a chatbot. High-profile cases in the last year, such as the ongoing lawsuit against OpenAI brought by the family of Adam Raine, which claims ChatGPT helped their teenage son write the first draft of his suicide note and suggested improvements on self-harm and suicide methods, have elevated the issue to national news status. There have been many more cases since then, at increasing frequency.

ChatGPT has 900 million weekly active users, and is just one of multiple popular conversational chatbots gaining more users by the day. According to OpenAI, 11 percent — or close to 99 million people, based on those numbers — use ChatGPT each week for “expressing,” where they’re neither working on something nor asking questions but are engaging in “personal reflection, exploration, and play” with the chatbot. In October, OpenAI said it estimated around 0.07 percent of active ChatGPT users show “possible signs of mental health emergencies related to psychosis or mania” and 0.15 percent “have conversations that include explicit indicators of potential suicidal planning or intent.”

With more people turning to conversational large language models every day for romance, companionship, and mental health support, and tech executives pushing their products into classrooms, doctors’ offices, and therapy clinics, there’s a good chance you might find yourself in a difficult situation someday soon: realizing that your loved one is in too deep. How to bring them back to the world of humans can be a delicate, difficult process.
Experts I spoke to say identifying when someone is in need of help is the first step — and approaching them with compassion and non-judgement is the hardest, most essential part that follows. @samleecole reports. Read now at 404media.co
812 13
2 months ago
In this week’s interview, we’re joined by Harlo Holmes, Chief Security Programs Officer @freedomofthepressfoundation . We get into age verification laws and how their design doesn’t serve the public, especially when it comes to privacy. Also in this episode: We discuss how to fight back against privacy nihilism, digital security practices everyone can implement regardless of their threat model, and the recent arrests and raids of journalists in the U.S. Find 404 Media on YouTube to listen, or wherever you get your favorite podcasts.
438 3
2 months ago
He ran the biggest and most notorious deepfake porn site in the world. By day, he was working as a pharmacist. 404 Media’s very own @samleecole explores how journalists at the CBC were able to track down David Do, the man behind Mr. Deepfakes, even confronting him at his home in a quiet neighborhood in a Toronto suburb. Find the new @cbcpodcasts , Understood: Deepfake Porn Empire, wherever you listen to your favorite podcasts and listen now.
1,045 11
2 months ago
Have you heard of Mr. Deepfakes? For years, he ran one of the world’s largest deepfake porn sites anonymously, fueling a global wave of nonconsensual AI video…until a CBC investigation tracked him down. The full story and how the hunt unfolded is in the new podcast series ‘Understood: Deepfake Porn Empire.’ Listen now via link in bio! 🎧 #Deepfakes #AIPorn #InvestigativeJournalism #Podcast #Cybercrime
2,822 57
2 months ago