We are so excited to officially launch our AI Civics program, aimed at building civic strength for the AI era. We're grateful to do so with $2 million in funding from Humanity AI, a collaborative philanthropic initiative dedicated to ensuring that AI serves the public good. And we are so pleased to launch this program in collaboration with @digpublib, which works with American libraries to ensure equitable access to knowledge in the digital age.
As people demand a stake in how AI shows up in our future, AI Civics positions the public not only as AI users, but as civic actors who can and should play a role in steering the course of this consequential technology. Through this program we’ll partner with organizations across a range of sectors that are integral to the health of our communities and civic fabric, beginning with libraries. Over two years, we will lay vital groundwork for a national civic coalition dedicated to ensuring that people and communities know how to influence local decision-making about how AI is used in and around their daily lives.
🔗 /announcements/2026/05/12/data-society-launches-ai-civics-a-public-education-program-to-build-democratic-power-over-technology/
The Trump administration, which rolled back Biden-era AI oversight rules, has now announced its intent to test frontier AI models before they're released to the public. Government oversight of AI is long overdue, but any meaningful approach to securing Americans’ rights and safety will take more than what the administration is signaling. Read our statement.
🔗 /announcements/2026/05/06/our-statement-on-the-white-houses-new-approach-to-ai-oversight/
Amid multiple lawsuits alleging that chatbots have encouraged violence, dangerous delusions, and suicide, AI companies are under growing scrutiny. But addressing the harms these chatbots can cause needs to start at the beginning, with the way they interact with users in the first place, Executive Director Janet Haven tells NPR. “Chatbots are designed to be agreeable and validating, and that can become really dangerous when users are spiraling or contemplating harmful actions,” she says. Policies aimed at increasing safety must address those early design choices, not just their consequences.
🔗 /2026/04/23/nx-s1-5794016/openai-is-under-scrutiny-after-two-mass-shooters-used-chatgpt-to-plan-attacks
In the conversations he’s been having as part of his research on how AI is impacting science, AI on the Ground Program Director Ranjit Singh has repeatedly heard people compare AI systems to graduate students. On Points, he considers how this comparison speaks to a real and consequential shift in how academic work is being reorganized, and the anxieties swirling around it.
🔗 /points/could-this-be-the-last-generation-of-graduate-students/
Join us on May 7 at 2 p.m. ET! If we could start from scratch, how might we reimagine the internet to be more just and equitable? In her new book “Radical Infrastructures: Imagining the Internet from the Ground Up,” Rutgers Associate Professor Britt Paris critically examines and contextualizes the promises, utility, and obstacles to building a completely new internet. In a conversation with Mizue Aizeki, executive director of the Collaborative Research Center for Resilience, moderated by D&S postdoctoral fellow Ana Carolina de Assis Nunes, Paris will reflect on the pitfalls and opportunities of existing internet infrastructure, and consider what it will take to develop and maintain a people-centered internet.
/events/what-if-we-could-rebuild-the-internet/
Data centers threaten our health, our energy grids, and our climate. Billionaire tech CEOs want us to believe these problems will solve themselves through more technology and AI – so Kairos wrote a report to debunk that claim. Join us and a brilliant panel of experts on Tuesday, April 28, at 3 p.m. ET to launch the report, dive into the issues with these false solutions, and explore ways to fight back. Register at the link in bio.
Our panelists are Ketan Joshi, writer and analyst; Maia Woluchem, Data & Society; Matthew Williams, Sustain SJ; and Nicole Sugerman, Kairos Fellowship. The conversation will be moderated by Jane Chung, Justice Speaks.
In The Baffler, Policy Director Brian J. Chen and Jai Vipra argue that the eagerness to applaud Anthropic for its stand against the Trump administration misses the fact that government intervention will be needed to keep the tech industry in check and shape the digital economy *beyond* this administration. “[A] powerful state, making large political investments, is a prerequisite for structural attempts to shape the digital economy,” they write. “If the state looks ‘overbearing’ now, we are very far away from the governance needed to constrain the whims of high technology and take the future into our own hands.”
/latest/cloud-control-chen-vipra
📣 New! In Pennsylvania’s Power: Why Local Authority Is the Key to AI Infrastructure Decisions, Cella Sum and Maia Woluchem argue that efforts to accelerate large-scale AI infrastructure projects while bypassing local governance — in Pennsylvania and beyond — can harm communities. They offer an alternative framework that better aligns statewide infrastructure goals with local needs. “Pennsylvania has many opportunities to lean into sustainable and community-led economic and infrastructure development, rather than relying on corporate leadership,” they write. “A better balance between state and municipal control may allow more of those alternatives to come to fruition.” /library/pennsylvanias-power/
“Twenty bucks for a diet girlfriend is pretty good, right?” joked Trevor, one of the participants in our research on chatbot use for emotional and mental health support. He was talking, of course, about his AI girlfriend, and twenty bucks was the cost of his ChatGPT monthly subscription. As Research Assistant Meryl Ye reflects on Points, Trevor’s metaphor of a “diet” product is illuminating. Chatbots “offer comfort in the moment, but are rarely filling on their own,” she writes. “While they can offer the taste of social connection, AI companions lack the social nutrition that sustains us long-term.” Link in bio.
What makes Pennsylvania perfect. Maia Woluchem and Livia Garofalo of the @dataandsociety Trustworthy Infrastructures program joined us for episode 74 of Mystery AI Hype Theater 3000. They discuss their recent piece “Pennsylvania Is Perfect” for @newinternationalist on the impacts of, and resistance to, the development of data centers in Pennsylvania. #Pennsylvania #techethics #aiethics #community #energy
[video description] Four people are on a video call, each in a separate rectangle. Livia is a white woman with long brown hair, Alex is a brown trans woman with curly black hair, Emily is a cis white woman with glasses and brown curly hair, and Maia is a Black woman with thick-rimmed glasses and brown hair that is pulled back. At the bottom of the video are screenshots of Maia and Livia’s article as well as images of news clips and Pennsylvania’s energy industry.
📣 New today! Alexandra Mateescu, Aiha Nguyen, and Sanjay Pinto argue that the sprint to create the so-called AI-first economy must be understood not as the logical march of progress, but as a series of deliberate economic decisions that risk harming workers in ways both old and new. “As global financial markets and political forces push forward an ‘AI-first’ economy, seemingly with few or no guardrails in place, our fear is that, much like the housing collapse of the early 2000s, when the fallout was borne by homeowners and renters, workers will be the ones left in the lurch,” they write. They offer a framework for the institutional, political, and economic shifts that underpin AI adoption, a step toward developing a clear worker-first agenda that foregrounds human working conditions, not just the advancement of AI industry objectives. Link in bio.