As Europe prepares to triple its data centre capacity, our latest investigation reveals that key environmental data on individual facilities, including energy consumption and water use, is being withheld from the public under EU law.
The confidentiality clause at the heart of the story was proposed by Microsoft and DigitalEurope, a lobby group whose members include Amazon, Google and Meta, and adopted by the European Commission almost word for word. National authorities were then instructed to keep all individual data centre information secret, effectively blocking freedom of information requests.
Ten leading legal scholars told us the clause may violate EU transparency rules and the Aarhus Convention, an international treaty guaranteeing public access to environmental information.
The result is that communities, researchers, journalists and the wider public are left in the dark about the environmental impacts of Europe’s growing number of data centres.
This investigation was published on our website and with media partners internationally, including The Guardian, Le Monde, El País, Altreconomia, Die Zeit, EUobserver, The Journal, Público and Tech Policy Press.
➡️ Swipe to see the key findings and read the investigation via the link in bio.
A vlog. A joke. A takedown.
What looks like an isolated incident is part of a much larger shift. India’s digital censorship architecture is expanding: faster, broader, and now powered by AI-era urgency. From moral panic to national security to cybercrime, each justification adds a new layer of control, often without transparency or due process.
The question isn’t whether harms exist; they do. The question is how those fears are being used, and who ultimately gets silenced.
Swipe through.
This week, Donald Trump amplified AI-enhanced photographs of eight Iranian women on Truth Social, claiming the Islamic Republic was preparing to hang them. The underlying cases are real, women like Bita Hemmati, who faces a death sentence, and Diana Taherabadi, a 16-year-old held in a juvenile prison. But the packaging was unverified, and within 24 hours the Iranian state responded with AI-generated mockery of its own.
The binary of “AI or real” is no longer adequate to the information space we live in. Real photographs get dismissed as fake. Fabricated images get weaponised. And the women whose cases actually matter disappear.
My latest for @techpolicy.press, building on @witness_org work on synthetic media and human rights. Link in bio.
#AIpolicy #HumanRights #Iran #Disinformation
The Pentagon wants AI that can fight wars — without limits. One of the United States’ leading AI companies says there are lines it won't cross. And this week, that standoff turned into an all-out confrontation.
Listen to the full podcast via the link in the bio to hear Kat Duffy, senior fellow for digital and cyberspace policy at the Council on Foreign Relations, and Amos Toh, senior counsel in the Liberty and National Security Program at the Brennan Center for Justice, discuss the implications of the dispute between Anthropic and the Pentagon.
As artificial intelligence (AI) becomes embedded in state security and surveillance across Europe, the legal safeguards meant to constrain its use are increasingly being left behind. Read more via the link in our bio from Ashwin Prabu and Marlena Wisniak at the European Center for Not-for-Profit Law.
Ramsha Jahangir, Senior Editor at TPP, recently spoke with members of the Digital Services Act (DSA) Observatory, who are organizing the second annual DSA and Platform Regulation conference in Amsterdam. Since the DSA went into effect two years ago across the European Union, the law has been tested by national elections, geopolitical tensions, high-profile enforcement actions, and the rapid rise of generative AI models. Listen via the link in the bio to hear what the first few years of enforcement have shown us across the world.
Read more via the link in our bio from TPP Fellow Jake Laperruque about some of the surveillance tech tools ICE is using to spy on protestors, from electronic location tracking and facial recognition to license plate and vehicle surveillance.
"Operation Metro Surge"—the massive immigration enforcement operation playing out right now in Minnesota—was billed as a targeted effort to apprehend undocumented immigrants. But what it has exposed goes far beyond immigration enforcement.
It has pulled back the curtain on a sprawling surveillance apparatus that incorporates artificial intelligence, facial recognition, and other novel tools—not just to enable the raids that have turned violent and, in some cases, deadly, but also to silence dissent, intimidate entire communities, and discourage people from even watching what masked federal agents are doing in their own neighborhoods.
Hear more about what is unfolding in Minneapolis and beyond from Irna Landrum, a senior campaigner at Kairos Fellowship, and Alejandra Montoya-Boyer, VP of the Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights.
Is social media addictive? And if so, can social media companies be held responsible? Learn more about the upcoming social media addiction trial in the Los Angeles Superior Court from TPP fellow Varsha Bansal.
Read more about how governments around the world are beginning to use AI in the legislative process from journalist Chris Stokel-Walker, author of How AI Ate the World.
The latest DHS AI inventory, released in late January 2026, revealed more than 200 AI use cases currently in deployment or development at DHS, with ICE driving much of this growth by using AI to process tips, review social media and mobile device data, and employ facial recognition tools. Read more about the chilling impacts of such surveillance via the link in the bio.
How can humans retain control over AI systems when crucial decisions are delegated to autonomous machine decision-making?
Virgílio Almeida, Fernando Filgueiras and Ricardo Fabrino Mendonça explore the dangers of increasingly automated AI models, and what we can do to ensure human agency and oversight are protected.