Center for Humane Technology

@humanetech

A better future with technology is possible. CHT is dedicated to ensuring that today's most consequential technologies actually serve humanity.
Followers
9,145
Following
29
The wait is over! Starting today, May 15, you can stream @theaidocfilm on @peacock This film takes the dizzying complexity of AI — the promise, the peril, the competing ideologies, the economic incentives — and creates a shared experience we can all see and respond to.
746 39
1 day ago
Center for Humane Technology’s co-founder, Tristan Harris, lays out his five-point plan to stop the arms race for attention, protect children from the perils of the online world, and use social media to promote the discovery of love and unity rather than loneliness and division. Full interview on the @mostlyhumanmedia YouTube, @iheartradio, and wherever you get your podcasts. Link in bio.
1,259 47
6 days ago
The heads of leading AI companies have promised that superintelligent AI will cure cancer. Physician, public health researcher, and futurist Dr. Emilia Javorsky argues that this promise is not only false — it's siphoning resources from research and technology that might actually make a difference. Read our producer Julia Scott's key takeaways from our conversation with Dr. Javorsky at the substack link in our bio.
67 1
8 days ago
"Mythos is the clearest evidence yet that our system for developing, assessing, and disseminating powerful AI systems is dangerously dysfunctional." CHT Executive Director Julie Guirado has a new op-ed for the Persuasion substack today on what the Mythos story reveals about the dangerous, broken incentives of the AI race — and why we can't rely on the goodwill of a handful of self-interested companies to rein in such a powerful technology. Read Julie's full piece here: munity/p/the-case-for-ai-regulation
55 6
13 days ago
"AI will cure cancer." It's one of the promises driving the race to superintelligence. And it carries a brutal logic: if superintelligent AI really can cure cancer, then anyone trying to slow it down — even over serious risks — is letting people die. But what if the premise is wrong? On this week's episode of Your Undivided Attention, Dr. Emilia Javorsky — physician, public health researcher, and director of the Futures Program at the Future of Life Institute — explains why superintelligence won't get us to a cancer cure. And how AI could revolutionize medicine, if we choose a different path. Check it out at the Substack link in our bio.
44 1
16 days ago
"We measure AIs to see whether they can pass a bar exam, write working code, and use your computer interface. We test to see how good they are at completing complex tasks, or just impressing humans. What we don’t have is a rigorous, credible way of evaluating what AI does to us. To our minds, our thoughts, or our communities." In our latest Substack post, Imran Khan — who leads our work on psychosocial evaluations of AI — explains why we need better AI benchmarks that focus not on raw capability but on how AI is impacting users. Identifying how tech interacts with and changes human nature is the critical first step towards building more humane technology. You can read his full piece at the substack link in our bio.
61 1
19 days ago
Exciting news! @theaidocfilm is now available to buy or rent. If you missed the theatrical release, you can now see this critically important film at home. Find it wherever you get your movies on demand. Invite your friends and family for a movie night — it's meant to be watched together.
28 3
1 month ago
"What we're striving for here is pretty simple. We believe that AI should be built to augment human labor, not replace it." In the latest episode of Your Undivided Attention, Pete Furlong and Camille Carlton from our policy team dive deep into The AI Roadmap, CHT's most robust set of AI solutions to date. Here's Pete talking about our vision for the future of AI and work. Listen to the full conversation wherever you get your podcasts and read the AI Roadmap at humanetech.com/ai-roadmap.
46 5
1 month ago
It can feel like the people building AI have already decided our future and the rest of us are just along for the ride. There's a natural feeling that the path we're on is inevitable. But it's not true: society has steered powerful forces before, and we can do it again. On the latest episode of Your Undivided Attention, Camille Carlton and Pete Furlong from our policy team walk through CHT's AI Roadmap: our report outlining seven core principles for how AI should be built, deployed, and governed. Each principle is grounded in real, actionable solutions across three domains — norms, laws, and product design — high-impact interventions across government, industry, and civil society that build on top of each other. Check out the full discussion wherever you get your podcasts and read the report at humanetech.com/ai-roadmap.
65 1
1 month ago
"The world we're heading towards [with #AI] is good for a handful of soon-to-be trillionaires. It's not good for regular people." – @humanetech co-founder @tristanharris, on the March 20, 2026 episode of #RealTimeHBO. See @theaidocfilm in theaters and join #TheHumanMovement @ human.mov
3,681 110
1 month ago
@tristanharris went on GPS with @fareedzakaria to talk about the recent verdicts against Meta and YouTube for the harmful, addictive designs of their platforms. Meta and YouTube were incentivized to choose unsafe product designs because of the race to capture our attention. We are facing a similar race with AI today. And it is critical that we learn from the social media era and establish rules of the road that ensure real accountability for AI products before irreversible harms occur. That’s why we’re releasing The AI Roadmap, a report outlining seven principles for how AI should be built, deployed, and governed. You can read the whole report on our website: humanetech.com/ai-roadmap
166 4
1 month ago
I went on GPS with @fareedzakaria to talk about the recent verdicts against Meta and YouTube for the harmful, addictive designs of these platforms. These design choices were the results of incentives, of a race to capture our attention. The incentives behind AI are much more dangerous. Instead of a race for attention, it’s a race to build a God, own the $50 trillion world economy, and build intelligence that can replace all of human labor – and if I don't get there first, I'll permanently lose. If you thought social media's harms were bad, we haven’t seen anything yet. We need to learn from the social media era and establish rules of the road that would ensure real accountability before the harms occur, not a decade later. That’s why @humanetech is releasing "The AI Roadmap" (in the interview I called it 'Solutions Report'), a report outlining seven principles for how AI should be built, deployed, and governed. You can read the whole report on our website: humanetech.com
1,359 30
1 month ago