We brought HARD MODE to Code with @claudeai
Anthropic invited 6 of the best projects from the first @hardmodemit hackathon to showcase physical AI work at their flagship event in SF.
Next up, the projects will travel to Tokyo and London. Wild to see the work of this community already travelling. This is just the beginning ⚙️
We built a holographic Polaroid that turns everyday objects into AI characters.
Point Lumière at your mouse, your keys, your AirPods, or anything sitting on your desk, and it “develops” that object into a one-of-a-kind 3D character with its own voice, personality, and backstory.
A few minutes later, that character appears as a hologram inside the camera.
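For the technically curious: the “develop” step is, in spirit, one vision call. A minimal sketch of the idea in Python, assuming the Anthropic SDK (the prompt, model name, and function here are illustrative, not our production pipeline):

```python
# Sketch: "develop" a snapshot of a desk object into a character profile.
# Assumes the Anthropic Python SDK and ANTHROPIC_API_KEY in the environment.
import base64
import anthropic

client = anthropic.Anthropic()

def develop_character(jpeg_path: str) -> str:
    with open(jpeg_path, "rb") as f:
        image_b64 = base64.standard_b64encode(f.read()).decode()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # any vision-capable Claude model
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": image_b64}},
                {"type": "text",
                 "text": "Invent a character for this object: a name, a "
                         "personality, a voice description, and a short "
                         "backstory. Return JSON."},
            ],
        }],
    )
    return msg.content[0].text  # feed this to TTS + the hologram renderer

print(develop_character("airpods.jpg"))
```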
Built in 48 hours at @hardmodemit with @nil_yelnats, @a_lint_roller, Tiger Jewell-Alibhai, and @luv.melo
What would you point it at first?
#MITHardMode #Lumiere #AICharacters #HardwareHack #CreativeTechnology
We built a modular AI toy driven by real-world play and imagination at @hardmodemit AI x Hardware Hackathon! 🚀 👾 ☄️
Embodied Blocks lets you build any object you want...then give it intelligence. It tells a story based on how it’s played with and what it can sense about the physical world. Each configuration of 3D-printed modular blocks and sensing components collects play data (audio, image, motion), then processes it into narrative feedback via the Claude API, encouraging tangible play and perception as a mode of storytelling.
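For the curious, the narrative loop is conceptually tiny: sensor readings in, story beat out. A simplified sketch assuming the Anthropic Python SDK (the sensor values and prompt are stand-ins, not the real firmware):

```python
# Sketch: turn one round of block "play data" into a story beat via Claude.
# Sensor values here are illustrative stand-ins for the real block streams.
import anthropic

client = anthropic.Anthropic()

def story_beat(play_data: dict, story_so_far: str) -> str:
    prompt = (
        f"You narrate a toy's adventures. Story so far: {story_so_far}\n"
        f"New sensor readings from the blocks: {play_data}\n"
        "Continue the story in two playful sentences."
    )
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

beat = story_beat(
    {"motion": "shaken twice", "sound_level": "loud", "camera": "sees a cat"},
    story_so_far="The block creature woke up on a desk.",
)
print(beat)
```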
[2026 MIT Hard Mode Play track winner 🏆 ]
Thanks to sponsors @bambulab_official, @claudeai and organizers @aha_medialab, @designintelligencelab, @marcelosco, @quincykuang, @cyrusclarke
Wassup team @swarsahgal, @kelili_, @demi.hu, @jimmych0604, @alex_l099, @adriantsao
embodiedblocks.com
Excited to launch Geni @playgeni today! This is the culmination of everything we’ve learned at MIT about building physical AI experiences, working with families, and, most importantly, how AI can positively impact learning and education. It’s deeply inspired by some of my personal heroes: Froebel, Piaget, and Papert, and their shared belief that children learn best by making. In a world increasingly shaped by AI and passive media consumption, their ideas feel more important than ever.
Thanks for all the support from
@mitdesignx, @mitdesignacademy, @mitarchitecture, @mitsap
We built an AI rubber ducky with @claudeai that codes for you (and won the Anthropic prize at @hardmodemit AI x Hardware Hackathon) 🐤🐤🐤
Programmers talk to rubber ducks.
Ours talks back—and fixes your code.
An AI-powered mechanical duck that pecks at your keyboard, constantly making and correcting mistakes in the real world.
A physical experiment in “vibe coding”:
Can AI adapt, re-plan, and recover—like humans do?
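The loop behind the pecking, in spirit: observe the screen, ask Claude for the next keystroke, peck, repeat. A stripped-down sketch with the hardware calls as placeholders (this isn’t the actual duck code):

```python
# Sketch of the duck's act-observe-replan loop.
# peck() and read_screen() are placeholders; the real build drives a
# mechanical duck over a physical keyboard.
import anthropic

client = anthropic.Anthropic()

def peck(key: str) -> None:
    """Placeholder: command the duck's actuator to strike one key."""
    print(f"duck pecks: {key!r}")

def read_screen() -> str:
    """Placeholder: capture/OCR of what actually ended up on screen."""
    return "def helo():"  # pecks miss sometimes

goal = "Write `def hello():` in the editor."
for _ in range(5):  # bounded retries
    observed = read_screen()
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": f"Goal: {goal}\nScreen shows: {observed!r}\n"
                       "Reply with the single next key to press, "
                       "or DONE if the goal is met.",
        }],
    )
    action = msg.content[0].text.strip()
    if action == "DONE":
        break
    peck(action)
```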
Thanks to @aha_medialab, @designintelligencelab, and @mitmedialab for hosting the hackathon
And @cyrusclarke, @quincykuang for organizing
Check our Devpost for project details:
/software/rubber-ducky-0eukpb
#ai #hci #mit #hardware
[Hardmode Hackathon 2026 - Project Highlight] meet [AI]legory of the Cave!
In Plato’s Allegory of the Cave, prisoners chained to a wall perceive only shadows—and mistake them for reality. We extend this concept in our piece Plato’s [AI]legory of the Cave, an interactive installation featuring an LLM-controlled character who exists only as a shadow. The piece begins dark. A physical light switch invites interaction; flipping it on illuminates the stage and awakens the shadow man. His singular goal is to turn off the light that defines his existence. Humans place objects in his path—blocks, frames, and stairs—to obstruct him. He kicks, pushes, sometimes gives up, then begins again. The cycle repeats as he discovers changes to his world that he cannot comprehend.
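Mechanically, the shadow man is a small agent loop: the LLM gets a text description of his 2D world and picks one move. A conceptual sketch assuming the Anthropic Python SDK (the state fields and action names are illustrative, not the installation’s code):

```python
# Sketch: the shadow man's decision loop. World state and actions are
# illustrative; the installation renders his choice as a projected shadow.
import anthropic

client = anthropic.Anthropic()
ACTIONS = ["walk_left", "walk_right", "kick", "push", "give_up", "rest"]

def next_move(world: dict) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=50,
        messages=[{
            "role": "user",
            "content": (
                "You are a shadow whose only goal is to turn the light off.\n"
                f"Your world, as silhouettes: {world}\n"
                f"Choose exactly one action from {ACTIONS}."
            ),
        }],
    )
    choice = msg.content[0].text.strip()
    return choice if choice in ACTIONS else "rest"

move = next_move({"light": "on",
                  "obstacles": ["block", "stairs"],
                  "switch": "far right"})
print(move)
```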
Concept
In Plato’s Allegory of the Cave, prisoners are chained facing a wall, seeing only shadows cast by objects behind them. They mistake these shadows for reality, never comprehending the three-dimensional world that produces them. The shadow man is such a prisoner: confined to 2D, he perceives silhouettes but never their source.
This mirrors the perceptual limits of AI. An LLM cannot see. It can only read descriptions of sight. It processes language about the world without directly accessing the world itself. The shadow man embodies this gap. Both are confined to representations.
The piece is a meditation on AI as cave-dweller: processing patterns without grasping the higher-dimensional reality that casts them.
Collaborators: @cathyfang.my, @skye.gao.xy, Sam Chin, Leonie Dong, Alejandro Lopez-Rodriguez
Learn more about the project here: https://samch.in/projects/allegory-of-the-cave.html
[Hardmode Hackathon 2026 - Project Highlight] meet Human Operator!
Human Operator is a human augmentation tool that allows AI to briefly take control of your body to help you learn and do things you normally cannot do. It uses a vision-language model for human motor control through Electrical Muscle Stimulation (EMS): open-ended speech input and camera vision are interpreted via the Claude API, which generates the commands that drive finger and wrist stimulation for intuitive on-body interaction.
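In rough terms: speech (plus what the camera sees) goes to Claude, and Claude’s answer selects which EMS channel fires. A simplified, speech-only sketch, with the channel map and EMS driver as placeholders (real EMS needs careful calibration and safety limits):

```python
# Sketch: map a spoken request to an EMS channel via Claude.
# CHANNELS and stimulate() are illustrative placeholders, not real drivers.
import anthropic

client = anthropic.Anthropic()
CHANNELS = {"index_finger": 0, "middle_finger": 1, "wrist_flex": 2}

def stimulate(channel: int, ms: int) -> None:
    """Placeholder for the EMS driver: pulse one electrode pair."""
    print(f"EMS pulse on channel {channel} for {ms} ms")

def act_on_speech(transcript: str) -> None:
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model name
        max_tokens=30,
        messages=[{
            "role": "user",
            "content": (
                f"User said: {transcript!r}. Which muscle should move? "
                f"Answer with one of {list(CHANNELS)}."
            ),
        }],
    )
    target = msg.content[0].text.strip()
    if target in CHANNELS:
        stimulate(CHANNELS[target], ms=200)

act_on_speech("help me press the spacebar")
```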
🏆 Winning project at MIT Hard Mode 2026 (Learn Track) 🏆
Team: @valleballemis, @peter.he_, @ashleyneall, @yutongwu_hallie, @danielfromamsterdam, @seanhhlewis
Learn more about the project here: /danielkaijzer/Human-Operator/
One week since Demo Day at HARD MODE: the first Hardware × AI Hackathon at MIT Media Lab.
200+ hackers. 48 hours. 40+ projects across 6 tracks, all reimagining what AI can be.
Teams built projects like a mechanical rubber duck that helps when you’re stuck on code. One team gave an AI a (human) body. Others went SOFT MODE and used clay as an interface. There were plushies powered by AI, an AI that questions rather than answers, new interfaces using Pepper’s Ghost, intelligent objects that live their own lives, and much more…
Huge thanks to our sponsors and judges from Anthropic, Qualcomm, and Akamai. Also Bambu Lab, E14 Fund, Gig Labs, Institute of Foundation Models, and all our amazing volunteers!
More projects, photos, and videos coming soon.
Photo credits:
@hahatango
Jonathan Williams & Paula Aguilera
@cyrusclarke
Thanks to
@aha_medialab, @designintelligencelab
hardmode.media.mit.edu
The first Hardmode: MIT AI and Hardware Hackathon has officially come to a close.
Thank you to everyone who helped organize and participated in the hack. It was an unforgettable experience for us, and we hope it was the same for everyone who joined.
We would like to thank MIT Media Lab, the MIT Morningside Academy for Design, and the broader MIT community, along with the faculty and student organizers who helped make this event possible.
A huge thank you as well to our sponsors for their generous support:
Anthropic, Akamai, Qualcomm, Bambu Lab, E14, Institute of Foundation Models, igus, HRT, Seeed Studio, Tensorent, Snap, Viam, Bootloop, Murata, and NVIDIA.
We are incredibly grateful for the community that came together to make this event happen.
Hopefully we will see you next year!
#mit #hackathon #ai #medialab #hardware
48h. 200+ participants. 40+ projects.
One mission: to reimagine what AI can be.
This weekend we hosted the first HARD MODE: Hardware x AI Hackathon at MIT Media Lab.
No apps. No SaaS. Hackers came from around the globe to build intelligent physical systems that take AI out of the screen and into the real world.
Supported by Anthropic, Qualcomm, Akamai, Bambu Lab, and more. Special shout-out to Marc Raibert and RAI, who brought down the house with technical fun.
Thank you to every builder, mentor, volunteer, visitor, and sponsor who made this real.
—HARD MODE
⚙️
Video credit:
Footage: @johary2shots
Edit: @cyrusclarke and @johary2shots