Excited to launch Geni @playgeni today! This is the culmination of everything we’ve learned at MIT about building physical AI experiences, working with families, and, most importantly, how AI can positively impact learning and education. It’s deeply inspired by some of my personal heroes: Froebel, Piaget, and Papert, and their shared belief that children learn best by making. In a world increasingly shaped by AI and passive media consumption, their ideas feel more important than ever.
Thanks for all the support from
@mitdesignx @mitdesignacademy @mitarchitecture @mitsap
At the @hardmodemit hackathon, we tried to rethink what future AI in objects should look like…
When we think of giving AI physical capabilities, the first thought is a humanoid robot that can do everything, but that isn’t cute.
So instead of giving a bunch of new abilities to one device, we tried to buff up each object a little💪, so they can all coordinate and do stuff together in a way that’s slightly more lovely.
Which object is your favorite?
#aiobjects #aihardware #mit #hackathon
One week since Demo Day at HARD MODE: the first Hardware × AI Hackathon at MIT Media Lab.
200+ hackers. 48 hours. 40+ projects across 6 tracks, all reimagining what AI can be.
Teams built projects like a mechanical rubber duck that helps when you’re stuck on code. One team gave an AI a (human) body. Others went SOFT MODE and used clay as an interface. There were plushies powered by AI, an AI that questions rather than answers, new interfaces using Pepper’s Ghost, intelligent objects that live their own lives, and much more…
Huge thanks to our sponsors and judges from Anthropic, Qualcomm, and Akamai. Also Bambu Lab, E14 Fund, Gig Labs, Institute of Foundation Models, and all our amazing volunteers!
More projects, photos, and videos coming soon.
Photo credits:
@hahatango
Jonathan Williams & Paula Aguilera
@cyrusclarke
Thanks to
@aha_medialab @designintelligencelab
hardmode.media.mit.edu
We are part of this year’s Core77 Design Awards jury in the Emerging Technologies category with Aria Xiying Bao @xixixi_aria and Greg Tran @gregtran .
From commercial to cultural, environmental to discursive, the Core77 Design Awards honor excellence across all areas of design enterprise. The Emerging Technologies category encompasses systems, services, research, hardware, or software products created with the aid of recently created or developed software and hardware technologies. Examples include projects that incorporate artificial intelligence (AI), virtual reality (VR), augmented reality (AR), blockchain technologies, robotics, biometrics, advanced materials, or new production processes.
@designatmit @mitarchitecture @mitsap @mitdesignacad @artsatmit
HARD MODE: what a weekend. 🔧🤘🚀
200+ builders. 40+ teams. 48 hours. One mission: prototype the future of AI that helps humans truly flourish.🌟
Engineers, designers, researchers, and makers from across the US came together to build physical systems — things you can hold, wear, share, and interact with. The challenge? Rethink how hardware powered by AI can help humans connect, learn, reflect, work, play, and thrive.
The final demo day was something else. Watching teams present working prototypes across all six categories of human flourishing — in just 48 hours — reminded us exactly why this work matters.
We’re in awe of what this community built. 🥹
A huge thank you to our sponsors for making this possible — including platinum sponsors Anthropic, Akamai, and Qualcomm, who also hosted workshops throughout the weekend. And a special shoutout to Marc Raibert, founder of @bostondynamicsofficial and director of @robotics_and_ai_institute , for bringing Spot along and giving us all a glimpse of where intelligent hardware is headed.
This is what advancing humans with AI looks like. 🤖
#HARDMODEhackathon #MIThackathon #AHA #AdvancingHumansWithAI
Incredible weekend at @hardmodemit hackathon. 200 participants came from all over the world to create new physical forms and behaviors for artificial intelligence.
What if AI wasn’t just an assistant you open in an app?
We propose AI Cohabitants: physical AI companions that live alongside you with their own character and autonomy. Instead of waiting for commands, they quietly exist in your space, learning and interacting over time.
The Stochastic Parrot is a project from the MIT Media Lab and the MIT Design Intelligence Lab.
Link: https://www.media.mit.edu/projects/the-stochastic-parrot/overview/
Come check out HARD MODE to imagine new ways of interacting with AI!
#ai #medialab #mit #aicompanion #hackathon
[Blank] Scope is a pair of AI-powered binoculars that lets users see the world both as it is and as it could be. One lens shows live reality, while the other reveals a real-time, AI-generated transformation based on the user’s chosen time period and cultural perspective. By turning physical dials, users shift between historical visions and speculative futures, blending memory, perception, and imagination into a single, embodied experience.
Created by Chiun Lee @chiunleekim , Qingyun Liu @k_universe_d , and Krystal Montgomery @krystal.montgomery for 4.043/4.044 Interaction Intelligence: a course at MIT taught by Marcelo Coelho @marcelosco and supported by William McKenna, Sergio Mutis @sergiomutis1 , Xdd @realxdd44 , and Quincy Kuang @quincykuang .
@mitarchitecture @mitsap @mitdesignacad @designatmit @artsatmit
#ai #artificialintelligence #genai #llm #largelanguagemodels #llo #largelanguageobjects #vr #ar #mixedreality
Geni made its debut at Toy Fair NYC!
Geni @playgeni is an audio-based storytelling machine that brings imagination to life. This is the first physical AI product to spin off from @designintelligencelab . Geni started as a project in 4.043 Interaction Intelligence, and we couldn’t have made it this far without the amazing support from DesignX @mitdesignx , MAD @mitdesignacademy , and MIT Architecture @mitarchitecture @mitsap @designatmit @artsatmit
Noema is a large language object that segments and reconstructs human perception through spatialized audio experiences such as ambient cues, storytelling, and music. By shifting perception from sight to sound, Noema turns AI into a sensory extension: an inner voice that sees, interprets, speaks, and even questions.
Developed by Melo Chen & Nomy Yu for 4.043/4.044 Interaction Intelligence: a course at MIT taught by Marcelo Coelho @marcelosco and supported by William McKenna and TAs Sergio Mutis @sergiomutis1 and Xdd @realxdd44 .
@mitarchitecture @mitsap @mitdesignacad @designatmit @artsatmit