Robert Hodgin

@roberthodgin

Artist and Head of Research & Development at Rare Volume, Brooklyn 🏳️‍🌈
Followers: 6,519
Following: 661
Weekend experimentation. This video (https://lnkd.in/eW-mFc6T) from the Prix de Lausanne 2026 came across my feed and it was mesmerizing. It features Dayeon Yeom doing a contemporary dance performance and winning the audience favorite award. I was curious about using AI workflows to see if I could extract enough meaningful movement information to do something, anything, with the data.

The first challenge was that the video features two very different camera angles: one locked camera showing the full stage, and a close-up tracking camera. I used YOLOv8 and Python to turn it into a consistent tracking shot with bounding-box matching between cut frames, and then ran it through Topaz Video AI to upres it as best it could. The results weren't perfect but were decent enough to continue. Then I used RTMPose with temporal smoothing for pose estimation, generated a clean silhouette mask with Robust Video Matting, and derived per-joint velocity vectors and a skeleton mapping, which I saved out to CSVs.

In TouchDesigner, I reconstructed the skeleton with scaled points and encoded the velocity as RGB color. I converted that into a 3D texture volume and used it to drive a fluid/smoke sim in real time.

Ultimately, the original video is much more compelling, but it was an interesting lesson in what is and isn't possible with current AI workflows. The biggest issue was that I wasn't able to get a clean 3D track so that I could simulate the dancer's movement in 3D space. The stage and backdrop were too featureless to get usable data. I tried a few things (MotionBERT 3D pose lifting, enhanced floor/backdrop tracking with optical flow and CLAHE, foot-anchor dead reckoning [this was a longshot], and DVD depth estimation). And ultimately the motion blur (and non-traditional body poses) caused some issues with skeleton alignment. It worked for 99% of the dance, but there were a few hiccups around full-body spins and unusual floor poses.
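Most of that pipeline is model-specific, but the velocity/CSV step is simple enough to sketch. Below is a minimal, hedged version of the per-joint velocity derivation and export, assuming the pose pass has already produced a (frames, joints, 2) keypoint array; the frame rate, synthetic stand-in data, and column layout are illustrative, not the project's actual schema.

```python
# A minimal sketch of the per-joint velocity step, assuming the pose pass
# has already produced keypoints of shape (frames, joints, 2) in pixels.
# FPS and the CSV layout are illustrative, not the project's actual schema.
import csv
import numpy as np

FPS = 30.0                                       # assumed source frame rate
# Stand-in for the pose output; in practice this comes from the video.
keypoints = np.random.rand(300, 17, 2) * [1920.0, 1080.0]

# Central-difference velocity per joint, in pixels/second.
vel = np.gradient(keypoints, 1.0 / FPS, axis=0)

with open("joint_velocities.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["frame", "joint", "x", "y", "vx", "vy"])
    for t in range(keypoints.shape[0]):
        for j in range(keypoints.shape[1]):
            w.writerow([t, j, *keypoints[t, j], *vel[t, j]])
```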
122 likes · 3 comments · 2 months ago
A 2nd demo I created for @datlabnyc to show off some basic flocking logic that I use in the audio vis for the 1.5M particles. Created in @touchdesigner using Python.
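The post doesn't include the network itself; a minimal numpy sketch of the standard boids rules (separation, alignment, cohesion), which is one reasonable reading of "basic flocking logic," is below. The neighbour radius, weights, and speed limits are illustrative guesses, and the actual piece runs far more particles on the GPU.

```python
# A minimal numpy boids sketch: separation, alignment, cohesion.
# All parameter values are illustrative.
import numpy as np

N, RADIUS, DT = 400, 0.15, 1 / 60
pos = np.random.rand(N, 2)                      # positions in the unit square
vel = (np.random.rand(N, 2) - 0.5) * 0.2

def step(pos, vel):
    diff = pos[:, None] - pos[None, :]          # pairwise offsets, (N, N, 2)
    dist = np.linalg.norm(diff, axis=-1)
    near = (dist < RADIUS) & (dist > 0)         # neighbour mask, self excluded
    count = np.maximum(near.sum(1, keepdims=True), 1)
    separation = (diff * near[..., None]).sum(1) / count  # push away from neighbours
    alignment = (near @ vel) / count - vel                 # match neighbour velocity
    cohesion = (near @ pos) / count - pos                  # steer toward local centroid
    vel = vel + 1.5 * separation + 0.5 * alignment + 0.3 * cohesion
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel / np.maximum(speed, 1e-6) * np.clip(speed, 0.05, 0.3)  # clamp speed
    return (pos + vel * DT) % 1.0, vel          # wrap around the unit square

for _ in range(600):                            # ~10 seconds at 60fps
    pos, vel = step(pos, vel)
```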
167 likes · 3 comments · 2 months ago
A demo I created for @datlabnyc to show off the inner logic behind the Magnetosphere audio vis. Created with @touchdesigner using Python.
325 likes · 17 comments · 2 months ago
Dust. Exploring subtle installation experiences. I've always loved seeing dust motes floating in sunbeams, so I decided to try to reproduce that experience in TouchDesigner using @josefpelz's T3D (Texture 3D) operators as well as PCR (Point Cloud Rendering) for GLSL bokeh effects. The project is a medium-res fluid sim that I can add velocity to via webcam and optical flow. The source image is run through monocular depth estimation to get a depth map, so I can have a rough version of the original source image as an extruded 3D mesh. I tried using this mesh as an obstacle for the fluid, but this sometimes causes dust particles to adhere to the obstacle bounds, so I have disabled that feature until I can come up with a better solution. #touchdesigner #dust #glsl
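As a rough illustration of the webcam-to-velocity step, here is a minimal OpenCV sketch that derives a dense optical-flow field and downsamples it to a sim-sized grid. The post doesn't name the flow operator, so Farneback flow is an assumption, as is the grid resolution; the resulting 2-channel field is what would be injected into the fluid sim's velocity each frame.

```python
# A minimal OpenCV sketch of the webcam -> optical flow -> sim velocity
# step. Farneback flow and the grid size are assumptions.
import cv2

SIM_W, SIM_H = 128, 128                  # hypothetical fluid-sim grid size

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev = gray
    # Downsample the dense flow to the sim grid; this 2-channel array is
    # what would be added into the fluid's velocity field each frame.
    velocity = cv2.resize(flow, (SIM_W, SIM_H), interpolation=cv2.INTER_AREA)
```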
212 likes · 16 comments · 2 months ago
Further prototyping of Infinite Frames. Using an ESP32 Feather V2 and a BNO08x 9-dof sensor as a virtual flashlight. @touchdesigner #touchdesigner #infiniteframes #prototype #adafruit
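For reference, reading the sensor-fused orientation from a BNO08x in CircuitPython looks roughly like the sketch below, using Adafruit's adafruit_bno08x library. How the quaternion actually reaches TouchDesigner (serial, OSC, etc.) isn't specified in the post, so the print-over-serial part is an assumption.

```python
# A minimal CircuitPython sketch (assuming Adafruit's adafruit_bno08x
# library) for reading the rotation vector that drives the "flashlight".
import time
import board
from adafruit_bno08x import BNO_REPORT_ROTATION_VECTOR
from adafruit_bno08x.i2c import BNO08X_I2C

i2c = board.I2C()                        # STEMMA QT / default I2C pins
bno = BNO08X_I2C(i2c)
bno.enable_feature(BNO_REPORT_ROTATION_VECTOR)

while True:
    qx, qy, qz, qw = bno.quaternion      # sensor-fused orientation
    print(qx, qy, qz, qw)                # assumed transport: parsed on the host
    time.sleep(1 / 60)                   # roughly match the render loop
```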
185 likes · 11 comments · 2 months ago
Controlling the lighting (gobo-style, GLSL fakery) via TouchOSC on my Android device. 60fps, featuring 256 images from the Met collection (forgiveness for the cropping, sacrilege) dynamically resizing and animating. #touchdesigner #infiniteframes
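The "GLSL fakery" presumably boils down to something like a soft-edged light mask multiplied over the frame; here is a minimal numpy sketch of that idea, with the light position hard-coded where the piece would read it from TouchOSC. Every parameter is illustrative.

```python
# A minimal numpy sketch of a gobo-style spotlight mask. In the piece the
# light position arrives via TouchOSC; here it is a fixed point.
import numpy as np

def gobo_mask(w, h, cx, cy, radius=0.3, softness=0.15):
    """Radial falloff mask in [0, 1]; cx, cy, radius in normalized coords."""
    ys, xs = np.mgrid[0:h, 0:w]
    d = np.hypot(xs / w - cx, ys / h - cy)
    t = np.clip((d - radius) / softness, 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)        # smoothstep-style soft edge

mask = gobo_mask(1920, 1080, 0.5, 0.5)          # multiply frame RGB by mask[..., None]
```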
109 likes · 4 comments · 3 months ago
Added a nicer post-process shader so I could do infinite panning in any direction. The gobo lighting texture and reflection mapping happen in this new step instead of being baked into the previous shader.
142 likes · 6 comments · 3 months ago
Real-time prototype of my "Infinite Frames" Houdini setup. Ever since I first made it, I wanted to see it run at interactive framerates. TouchDesigner makes it really easy to do initial prototyping and look-dev for interactive projects we will build out in Cinder C++.

* Runs at 60fps
* Subdivision logic is handled in Python (a rough sketch follows below)
* Picture frames are flat with some faked beading and/or normal maps
* Webcam input is used as a super subtle glass reflection
* MidJourney AI images are used for the content purely out of convenience; this is not an attempt to showcase gen AI outputs. I needed 256 images that were thematically related.

#touchdesigner #rarevolume #infiniteframes #subdivision
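As promised above, here is a minimal recursive rectangle-subdivision sketch in Python. The split ratios and stopping rules are illustrative guesses, not the project's actual logic.

```python
# A minimal recursive subdivision sketch: split a rect into picture-frame
# cells. Split ratios and stopping rules are illustrative.
import random

def subdivide(x, y, w, h, depth):
    """Recursively split a rect; returns a list of (x, y, w, h) cells."""
    if depth == 0 or min(w, h) < 0.05:          # stop at depth or minimum size
        return [(x, y, w, h)]
    t = random.uniform(0.35, 0.65)              # split point, kept off the edges
    if w > h:                                   # split the longer axis
        return (subdivide(x, y, w * t, h, depth - 1)
                + subdivide(x + w * t, y, w * (1 - t), h, depth - 1))
    return (subdivide(x, y, w, h * t, depth - 1)
            + subdivide(x, y + h * t, w, h * (1 - t), depth - 1))

frames = subdivide(0.0, 0.0, 1.0, 1.0, depth=5)   # up to 2^5 = 32 cells
```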
167 likes · 6 comments · 3 months ago
Magnetosphere 2.0 dev/design update. 60fps, 3440x1440 output (resized for insta), 1,000,000+ particles, audio reactive. #touchdesigner #raye #audioreactive #generative #flocking Audio: RAYE - "WHERE IS MY HUSBAND!"
594 likes · 45 comments · 3 months ago
When AI can generate anything instantly, what's left for artists to make?

Robert Hodgin. Flight404. Co-creator of the iTunes visualizer. Co-founder of the Cinder C++ framework. Head of R&D at Rare Volume, New York. 30 years pioneering generative art, before AI made it a prompt.

His work lives at the V&A, the Smithsonian, and the SF Exploratorium. His code powers installations from Google to WarnerMedia's 11-story LED sculpture in Times Square.

At DDD26: "Art After Automation." When output becomes infinite and tools autonomous, what does it mean to be a designer? Time. Care. Uncertainty. The qualities that resist automation.

DDD Milano ⚡ 10th Anniversary Edition · May 7–9 📌 Superstudio Village · milano.ddd.live #DDD26
68 likes · 0 comments · 3 months ago
Real-time astrophysics sandbox we are developing at @rarevolume. The simulation space is seeded with a predefined total mass distributed amongst ~25k spherical masses. These particles exhibit universal gravitation and attract each other. When two particles collide, the larger steals mass from the smaller. When they get too large, they nova into thousands of new particles that together have the same total mass as the object that exploded. This results in a constantly evolving explosive display of birth and death, pushed around by expanding shockwaves, within a fluid, at 60fps. @touchdesigner #generative #physics #universe #bigbang
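The collision and nova rules are easy to sketch on their own. Here is a minimal Python version of "larger steals mass" and a mass-conserving nova; the threshold, theft rate, and fragment count are illustrative, and the real sim adds ~25k gravitating bodies plus a fluid pass on the GPU.

```python
# A minimal sketch of the two rules described above: mass theft on
# collision and mass-conserving novas. All constants are illustrative.
import numpy as np

NOVA_MASS = 50.0        # hypothetical threshold for going nova
FRAGMENTS = 1000        # hypothetical fragment count per nova

def collide(m_a, m_b, rate=0.1):
    """The larger body steals a fraction of the smaller one's mass."""
    if m_a < m_b:
        m_a, m_b = m_b, m_a
    stolen = m_b * rate
    return m_a + stolen, m_b - stolen

def nova(mass, pos):
    """Explode one body into many fragments with the same total mass."""
    masses = np.random.rand(FRAGMENTS)
    masses *= mass / masses.sum()               # conserve total mass exactly
    offsets = np.random.randn(FRAGMENTS, 3) * 0.01
    return masses, pos + offsets                # fragments burst out around pos

frag_masses, frag_positions = nova(NOVA_MASS, np.zeros(3))
assert abs(frag_masses.sum() - NOVA_MASS) < 1e-9
```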
279 likes · 22 comments · 3 months ago
Real-time audio reactive flocking simulation. Magnetosphere 2.0. Made with TouchDesigner. #touchdesigner #flocking #magnetosphere #doechii #SZA Audio: Doechii (feat. SZA), "girl, get up."
207 likes · 11 comments · 4 months ago