
Alexander Korneev

@forkni

• Generative artist. • Ex-Head of Technology at @inter.nyc
Followers: 2,202 · Following: 4,434
Real-time FLUX.2-Klein model running locally on a single RTX5090 GPU. Conditioned with prompt and RGB camera input. Custom KV cache bank implementation for temporal consistency + a ton of optimizations 😵‍💫😈 • #flux2 #realtimerendering #nvidia #touchdesigner #blackforestlabs
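The "KV cache bank" mentioned above is not published, so here is only a minimal pure-Python sketch of the general idea, assuming a fixed-size bank of attention keys/values from recent frames that the current denoising step can attend over for temporal consistency. The class name and shapes are hypothetical, not the author's implementation.

```python
from collections import deque

class KVCacheBank:
    """Hypothetical sketch: a fixed-size bank of key/value pairs from
    recent frames. Attention over this temporal window is one common way
    to keep frame-to-frame output consistent."""

    def __init__(self, max_frames=4):
        # deque with maxlen evicts the oldest frame automatically
        self.bank = deque(maxlen=max_frames)

    def push(self, keys, values):
        # Store this frame's attention keys/values.
        self.bank.append((keys, values))

    def gather(self):
        # Collect cached keys and values across the stored frames so
        # attention can run over the whole temporal window.
        keys = [k for k, _ in self.bank]
        values = [v for _, v in self.bank]
        return keys, values

bank = KVCacheBank(max_frames=2)
bank.push("k1", "v1")
bank.push("k2", "v2")
bank.push("k3", "v3")  # evicts frame 1
print(bank.gather())   # → (['k2', 'k3'], ['v2', 'v3'])
```

In a real pipeline the strings would be GPU tensors, and `gather()` would concatenate them along the sequence dimension before the attention call.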
4 days ago
Real-time diffusion with FLUX.2-Klein4B model + streamV2V in Touchdesigner 😳 Very impressed with the results! Running smoothly on Nvidia RTX 5090 GPU locally. Swipe to view the input ⏩️ • #flux2 #torch #nvidia #blackforestgermany #touchdesigner
14 days ago
For the last couple of months I've been diving into FLUX.2 inference acceleration, and here are the latency results so far: LEFT SIDE: real-time input RIGHT SIDE: real-time diffused output • Technology stack: • NVIDIA 5090 GPU • FLUX.2-Klein 4B model • TensorRT acceleration • FP8 selective quantization • Pre-inference prompt calibration • CUDA-Link for minimizing latency overhead • 2 steps, 320x512px resolution • 13 FPS • #blackforestgermany #flux2 #nvidia #touchdesigner
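For context on those numbers, the per-frame budget follows directly from the stated 13 FPS and 2 diffusion steps. A quick back-of-the-envelope check:

```python
# Frame budget implied by the reported numbers: 13 FPS, 2 diffusion steps.
fps = 13
steps = 2

frame_ms = 1000 / fps       # total time available per frame
step_ms = frame_ms / steps  # rough upper bound per denoising step
print(round(frame_ms, 1), round(step_ms, 1))  # → 76.9 38.5
```

So each denoising step (plus all pre/post-processing) has to fit inside roughly 38 ms, which is why every optimization in the stack above counts.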
28 days ago
Times like this remind you how much work depends on the people around you. What begins as a structured set can slowly open into a long improvisation. And every now and then the stage and the visuals briefly meet in the same moment — something you could never fully plan. Nomad Path Live Performance by @firouzfarmanfarmaian and @interactive.items Made with @touchdesigner and #streamdiffusion by @dotsimulate #generativeart #audiovisual #realtime #newmedia
2 months ago
Why does sharing GPU memory between two processes still require copying data through the CPU? At 60fps, that round-trip costs milliseconds you can't afford.
I built CUDA-Link to fix that — an open-source Python library that shares GPU textures and tensors directly between processes. Zero-copy, no CPU detour.
Started because I needed live TouchDesigner frames in AI models without the latency hit. The GPU → CPU → GPU path was eating my frame budget, so I wrapped CUDA's IPC API in pure Python and cut it out entirely.
What it does:
→ Zero-copy GPU sharing between any two processes
→ Ring buffer — producer and consumer never block each other
→ PyTorch, CuPy, NumPy output modes
→ Pure Python — pip install and go
60fps in production. Easy.
Open-source: github.com/forkni/cuda-link (link in bio)
• #touchdesigner #python #cuda #realtimeai
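The "producer and consumer never block each other" claim comes from the ring-buffer design. Here is an illustrative stdlib-only sketch of that pattern (not the actual CUDA-Link code; class and method names are hypothetical): the producer always overwrites the oldest slot, and the consumer always reads the newest published one, so neither side ever waits for the other.

```python
class FrameRingBuffer:
    """Illustrative single-producer/single-consumer ring buffer sketch.
    The producer overwrites the oldest slot; the consumer reads the
    newest published slot. Neither side ever blocks — stale frames are
    simply dropped, which is the right trade-off for live video."""

    def __init__(self, slots=3):
        self.slots = [None] * slots
        self.write_idx = 0
        self.latest = -1  # index of the most recently published frame

    def produce(self, frame):
        self.slots[self.write_idx] = frame  # overwrite oldest slot
        self.latest = self.write_idx        # publish only after writing
        self.write_idx = (self.write_idx + 1) % len(self.slots)

    def consume(self):
        # Returns the newest frame, or None before the first write;
        # never waits on the producer.
        return None if self.latest < 0 else self.slots[self.latest]

rb = FrameRingBuffer()
rb.produce("frame-1")
rb.produce("frame-2")
print(rb.consume())  # → frame-2
```

In the real zero-copy setting, the slots would be CUDA IPC handles to GPU buffers rather than Python objects, but the drop-stale-frames scheduling is the same.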
2 months ago
Live depth feed from Azure Kinect camera → ControlNet + StreamV2V (temporal consistency) → diffused output at 30 FPS with SDXL-Turbo at 2 Timesteps on a single RTX 5090. Most StreamDiffusion implementations use TensorRT (incompatible with StreamV2V) — torch.compile gives both speed and consistency. Currently working on further pipeline acceleration 🚀🚀🚀 #streamdiffusion #torch #realtimerendering #generativeart #ai
2 months ago
Real-time StreamDiffusion from Kinect Azure input (RGB + Depth) ControlNet Depth + StreamV2V • #kinect #streamdiffusion #touchdesigner #nvidia #realtimerendering
3 months ago
60FPS real-time video generation with StreamDiffusion from Orbbec camera. • ControlNet depth for figure shape coherence • StreamV2V with smart caching & feature injection for temporal stability • Everything running in real time in Touchdesigner on RTX 5090 GPU • Big shoutout to @ekarasyk for the GPU and trust 🖤 #streamdiffusion #nvidia #torch #realtimerendering #touchdesigner
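"Feature injection for temporal stability" is the part that keeps frames from flickering. A minimal sketch of the underlying idea, assuming a simple blend of current-frame features with cached features from the previous frame (the function name and `alpha` weight are hypothetical, not StreamV2V's actual code):

```python
def inject_features(current, cached, alpha=0.5):
    """Hypothetical sketch of feature injection for temporal stability:
    blend each current-frame feature with its cached counterpart from
    the previous frame, so features (and therefore the rendered output)
    change smoothly instead of jumping frame to frame."""
    return [alpha * old + (1 - alpha) * new
            for new, old in zip(current, cached)]

prev = [1.0, 1.0, 1.0]   # cached features from frame t-1
cur = [3.0, 1.0, 5.0]    # raw features at frame t
print(inject_features(cur, prev))  # → [2.0, 1.0, 3.0]
```

In the real pipeline the features are intermediate U-Net activations on the GPU, and smarter variants gate the blend per-feature; the caching side is what makes it cheap enough for real time.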
3 months ago
So, this is it. • After months of trial and error, I managed to adapt the StreamV2V code to be compilable! What you see here is a real-time render with StreamDiffusion + StreamV2V + Depth ControlNet. 3 diffusion steps; 30FPS; SDTurbo diffusion model. Produced and recorded in real time in TouchDesigner. • #streamdiffusion #torch #nvidia #realtimerendering #touchdesigner
3 months ago
OK, now we're talking… Real-time video generation with StreamDiffusion from 60FPS input! Left part of the video is the unprocessed input, right part is the generated output. Just made it work, so it's a rough sketch, but it looks very promising to me. Also, it's running on a single NVIDIA 4090 GPU, but with CUDA 12.8, so it will work on 50xx GPUs as well! Hell yeah! 🔘 • #streamdiffusion #realtimerendering #nvidia #pytorch #touchdesigner
4 months ago
Another StreamDiffusion study 🥀 • #streamdiffusion #generativeart #realtimerendering #touchdesigner #derivative
4 months ago
Real-time StreamDiffusion generation in Touchdesigner. • V2V + ControlNet • accelerated with torch.compile • 45FPS • #touchdesigner #derivative #streamdiffusion #realtimerendering #ai
4 months ago