Follow if you wanna learn how to make this! Here is my workflow for creating photorealistic people that feel real in a documentary...
But first: If you’re an AI filmmaker, I would love to connect! Follow me, comment, and DM me if you like 🙂
I made this film because I was amazed by the new logo that just popped up on my screen today. I saw it in between all the other app icons on my phone and thought: Why do all of these icons look the same? Logos in general are starting to look kind of similar... I wish every brand had a really unique logo tbh. But well, that’s another discussion!
Here is a simple trick I use very often in my videos to create photorealistic characters: I create them using the good old text-to-video technique. I work without any start frame in Sora 2 (not Pro) and tell it to randomly create some characters (I guide it a little bit with age, etc.). Then I generate a 20-second video with multiple characters and pick the ones I like. If necessary, I tweak them a bit in Nano Banana and upscale. Then I use these characters in Kling to create the videos.
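By the way, if you don't want to scrub through the 20-second gen by hand, a tiny script can dump candidate stills for you. This is just a minimal sketch, not part of my actual pipeline — the filename and the one-frame-per-second rate are assumptions:

```python
# Save roughly one frame per second from a generated clip so you can
# pick character stills. Filename and sampling rate are placeholders.
import cv2

cap = cv2.VideoCapture("sora_characters.mp4")  # hypothetical clip name
fps = cap.get(cv2.CAP_PROP_FPS) or 30          # fall back if FPS metadata is missing

frame_idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:              # roughly 1 frame per second
        cv2.imwrite(f"candidate_{saved:03d}.png", frame)
        saved += 1
    frame_idx += 1

cap.release()
print(f"Saved {saved} candidate stills")
```

From a 20-second clip that gives you about 20 stills to pick characters from before tweaking them in Nano Banana.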
I think you can still tell the characters are AI, but I'm REALLY looking forward to Veo 4. I hope they'll fix the AI voice issue.
I would love to hear your take on this, regarding both the logos AND the AI voices! Save this video for later!
Drop your guess in the comments before the reveal! I mixed real photos I took with my camera with AI to fake a documentary. #aifilmmaking #filmmakingtips #tutorials
@invideo.io How I made the Mercedes G-Wagon spec ad with A.I.: full breakdown and total costs …
Quick note first: I’m always happy to connect with other A.I. creatives!
If you’re one too, feel free to send me a DM and follow for more prompts and workflows!
Check out the final video in yesterday's post! I made it with invideo and their new Agent One.
As this is a remake of my old G-Wagon commercial, I already had the script and the narration.
Then I grabbed some still frames from my old commercial, went to invideo, and told Agent One to use Nano Banana to transform the bright base images of the G-Wagon into the moody savannah style. I then used these images as the start frames.
Then I created the Start Frame Agent in Agent One. I fed it the images and used the prompt in the video. I couldn't stop generating start frames with Agent One, as it's so much fun to iterate with an agent. It generates, reviews, and sometimes even regenerates images that it didn't like. Wild! I generated over 70 start frames, and nearly 50 were actually usable. I didn't expect that!
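For anyone wondering what that generate-review-regenerate loop looks like structurally, here's a minimal sketch. Agent One doesn't expose an API like this as far as I know, so generate_image() and review_image() are purely hypothetical stand-ins for the model calls an agent framework makes:

```python
# Sketch of a generate -> review -> regenerate loop. Both helpers are
# HYPOTHETICAL placeholders, not a real Agent One API.
import random

def generate_image(prompt: str) -> str:
    # Placeholder: pretend we call an image model and get a file path back.
    return f"startframe_{random.randint(0, 99999)}.png"

def review_image(path: str) -> bool:
    # Placeholder: pretend a vision model checks the frame against the brief.
    return random.random() > 0.3  # ~70% pass rate, like my 50-out-of-70 run

def startframe_agent(prompt: str, target: int = 70, max_attempts: int = 150) -> list:
    keepers = []
    attempts = 0
    while len(keepers) < target and attempts < max_attempts:
        attempts += 1
        path = generate_image(prompt)
        if review_image(path):  # keep only frames the reviewer accepts
            keepers.append(path)
        # rejected frames simply get regenerated on the next pass
    return keepers

frames = startframe_agent("moody G-Wagon in the savannah, golden hour")
print(f"{len(frames)} usable start frames")
```

The point is the structure: rejected frames go back into the loop instead of landing in your downloads folder.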
Then I used another agent that I created to generate the videos. I fed it my base multishot prompt (DM if you want it), and it came up with REALLY usable clips using this base prompt together with the shitty prompt I gave Agent One. In total I generated around 20 Seedance videos with multishots between 5 and 15 seconds.
Also pretty handy: I told Agent One that I wanna download EVERYTHING, and it just gave me a zip with all the files.
In the end I spent 3 days working on this (iterating with the agent, sometimes manually correcting things, editing, sound effects, color grading, etc.) and around $200 in credits on this spec ad.
Have you ever tried agents? What's your experience? Drop it in the comments 👇 #filmmaker #ai #tutorial
Ad for @invideo.io / Comment "Tutorial" for the full tutorial!
I’ve always been skeptical about agents. I tried a couple before and honestly wasn’t really convinced. But I think I was using them the wrong way. I thought the whole point of using an agent was to hand over creative decisions and I don’t want to give up creative control.
But with Agent One, it’s possible to use agents more like collaborators for batch processing. I use it a lot for brainstorming, exploring ideas, and batch-processing start frames. That’s where it becomes really useful: letting the agent run with my creative direction and seeing what it comes up with.
So remaking my old Mercedes-Benz G-Class spec ad, the one I released a couple of months ago, was the perfect way to test it.
Should I make a full tutorial about it? If so, comment what you’d like to know, and I’ll make a video on it!
Have you ever used agents in your workflow? What were your biggest learnings?
Toolstack:
- invideo Agent One (Overall Framework and Multimodal Platform)
- Nano Banana 2 and Nano Banana Pro
- ByteDance Seedance 2 and Seedance 2 Fast
- Blackmagic Design DaVinci Resolve
Here is how we shot this hybrid AI commercial for the German chocolate brand @jokolade
This was a very challenging project, as we wanted to combine real-life action + AI environments. There was a lot of planning involved, and my team at @promptrstudios and I worked extremely hard on this over the last couple of weeks. If you wanna see more workflow breakdowns like this, it would mean a lot to me if you hit "follow" and drop a comment! :)
First, we started with a script and a storyboard. When it comes to hybrid productions with a big team, it's super important that everyone is on track and knows exactly what to do. A small "trick" is probably that we had a rough outline of the AI-generated visuals before we went to the shoot. So we locked in a rough idea of lighting, composition, shot sizes, etc. with mockups that we generated beforehand.
I was a little bit nervous, because wide shots are usually really tricky: AI isn't great with details in wide shots. And we had a LOT of wide shots here.
Luckily, Kling 4K came out, which REALLY helped us with the wide shots.
🎬 Production:
Exec Producer: @tim_sproten
Producer: @aa_lysh_aa
Managing Director Promptr @georg.ramme
Unit Manager (AL): @Anna Krenkel
Director: @arabellabartsch
Camera: @tobi.koppe
1st AC: @Dominik Bodammer
1st AD: @Ina Sprinckstub
2nd Cam: @Marcel Riedel
Gaffer: Volker Langholz
Sound: @l_jannis
Styling: Sophia Costima
H&M: Teena Denzinger
✦ AI:
AI Director: Simon Meyer
AI Artists:
@cirok.ai @justinbaessler @lucasfiederling @the_lubbertus @_kotowski_ @nik.behrendt
🖥 Post Production:
Grading: Paul Breuer, Nadir Mansouri / mograde.net
Edit: @Andre Gelhaar, Nikolai Kotowski, Justin Bässler
Sound: Klangufer - Florian Schäfer
Ad / Every A.I. artist knows this feeling! Here is my complete workflow & learnings for creating this 3-minute film! ...
But first: this post is sponsored by @tapnow.ai_official TapNow. It's honestly one of the best open-canvas tools out there! I can recommend checking it out!
I had SO many learnings about Seedance 2.0 while creating this piece! Most important: there is a HUGE difference in quality between Reference Mode and Startframe Mode. Somehow the quality drops if you use only one image in Reference Mode. So I burned through a lot of credits finding this out. Tapnow has both, so it was easy to switch to Startframe Mode. But this is something you really need to know about Seedance 2.0.
I started by writing the script. Then I jumped into startframe generation. TapNow has ALL the models, so I used OpenAI Sora (yes, the good old Sora) to create the initial characters. My process is always to generate a couple of characters with video models! For me, the characters look much more realistic when generated with Sora instead of Google Nano Banana.
Then I iterate with Nano Banana on still frames from these initial video gens and use the output as startframes again. I repeat this process for each scene.
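Side note: one way to automate the "grab a clean still" step is to score every frame by variance of the Laplacian, a standard sharpness metric, and keep the sharpest one. This is just a sketch with a placeholder filename, not a TapNow feature:

```python
# Pick the sharpest frame of a clip as a start-frame candidate.
# Sharpness = variance of the Laplacian (higher = more in focus).
import cv2

cap = cv2.VideoCapture("scene_gen.mp4")  # hypothetical clip name
best_score, best_frame = -1.0, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    if score > best_score:
        best_score, best_frame = score, frame

cap.release()
if best_frame is not None:
    cv2.imwrite("startframe_candidate.png", best_frame)
    print(f"Sharpest frame saved (score {best_score:.1f})")
```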
This was made entirely on TapNow with Seedance, Sora, and Nano Banana 2 / Pro.
Huge THANKS to @justinbaessler, who worked with me on this! I can recommend following this guy; more is coming from him!
But one big question: how do you guys stay organized on open canvas systems? Even though you can group elements, and TapNow has great features for that, I feel lost after the first 100–200 gens.
Ad / @klingai_official 4K IS OUT NOW! I made this film with it, and here's why it's INSANE...
But first: this post is sponsored by Kling! I highly recommend checking out the new model.
I’ve been a big fan since the first versions of Kling, and this is a HUGE update. One thing it’s really, really good at is wideshots. Wideshots used to be a big challenge with models that could only generate in 720p/1080p, because faces would get distorted very quickly. But with Kling 4K, that’s honestly not really an issue anymore.
The overall fidelity is much better — it’s crisp, and fine pores and skin texture really shine. This is HUGE for anyone serious about filmmaking.
Watching this film on a bigger screen is now really, really impressive. We’re entering the cinema/TV era of AI-generated films. Upscalers are going to have a tough time competing now.
What’s your impression of Kling 4K? @promptrstudios @the_lubbertus @nik.behrendt @klingai_official
The AI Tools I Actually Use in April 2026 ...
After 18 months of buying every AI tool that shipped, I just cleaned up my entire AI toolstack. The field moves so fast that every 3–4 weeks I re-sort what actually gets opened every day vs. what's just sitting in tabs eating money.
Your tools say a lot about the kind of work you're trying to make. Here is my full breakdown of what I'm actually running right now, in the carousel. 9 tools. April 2026.
What’s missing from my stack? Which tool are you using every day that I should know about?
@googledeepmind Nano Banana Pro, Nano Banana 2
@klingai_official
CRAFTR (Link in bio)
@bytedance Seedance 2
@sunomusic @syncdotso
Alibaba Group Z-Image Turbo
@claudeai Claude
@artlist.io @beeble_ai Switch X
@magnific_ai
Not everything is A.I. here! The location is real! I took pictures of this beautiful village and created a skateboard contest that never existed! #filmmaker #skateboarding #cinematography #ai
Wide-angle shots with A.I. are tricky! Faces quickly become distorted! Here is an easy fix for your wide-angle shots!
In my last story, I asked you what you struggle with as AI filmmakers. Something that came up a lot was "wide shots"! Yes, you're right: wide shots aren't easy! They're so hard to get right that I typically try to avoid them. And I think that's (for now) the best way to deal with them.
I get it! In the storyboard or the client’s vision there is this huge, spectacular wide shot. But AI is extremely bad with details. So what do you do?
I always try to come up with another shot instead. Sometimes three mediums with a little bit of background blur cut together are even better than a wide. Or covering it up with motion blur. When the camera is alive and you introduce some motion blur, you cannot tell that the quality is significantly lower than mediums or close-ups. Or, if you really NEED to make this wide shot happen, maybe the technique below is something for you!
Are you also struggling with wide shots? What are your workarounds? Made with @magnific_ai@freepik
Comment „CRAFTR“ to get the link! I use Craftr in EVERY video I create! Hope you like it too!
Grid Prompting is a technique that a lot of people are not aware of. It helps you explore a scene, get ideas for angles, and find a starting frame for video gen.
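Once you have a grid render, you still have to cut it into individual frames. Here's a minimal sketch with Pillow, assuming a 3x3 grid; the grid size and filenames are just assumptions for illustration:

```python
# Slice a 3x3 grid render into nine separate stills so each angle
# can be tested as a start frame. Grid size/filenames are assumptions.
from PIL import Image

grid = Image.open("scene_grid.png")  # hypothetical grid render
ROWS, COLS = 3, 3
tile_w, tile_h = grid.width // COLS, grid.height // ROWS

for row in range(ROWS):
    for col in range(COLS):
        box = (col * tile_w, row * tile_h,
               (col + 1) * tile_w, (row + 1) * tile_h)
        grid.crop(box).save(f"angle_r{row}c{col}.png")
```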
Ad / I switched from Nano Banana to Uni-1 and here is why… (Steal my Prompts!)
@lumalabsai just dropped their new image model Uni-1 and for me this is a gamechanger. If you wanna try it out yourself: https://lumalabs.ai/simonmeyer10
My biggest problem with Nano Banana: You can get absolutely insane photos out of it. But that's exactly the issue: they look like photos. Sometimes even like stock photos. Too sharp. Too evenly lit. Too "perfect."
Not great for startframe generation for video.
With Uni-1 we finally have a model that feels like it was trained on video stills. It’s ridiculously cinematic and real.
In this video you can see some examples of how photorealistic this thing actually is. The lighting feels real. Not „AI real.“ Actually real.
I animated all of these Uni-1 Startframes with Kling AI 3 in Luma. Overall Uni-1 is seriously impressive. The reasoning is next level, the image quality is insane, and most importantly it actually looks like film, not like a render.
Have you tried Uni-1 yet? What are your learnings?