Curious Refuge

@curiousrefuge

The World’s First Home for AI Filmmakers
Followers
99.0k
Following
67
Account Insight
Score: 43.64%
Users Ratio: 1479:1

Posts
This animated short “27 Days” by Dale Williams at Promise is a real tearjerker. Love the stop-motion-inspired art style and how it was inspired by geometric wood art from his home. What really got me was the ending. One of those pieces that quietly makes you reflect on the most meaningful memories in your own life. 💖 #ai #aifilm #aishort #aifilmmaking #aivideo
366 21
22 hours ago
I’m impressed by this character consistency workflow that uses a single GPT Image 2.0 character sheet and turns it into a multi-shot cinematic sequence in Seedance 2.0. We tried it ourselves on our own character and cinematic sequence, and the results looked solid. The consistency is not 100%, as the Seedance model does hallucinate a little bit, but it’s pretty darn close. Prompt structures in comments. Super cool workflow Nexora, thanks for sharing! #aivideo #aitools #aifilmmaking #aifilm #ainews
766 51
2 days ago
Models like Sync-3 and Magnific (Freepik) Speak let you upload a video and a separate audio file, then update the lip sync to match. So I took an AI-generated video and ran it through both models. Sync-3 and Freepik Speak ended up pretty close overall, and the results are solid, especially with just one speaker. Once you introduce multiple speakers though, that’s where things start breaking down. You’ll sometimes see both characters moving their lips at the same time or accidentally dubbing each other’s lines, especially with more complex camera movement and multi-shot scenes. That said, being able to take a video you already like and update the dialogue afterward is a pretty big upgrade, and we’ve added this workflow to our blog about lip-syncing with AI. Link in bio. #ai #aifilm #aiad #aifilmmaking #ailipsync
124 4
2 days ago
This AI music video by Dave Clark is SO cool. The gold mic, the driving shots, the wide aerials…and the pacing of the cuts with the track all come together really well. I’m really digging the aesthetic and visual treatment of this. Got me feeling like going on a late night drive to this track. #ai #aifilm #aiad #aifilmmaking #aimusicvideo
119 13
4 days ago
We tested Happy Horse against Seedance across a variety of VFX effects, and in these more complex multi-shot prompts, the differences in realism and consistency became pretty obvious. Our latest blog post breaks down the different VFX tests we ran and how each model performed. I don’t think the final verdict will surprise you. Curious to hear what you think 👀 Link in bio. #ai #aifilm #aiad #aifilmmaking #aivideo
204 13
7 days ago
There’s been a lot of hype around Happy Horse recently, with some early examples making it seem like it might outperform Seedance in certain areas. But now that Alibaba has publicly released the model, we can finally run our own tests. We tested Happy Horse 1.0 against Seedance 2.0 across water physics, fire physics, VFX, everyday interactions, and conversation shots, and the results were kind of surprising. Seedance still comes out ahead overall in our opinion. While Happy Horse was able to follow most of the prompts and generate the intended actions, its outputs, especially in larger action-heavy shots, tended to drift away from the original reference image and start to resemble video game cinematics rather than live-action footage. Seedance, on the other hand, preserved realism and image fidelity better overall. That said, it’s still pretty impressive how quickly Happy Horse came out of nowhere and got this close to some of the leading video models already. #ai #aifilm #aiad #aifilmmaking #aivideo
120 5
8 days ago
Lip sync with multiple speakers has honestly been one of the harder things to get right in AI video so far, especially if you want to use your own audio. This Seedance 2.0 workflow is interesting because you upload:
– a reference image
– a blacked-out video with audio OR an audio file
– a prompt (camera motion + VO)
…and the lip sync comes out surprisingly accurate across live-action and animation, even in more complex multi-shot scenes. Have you tried this workflow yet? Curious what you think. #ai #aifilm #aiad #ailipsync #aivideo
377 39
9 days ago
Been playing with light refraction and multi-exposure layers… love how it turns a basic portrait into something more interesting. Here’s the master prompt for MJv7:

portrait of a [SUBJECT], natural skin texture, minimal makeup, soft editorial beauty, multiple ghosted duplicates of her face layered around the frame, mirrored kaleidoscopic composition, prism glass effect, chromatic aberration, rainbow light refractions across the image, soft pastel blue background, dreamy lighting, subtle blur on duplicates, sharp focus on center face, fashion editorial photography, 85mm lens, shallow depth of field, soft diffused light, high detail, filmic tones

If you try it out, share it with us! We'd love to see what you come up with 👀 #prompt #aiartwork #aifilm #aiart #midjourneyai
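If you reuse the master prompt a lot, a tiny template helper keeps the [SUBJECT] slot easy to swap. This is just a minimal Python sketch for managing the prompt text; `build_prompt` is a name I made up here, not part of any Midjourney tooling.

```python
# Master prompt from the post, with the [SUBJECT] slot turned into a
# format placeholder so it can be filled programmatically.
MASTER_PROMPT = (
    "portrait of a {subject}, natural skin texture, minimal makeup, "
    "soft editorial beauty, multiple ghosted duplicates of her face layered "
    "around the frame, mirrored kaleidoscopic composition, prism glass effect, "
    "chromatic aberration, rainbow light refractions across the image, "
    "soft pastel blue background, dreamy lighting, subtle blur on duplicates, "
    "sharp focus on center face, fashion editorial photography, 85mm lens, "
    "shallow depth of field, soft diffused light, high detail, filmic tones"
)

def build_prompt(subject: str) -> str:
    """Return the master prompt with the subject slotted in."""
    return MASTER_PROMPT.format(subject=subject)

# e.g. build_prompt("young woman with freckles") -> full prompt string
```

From there you can paste the result straight into Midjourney, or loop over a list of subjects to batch out variations.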
168 0
10 days ago
We just tested a lip sync workflow in Seedance 2.0… and it might actually compete with HeyGen and other avatar models. You upload:
– a reference image
– a blacked-out video with audio OR an audio file
– a prompt (camera motion + VO)
The prompted VO is really important here to make sure the model doesn’t hallucinate or riff off of the original speaker’s lines. Feels like a legit breakthrough. Lip sync has been a pain point for a while, and this gets us a lot closer. #ai #aifilm #aiad #aifilmmaking #aivideo
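The three inputs above can be sketched as one job description. To be clear, this is a hypothetical illustration: the field names, the `build_lipsync_job` helper, and the prompt format are all assumptions, not a documented Seedance 2.0 API.

```python
def build_lipsync_job(reference_image: str, audio_source: str,
                      camera_motion: str, voiceover: str) -> dict:
    """Bundle the three uploads into one lip sync job description.

    audio_source can be a separate audio file or a blacked-out video
    that carries the audio track.
    """
    # Spelling the VO out in the prompt is the key step: it keeps the
    # model from hallucinating or riffing off the original speaker.
    return {
        "reference_image": reference_image,
        "audio": audio_source,
        "prompt": f'{camera_motion}. VO: "{voiceover}"',
    }

job = build_lipsync_job(
    "character.png",
    "dialogue.wav",
    "slow dolly-in on the speaker",
    "Lip sync has been a pain point for a while.",
)
```

However you actually submit it, the point is that camera direction and the exact dialogue travel together in a single prompt.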
111 13
11 days ago
This AI spec watch film by J. Felipe Orozco (made with Runway) pulled 50M+ views in under 48 hours. Watching this made me think… how fun would it be if age worked like this? It almost turns age into a superpower. Cool idea, and a really nice way to bring it back to a watch “for every version of you.” #ai #aifilm #aiad #aifilmmaking #aivideo
452 14
14 days ago
We came across a really interesting workflow that could help solve environmental consistency in AI production. It’s called the “burst frame” technique. You take a single image, prompt 20 quick shot variations using Seedance 2.0, and generate a multi-angle sequence from the same scene. Because everything comes from one source, the environment stays consistent across shots. It ends up working like a fast pre-vis tool. Explore angles, pick your favorites, and build from there. Shoutout to Kai Turner for sharing this method. #ai #aifilm #aiad #aifilmmaking #aivideo
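The burst frame idea above boils down to fanning one scene description out into many shot-variation prompts that all reference the same source image. Here's a minimal Python sketch of that fan-out step, assuming you then pair each prompt with the same image in whatever Seedance 2.0 interface you use; the angle and movement lists are my own illustrative choices.

```python
import itertools

# Illustrative shot vocabulary: 5 angles x 4 movements = 20 variations.
ANGLES = ["wide establishing", "medium", "close-up",
          "over-the-shoulder", "low angle"]
MOVES = ["static", "slow push-in", "slow pan left", "handheld drift"]

def burst_prompts(scene_desc: str, n: int = 20) -> list[str]:
    """Build n shot-variation prompts that all pin the same scene."""
    combos = itertools.product(ANGLES, MOVES)
    prompts = [
        f"{angle} shot, {move}, same scene: {scene_desc}"
        for angle, move in combos
    ]
    return prompts[:n]

prompts = burst_prompts("rain-soaked neon alley, lone figure under an umbrella")
# Pair every prompt with the single source image when generating,
# then pick your favorite angles as a fast pre-vis pass.
```

Because the scene description (and the source image) is identical across all 20 prompts, the environment stays consistent while only the camera changes.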
194 6
15 days ago
We ran more tests between GPT Image 2.0 and Nano Banana 2, this time using reference images. GPT Image 2.0 is stronger with character consistency, while Nano Banana 2 tends to do better with environments and background consistency. It also does a nicer job making subjects feel naturally placed in the scene, instead of looking composited in. Overall, Nano Banana 2 is still our default since it’s faster and way cheaper. But when we need more precision or some creative problem solving, we’ll go with GPT Image 2.0. #ai #aifilm #aiad #aifilmmaking #openai
190 10
16 days ago