Your brand isn’t everyone. It’s everything. Introducing Brand Studio by Stability AI, the creative production platform built just for you.
Get started at the link in bio.
This week’s Stability Seconds shows how you can quickly iterate across multiple musical directions with Stable Audio 2.5 ⏩
Starting with one short video clip, we applied three different prompts to shift the genre and mood of the video’s track in seconds:
1️⃣ Shoegaze synth: Dreamy and hazy
2️⃣ Ethereal classical: Airy and uplifting
3️⃣ Experimental drone: Textured and cinematic
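The three directions above can be sketched as a simple loop: one clip, three prompts, three candidate soundtracks to audition side by side. The request fields below ("prompt", "duration", "output_format") are illustrative assumptions, not the documented Stable Audio 2.5 API schema.

```python
# Iterate one clip's soundtrack across three prompt directions.
PROMPT_DIRECTIONS = [
    "shoegaze synth, dreamy and hazy",
    "ethereal classical, airy and uplifting",
    "experimental drone, textured and cinematic",
]

def build_audio_request(prompt: str, seconds: int = 30) -> dict:
    """Assemble one text-to-audio request body for a candidate track.

    Field names are hypothetical placeholders for illustration.
    """
    return {"prompt": prompt, "duration": seconds, "output_format": "mp3"}

# One request per direction; send each and compare the results.
candidate_requests = [build_audio_request(p) for p in PROMPT_DIRECTIONS]
print(candidate_requests[0]["prompt"])
```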
You can learn more and start using Stable Audio 2.5 on our website's learning hub.
How are artists using AI to make music? 🎶
That’s what our Audio Research team set out to understand when they analyzed 337 musical works. The research examines how artists are using AI tools in their creative and production processes.
We’re sharing the key findings, with insight on emerging AI music creation practices that artists and creative professionals can draw inspiration from.
You can read the full paper and learn more on our website's learning hub.
In this week’s Stability Seconds, we’re showing you how to generate custom, commercially safe audio for your next video project with Stable Audio 2.5 🎵
Here’s how you can do it:
▶️ Find a short video clip that you want to add sound to.
▶️ Prompt Stable Audio 2.5 for the genre, overall mood, and instruments that match your video. In our prompt, we used terms like “cinematic,” “awe-inspiring,” and “dramatic horn section.”
▶️ Generate a 3-minute track, then scrub through to find the best cut.
▶️ Add that audio to your clip to create a polished soundtrack.
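The steps above can be sketched as two small helpers: one composes the prompt from genre, mood, and instrument terms, and one requests a full-length track to scrub through. The request fields ("prompt", "duration") are assumptions for illustration, not the documented Stable Audio 2.5 schema.

```python
def compose_prompt(genre: str, mood: str, instruments: str) -> str:
    """Step 2: combine genre, mood, and instrument terms into one prompt."""
    return ", ".join([genre, mood, instruments])

def build_track_request(prompt: str, seconds: int = 180) -> dict:
    """Step 3: request a full three-minute track, then scrub for the best cut."""
    return {"prompt": prompt, "duration": seconds}

prompt = compose_prompt("cinematic", "awe-inspiring", "dramatic horn section")
request_body = build_track_request(prompt)
print(request_body)
```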
You can learn more and start using Stable Audio 2.5 on our website's learning hub.
With the release of our latest audio model Stable Audio 2.5, we’re sharing our updated best practices for prompting.
Built for enterprise-grade sound production, Stable Audio 2.5 introduces capabilities like improved musical structure, faster inference (under two seconds on a GPU), and support for audio inpainting.
With effective prompting techniques, you can get the most out of Stable Audio 2.5 for professional use cases like advertising, game soundtracks, and short-form video.
You can read the full guide with the link in our bio.
Today we’re launching Stable Audio 2.5: The first audio model built for enterprise-grade sound production 🔊
Audio influences brand engagement by 86%, but few enterprises are leveraging audio as an extension of their brand, making customized sound an untapped differentiator.
Stable Audio 2.5 is purpose-built for this opportunity to create customizable, high-quality audio at scale, with capabilities that include:
▶️ Improved musical composition: Generate full songs with multi-part structure, meaning a clear intro, middle, and outro.
▶️ Audio inpainting: Input audio, select where the track should start, and the model uses the context to generate the rest of the track.
▶️ Customization: Our team can fine-tune Stable Audio 2.5 to help enterprises create the right sound for their brand.
▶️ Faster inference: The model can generate tracks up to three minutes long in under two seconds on a GPU, outputting in just eight steps (compared to ~50 in the previous model).
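The inpainting capability above can be pictured as a request builder: supply the seed audio plus the point where generation should take over. The field names here are assumptions for illustration, not the published API schema.

```python
def build_inpaint_request(audio_path: str, continue_from_s: float, total_s: float) -> dict:
    """Keep the input audio up to `continue_from_s`; generate the rest."""
    if not 0 <= continue_from_s < total_s:
        raise ValueError("continuation point must fall inside the track")
    return {
        "audio": audio_path,               # seed audio to use as context
        "continue_from": continue_from_s,  # seconds of input to preserve
        "duration": total_s,               # target length of the finished track
    }

example = build_inpaint_request("brand_sting.wav", continue_from_s=4.0, total_s=30.0)
print(example)
```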
With the launch of Stable Audio 2.5, we’re also partnering with leading sound branding agency amp, part of the Landor Group, a @wpp company, to co-develop enterprise solutions for innovative brands who want to create iconic sound identities and experiences.
You can try Stable Audio 2.5 now on StableAudio.com and via the Stability AI API, as well as through our platform partners 👉 Fal, @replicate_hq, and @comfyui.
You can learn more with the link in our bio.
How did @hubspot increase image generation on their platform by 150%? With Stable Diffusion 3.5 Large on Amazon Bedrock ✅
When HubSpot was looking for a way to give their customers on-brand image generation without limits on their AI-driven platform, they found a solution using our models via Amazon Bedrock.
Read the full case study with the link in our bio.
The creative industry is being challenged right now. Demand for content is skyrocketing, budgets are shrinking, and timelines are tighter than ever.
And while we all know AI is here to help, not all AI is actually helping.
Most AI isn’t built for enterprise-level creative work, which demands a degree of precision and control that general-purpose tools just can’t provide.
That’s why today, we’re introducing Stability AI Solutions: A new offering designed to help enterprises scale creative production with generative AI.
ℹ️ What’s in a solution
Each solution delivers custom models and workflows built with leading media generation and editing tools, along with everything needed to meet the standards of enterprise production: professional services, flexible deployment options, and built-in features such as brand safety guardrails, indemnification, compliance, and dedicated support.
🔎 What’s available today
Our initial suite of solutions is tailored for the Marketing / Advertising / Design verticals: Product Photography, Brand Style, Product Concepting & Design, and Digital Twins.
🔌 Options for deployment
Stability AI Solutions can be deployed in a variety of ways to meet different enterprise needs: on-prem, via secure API endpoints, through web-based applications, and, via our ongoing collaboration with @wpp, through WPP Open.
You can learn more on our blog via our bio 👆
In this week’s Stability Seconds, we’re showing you how to use prompt weighting, a technique that helps you control which parts of your prompt have more or less influence on the final image. It’s a fast way to steer your image toward your desired output without rewriting your prompt.
Here’s how to do it:
1️⃣ Emphasize elements: Put parentheses around the part of your prompt you want to focus on, then add a colon and a number. For example, (trench coat:1.5) tells the model to emphasize the trench coat. The higher the number, the more it stands out.
2️⃣ De-emphasize elements: Put parentheses around the part of your prompt you want to make less prominent, then add a colon and a number below one. For example, (background:0.5). The lower the number, the less it stands out.
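The (term:weight) syntax above can be wrapped in a tiny helper so weighted fragments stay readable in longer prompts. The syntax is as described in the post; the helper itself is just a convenience sketch.

```python
def weight(term: str, w: float) -> str:
    """Wrap a prompt fragment: values above 1 emphasize, below 1 de-emphasize."""
    return f"({term}:{w})"

prompt = ", ".join([
    "editorial portrait",
    weight("trench coat", 1.5),  # push the trench coat forward
    weight("background", 0.5),   # pull the background back
])
print(prompt)
# → editorial portrait, (trench coat:1.5), (background:0.5)
```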
You can try prompt weighting with Stable Image Ultra on our website's learning hub.
In this week's Stability Seconds, we’re getting you up to speed on simple prompting techniques that help you create more precise images faster, without endless prompt rewrites. Here’s how:
1️⃣ Use positive and negative prompts: Combine positive prompts to describe what should appear in the image, like “an editorial close-up,” and negative prompts to guide the model away from unwanted elements, such as “low resolution.”
2️⃣ Create prompt journals: When using Stable Image Ultra, try the prompt journal node to store a list of prompts directly into your workflow. This keeps your best prompts in one place, so you can easily reuse them or build on them later.
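Pairing positive and negative prompts in one request body, as described above, can be sketched like this. The "negative_prompt" field name mirrors common diffusion APIs and is an assumption here, not the documented Stable Image Ultra schema.

```python
def build_image_request(prompt: str, negative_prompt: str = "") -> dict:
    """Describe what should appear; optionally steer away from unwanted elements."""
    body = {"prompt": prompt}
    if negative_prompt:
        body["negative_prompt"] = negative_prompt
    return body

request_body = build_image_request(
    "an editorial close-up, soft studio lighting",
    negative_prompt="low resolution, blurry",
)
print(request_body)
```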
You can find more prompting techniques and start creating with Stable Image Ultra on our website's learning hub.
Today, we're breaking down how to use image-to-image with Stable Image Ultra in less than a minute ⏰
This workflow allows you to:
1️⃣ Apply a new style while preserving structure
2️⃣ Maintain a consistent look across a full set of assets
3️⃣ Iterate faster by working from a visual reference
You can use Stable Image Ultra in @comfyui through its native API nodes, or access the model directly on the Stability AI API.
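A rough image-to-image request can be pictured as a reference image plus a strength value controlling how far the output may drift from it. The field names ("image", "strength") are common conventions and assumed here, not the exact Stable Image Ultra parameters.

```python
def build_img2img_request(image_path: str, prompt: str, strength: float = 0.6) -> dict:
    """Lower strength preserves the reference; higher strength follows the prompt."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be between 0 and 1")
    return {"image": image_path, "prompt": prompt, "strength": strength}

example = build_img2img_request("product_shot.png", "watercolor style, pastel palette")
print(example)
```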
We’re kicking off Stability Seconds, a new series of mini tutorials designed for those who want to spend more time creating and less time configuring ⏰
Each episode will break down how to use our generative tools into simple steps — just enough to move you forward without slowing you down.
Today's topic: how to set up ComfyUI with our models, so you can start generating images using a workflow that doesn’t require an API key or GPU.