📌 This feature is available as an add-on to all plans and consumes credits.
If you would like to upgrade your plan, check out our upgrade guide.
Generate stunning b-roll and backgrounds by simply typing a prompt—no stock searches or uploads needed. Perfect for creating on-brand, cinematic visuals in seconds.
Generate AI Video assets
Open a video project and navigate to the Media tab at the top of the page.
Select the Create with AI tab.
In the prompt input box, describe the video asset you want.
Press Tab to use the auto-prompt feature, which uses your video script as context to generate a prompt for you.
Under the dropdown, select the AI model you would like to use to generate the clip.
Veo 3.1 model:
Video asset length: 8 seconds
Resolution: 1080p
Audio: disabled
Sora 2 model:
Video asset length: 8 seconds
Resolution: 720p
Audio: enabled
Press Enter to begin generating the video asset using the selected model.
Once generated:
Your video asset appears with a preview in the media panel.
You can add it to any scene in any video.
Video assets are stored at the user (not workspace) level.
Use the three-dot menu to copy the video asset ID or prompt.
💬 FAQs
What is Veo 3.1?
Google’s latest text-to-video model that creates 8-second clips with motion and advanced effects.
What is Sora 2?
OpenAI's latest text-to-video model that creates 8-second clips with motion and sound effects.
How long does it take to generate a text-to-video asset?
Generation can take 3–5 minutes.
Can I reuse generated clips?
Yes, they’re saved in your media panel and tied to your user account. Future updates will allow you to add them to the shared workspace media library.
How many credits does a generative asset consume?
Each generative model consumes a different number of credits: Sora 2 consumes 48 credits, Veo 3.1 Fast consumes 48 credits, and Veo 3.1 consumes 96 credits. To learn more about credits, check out the article: What are credits, and how do they work for Enterprise customers in Synthesia?
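The per-model costs above can be turned into a quick cost estimate. This is an illustrative sketch only (the model names and credit values are taken from this article and may change; check the credits article for current pricing):

```python
# Credit costs per generative model, as listed in this article.
# These values are illustrative and may change over time.
CREDIT_COSTS = {
    "Sora 2": 48,
    "Veo 3.1 Fast": 48,
    "Veo 3.1": 96,
}

def total_credits(generations: dict[str, int]) -> int:
    """Total credits for a batch of generations, e.g. {"Sora 2": 2}."""
    return sum(CREDIT_COSTS[model] * count for model, count in generations.items())

# Two Sora 2 clips plus one Veo 3.1 clip: 2*48 + 96 = 192 credits.
print(total_credits({"Sora 2": 2, "Veo 3.1": 1}))
```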
How do I enable or disable generative video assets in Synthesia?
Admins can turn this feature on or off for the entire organization or specific workspaces via Feature Settings under General settings.
📚 To learn more about credit consumption, check out our credits for self-serve users and credits for Enterprise customers articles.
Notes
Generative video assets, like Veo 3.1 and Sora 2, are artificial intelligence components created, trained, and deployed by third parties, rather than by Synthesia itself. When making these features available to customers through the Services, Synthesia takes steps to test and verify their performance and accuracy and to apply governance and moderation controls similar to those that apply to the native components of the Services. Where possible, Synthesia will onboard the providers of these components as sub-processors. The goal is to give you a single, trusted environment for video creation without the burden of sourcing and integrating these models on your own.
Unlike Synthesia's native components, generative video assets are often general-purpose models that can be prompted to create a wide variety of outputs. The content produced by generative video assets ultimately reflects the prompts that you provide, notwithstanding the governance and moderation controls. For that reason, customers must exercise good judgement should they elect to use these components, as they will be responsible in the event they prompt them in a manner that violates the Acceptable Use Policy, for example by generating and publishing an asset that infringes a third party's intellectual property rights.
