How Moderation Works on Our Platform
Some of our AI features are powered by third-party model providers. When you submit a prompt, such as a request for a new background or outfit, it's sent to the provider, which runs it through its own moderation systems before generating a result.
That means:
The decision to block or allow your prompt is made by the provider, not by us.
We may not be able to tell you the specific reason a prompt was blocked.
Our support and moderation teams can't override the provider's decision on your behalf.
We know this can be frustrating, especially when a prompt feels clearly benign. We've written this article to help you resolve it yourself as quickly as possible.
What to Do If Your Prompt Is Blocked
1. Rephrase and Try Again
Most blocks are triggered by specific words or phrases that overlap with the provider's restricted categories, even when your intent is harmless. Common culprits include:
Names of real people, especially public figures
Words associated with violence, weapons, or self-harm (even in a fictional or historical context)
Brand names, character names, or copyrighted titles
Medical, anatomical, or clinical terms
Words that have multiple meanings, where one meaning is restricted
Try describing what you want using more general or descriptive language. For example, instead of naming a person, describe their appearance. Instead of a brand, describe the style.
2. Add Context to Your Prompt
Providers' moderation systems consider the framing of a request. Adding context like "educational illustration for a biology textbook" or "stylized cartoon for a children's story" can help the system understand your intent.
Enriching your prompt with context and more detailed descriptions also helps you obtain an output that matches what you envisioned.
3. Check the Provider's Content Policy
Each provider publishes its own policy on what's allowed. If you keep hitting blocks on similar prompts, reading the relevant policy is usually the fastest way to understand the boundary.
While we can't list every model provider, or every provider we use in Synthesia, most vendors maintain a dedicated safety page where you can find more information about their policies, moderation process, and expected outcomes.
When to Contact Our Support Team
We're happy to help with:
Technical errors: if you're getting an error that doesn't look like a content block (e.g. timeouts or failed uploads)
Feature questions: how to use a tool, what its limits are, what file formats it supports
Account issues: credits, subscriptions, billing, access
If you're not sure which category your issue falls under, contact us and we'll point you in the right direction.
A Note on Responsible Use and Synthesia Moderation
While we don't moderate third-party services ourselves, our Acceptable Use Policy and Content Moderation Guidelines still apply to your use of the platform. Prompt moderation relies on our model providers, but the final content of your Synthesia videos is always moderated at the point of generation. This means that even if AI-generated content isn't blocked by the provider while you're editing, your finalized video must still meet our moderation policies; it may be blocked upon review and therefore not generated.
