At our core, we build full-stack foundational models and products that redefine short-form video and shape whatever format comes next. As part of that mission, we aim to create a safe and responsible environment for AI-powered video generation. We combine proactive technology, human review, and transparent policies to ensure the content on our platform aligns with our values and legal requirements.

How Content Moderation Works

  1. **Proactive Detection** - Mirage Studio uses automated detection systems to identify potentially harmful or prohibited content during the creation process. This includes scanning for content that may violate our Acceptable Use Policy.
  2. **Human Oversight** - The Mirage Studio team is deeply involved in the creation and maintenance of these automated systems, and may also review content that is flagged to confirm compliance with our guidelines.
  3. **Enforcement Actions** - Content found to violate our policies may be blocked, removed, or prevented from being generated. Repeat or severe violations may result in permanent bans.
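To make the flow concrete, the three steps above can be sketched as a small pipeline. This is an illustrative sketch only, not Mirage Studio's actual system: every name here (`Verdict`, `proactive_screen`, `enforce`, the keyword sets) is hypothetical, and a real deployment would use trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    REVIEW = "review"  # held for human oversight (step 2)
    BLOCK = "block"


@dataclass
class GenerationRequest:
    user_id: str
    prompt: str


# Hypothetical term lists standing in for real detection models.
BLOCKED_TERMS = {"clearly_prohibited_term"}
FLAGGED_TERMS = {"borderline_term"}


def proactive_screen(req: GenerationRequest) -> Verdict:
    """Step 1: automated detection during the creation process."""
    text = req.prompt.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return Verdict.BLOCK
    if any(term in text for term in FLAGGED_TERMS):
        # Ambiguous content is routed to human review rather than
        # auto-blocked (step 2).
        return Verdict.REVIEW
    return Verdict.ALLOW


def enforce(verdict: Verdict, prior_strikes: int, max_strikes: int = 3) -> str:
    """Step 3: enforcement; repeat or severe violations escalate."""
    if verdict is Verdict.BLOCK:
        if prior_strikes + 1 >= max_strikes:
            return "account_banned"
        return "generation_blocked"
    if verdict is Verdict.REVIEW:
        return "pending_human_review"
    return "generated"
```

The key design choice the sketch illustrates is that automated detection errs toward human review for ambiguous cases, while enforcement tracks repeat violations separately from any single decision.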

Safety and Security Commitment

As outlined in our Safety and Security statement, Mirage Studio prioritizes:
  • Secure Systems – Data encryption in transit and at rest
  • Privacy – User content is stored and processed in accordance with our Privacy Policy
  • Transparency – Clear communication on moderation decisions and policies
  • Responsible AI – Guardrails to reduce the risk of generating harmful content

Why This Matters

We believe AI-powered creativity should be safe, ethical, and respectful. Our safeguards help ensure Mirage Studio remains a positive space for creators, businesses, and communities. If you have questions, or if you come across content that violates our Acceptable Use Policy and believe it was created on our platform, please contact hello@captions.ai.