Meta, the parent company of Facebook, announced on Friday the development of a new artificial intelligence model, “Movie Gen,” designed to create realistic video and audio content from user prompts. Meta claims the new tool rivals offerings from leading media-generation companies such as OpenAI and ElevenLabs, adding another major player to the rapidly expanding AI-generated media landscape.
Movie Gen’s Capabilities
Movie Gen can generate video clips of up to 16 seconds and audio clips as long as 45 seconds. These clips are not just generic creations; the AI can sync audio and visuals with precision, adding background music and sound effects that match the tone and action in the videos. Meta showcased samples of the model’s outputs, which included videos of animals swimming, people painting on canvases, and more surreal effects like adding pom-poms to a man running through a desert or turning a dry parking lot into a wet, puddle-filled scene where a man is skateboarding.
Beyond its content generation capabilities, Movie Gen can also edit existing videos, allowing users to enhance their content with AI-generated effects and audio tracks, expanding the creative possibilities for content creators across various industries.
Competition and Blind Tests
Meta’s introduction of Movie Gen arrives as competition in the AI-generated media space intensifies. OpenAI, a company backed by Microsoft, showcased its generative video model, “Sora,” earlier this year; it has been described as capable of producing feature film-quality clips from text prompts. Other companies, including ElevenLabs, Runway, and Kling, have also launched generative AI models for media production.
Meta highlighted blind tests comparing Movie Gen with these competitors, reporting that the model performed favorably in generating both video and audio content. The results position Movie Gen as a serious contender in the AI media generation field, raising the stakes for innovation and quality in this fast-evolving space.
Collaborations and Industry Response
While Meta’s Llama large language models are available for open developer use, the company indicated that it has no plans to release Movie Gen to developers just yet. Meta representatives cited the need to assess risks on a case-by-case basis, and the company has been cautious about potential misuse, such as AI-generated deepfakes.
Instead, Meta plans to collaborate directly with content creators and the entertainment industry, integrating Movie Gen into its own products sometime next year. Hollywood has already been exploring the use of generative AI technologies like Movie Gen to enhance video production processes, with many filmmakers intrigued by the potential for innovation. However, some in the industry remain cautious, particularly over concerns that AI models may be trained on copyrighted materials without proper permissions, raising legal and ethical questions.
Ethical Concerns and Future Outlook
The rise of AI-generated media has sparked global debates over its potential for misuse, particularly in creating deepfakes that could influence elections or spread disinformation. Countries like the U.S., India, Pakistan, and Indonesia have already raised alarms about the implications of AI-generated content on political campaigns, calling for regulatory oversight.
Meta acknowledged these concerns but has not yet provided specific details on how it plans to address the risks associated with Movie Gen. The company has emphasized that it will carefully monitor the model’s use, especially within the entertainment industry.