Meta’s Movie Gen AI model can generate video with sound or music
Facebook owner Meta announced Friday that it has built a new AI model called Movie Gen that can create realistic-looking video and audio clips in response to user queries, claiming it can rival tools from leading media generation startups like OpenAI and ElevenLabs.
Examples of Movie Gen’s creations, provided by Meta, showed videos of swimming and surfing animals, as well as videos that used real photos of people to depict them performing actions such as painting on a canvas.
Movie Gen can also generate background music and sound effects synced to the videos’ content, Meta said in a blog post, and the company demonstrated using the tool to edit existing videos.
In one such video, Meta used the tool to place pom-poms in the hands of a man running alone through the desert, while in another it transformed a parking lot where a man was skateboarding from dry ground into one covered by a splashing puddle.
Videos created by Movie Gen can be up to 16 seconds long, while audio clips can run up to 45 seconds, Meta said. It shared data from blind tests indicating that the model performs favorably against offerings from companies such as Runway, OpenAI, ElevenLabs and Kling.
The announcement comes as Hollywood has wrestled with how to harness generative AI video technology this year, after Microsoft-backed OpenAI in February first showed how its product Sora could create feature film-like videos in response to text prompts.
Technologists in the entertainment industry are eager to use such tools to improve and speed up filmmaking, while others worry about embracing systems that appear to be trained on copyrighted works without permission.
Lawmakers have also raised concerns about how AI-generated falsifications, or deepfakes, are being used in elections around the world, including in the US, Pakistan, India and Indonesia.
Meta spokespeople said the company is unlikely to release Movie Gen for open use by developers, as it has done with its Llama series of large language models, saying it weighs the risks of each model individually. They declined to comment specifically on Meta’s assessment of Movie Gen.
Instead, they said, Meta was working directly with the entertainment community and other content creators on using Movie Gen and would incorporate it into Meta’s own products sometime next year.
According to the blog post and a research paper on the tool released by Meta, the company used a mix of licensed and publicly available datasets to create Movie Gen.
OpenAI has met with Hollywood executives and agents this year to discuss potential partnerships with Sora, although no deals have reportedly resulted from those talks yet. Concerns about the company’s approach increased in May when actor Scarlett Johansson accused the ChatGPT creator of imitating her voice without permission for its chatbot.
Lions Gate Entertainment, the company behind “The Hunger Games” and “Twilight,” announced in September that it was giving AI startup Runway access to its film and television library to train an AI model. In return, the studio and its filmmakers can use the model to expand their work.
© Thomson Reuters 2024