You’ll want to try Meta’s awesome new AI video generator
Meta has unveiled another entrant in the AI video race that has seemingly taken over much of the industry in recent months. The tech giant's new model, called Movie Gen, does what its name suggests: it generates movies. Its feature list at launch is noticeably more comprehensive than most rivals', much like OpenAI's Sora model, which attracted so much attention when it was first shown off. Movie Gen also shares Sora's limited availability: access is restricted to select filmmakers working with Meta in lieu of a public rollout.
Movie Gen is impressive, judging by Meta's demonstrations of its ability to produce movies from text prompts. The model can generate 16-second videos and upscale them to 1080p resolution. The caveat is that those clips play at 16 frames per second, a frame rate slower than any film standard. Since the total number of generated frames stays the same, playing the footage at a more normal 24 fps limits a clip to roughly 10 seconds.
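As a back-of-the-envelope check on those numbers, the tradeoff is simple division, assuming a fixed budget of generated frames (16 seconds at 16 fps works out to 256 frames; Meta hasn't spelled this out, so treat it as an inference from the reported figures):

```python
# Rough sketch: how clip length scales with playback frame rate,
# assuming a fixed budget of generated frames (16 s at 16 fps = 256 frames).
# The frame budget is inferred from the reported numbers, not a Meta spec.

FRAME_BUDGET = 16 * 16  # 256 frames

for fps in (16, 24, 30):
    duration = FRAME_BUDGET / fps
    print(f"{fps} fps -> {duration:.1f} seconds")

# 16 fps -> 16.0 seconds
# 24 fps -> 10.7 seconds
# 30 fps -> 8.5 seconds
```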
Movie Gen Action
Still, 10 seconds can be enough with the right prompt. Meta also gave Movie Gen a personalization feature reminiscent of its Imagine tool, which creates images with you in them. Movie Gen can do the same with video, using a reference image to place a real person in a clip. If the model can regularly match the demonstrations, many filmmakers may be eager to give it a try.
And you aren't limited to writing a prompt, generating a video, and then rewriting the prompt in the hope that the next attempt comes out better. Movie Gen has a text-based editing feature: a follow-up prompt can fine-tune one part of the video or change an aspect of it as a whole. You can ask characters to wear different outfits or move the scene to a different location. That flexibility is impressive. The adjustments extend to camera movements, too, with panning and tracking requests understood by the AI and incorporated into the video or its subsequent edits. That awareness of objects and how they move likely builds on Meta's recently released SAM 2 model, which can tag and track objects in videos.
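To give a sense of what that tagging and tracking looks like, here is a minimal sketch loosely based on the video-predictor example in Meta's public sam2 repository. The checkpoint path, config name, frame directory, and click coordinates are placeholders, the API may differ between releases, and this illustrates SAM 2 on its own rather than anything Meta has confirmed about Movie Gen's internals.

```python
# Minimal SAM 2 video-tracking sketch, adapted from the example in the
# public facebookresearch/sam2 repository. Paths, checkpoint, and config
# names are placeholders; the API may differ between releases.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"  # placeholder path
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # placeholder config
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    # The clip is supplied as a directory of JPEG frames in this example.
    state = predictor.init_state(video_path="./my_clip_frames")

    # "Tag" an object by clicking a point on it in the first frame.
    points = np.array([[320, 240]], dtype=np.float32)  # (x, y) pixel coordinate
    labels = np.array([1], dtype=np.int32)             # 1 = positive click
    predictor.add_new_points_or_box(
        inference_state=state, frame_idx=0, obj_id=1, points=points, labels=labels
    )

    # "Track" it: propagate the object mask through the rest of the video.
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        print(frame_idx, object_ids, masks.shape)
```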
Audio AI future
Good visuals are becoming common among AI video generators, but Meta is also turning to the audio side of filmmaking. Movie Gen uses the video's text prompt to produce a soundtrack that matches what's on screen, bringing the sound of rain into a rainy scene or revving car engines into a movie set in a traffic jam. It will even create new background music that tries to match the atmosphere of the requested video. Human speech, however, is not currently part of Movie Gen's repertoire.
Meta has kept impressive AI models from the public before, most notably an AI voice generator that was said to be too good to release due to misuse concerns. The company hasn't said that this is the reason for keeping Movie Gen away from most people, but it wouldn't be surprising if it were a contributing factor.
Still, going the OpenAI Sora route means Meta will have to accept the risk that a more open rival grabs some of its potential market share. And there are plenty of AI video generators available or coming soon, including new or recently upgraded models from Runway, Pika, Stability AI, Hotshot, and Luma Labs' Dream Machine.