Turn your selfie into an action movie with this new AI image-to-video feature
AI-powered video creator Runway has added the promised image-to-video feature to its Gen-3 model, released a few weeks ago, and it may be just as impressive as advertised. The feature addresses the biggest limitations of the Gen-2 model released early last year: the new tool is far better at character consistency and hyper-realism, making it a more powerful option for creators looking to produce high-quality video content.
Runway’s Gen-3 model is still in alpha testing, available only to subscribers who pay $12 per month per editor for the most basic plan. The model had already generated plenty of interest when it launched with only text-to-video capabilities. But however good a text-to-video engine is, it has inherent limitations, especially in making the characters in a video look the same across multiple prompts and appear grounded in the real world. Without that visual continuity, it’s difficult to craft a story. In previous iterations of Runway, users often struggled to keep characters and settings consistent across scenes when relying solely on text prompts.
Providing reliable consistency in character and environment design is no small feat, but using an initial image as a reference point helps maintain coherence between shots. In Gen-3, Runway’s AI can turn that reference image into a 10-second video, guided by additional motion or text prompts within the platform. You can see how it works in the video below.
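To make that flow concrete for readers who think in code, here is a minimal sketch of what an image-plus-prompt generation request could look like. Runway’s Gen-3 alpha is currently available only through its web app, so the endpoint, field names, and response shape below are illustrative assumptions, not Runway’s actual API.

```python
# Hypothetical sketch of an image-to-video request. Gen-3 alpha is exposed
# only through Runway's web app at the time of writing, so this endpoint,
# its fields, and the job/polling shape are illustrative assumptions.
import time
import requests

API_BASE = "https://api.example-video-host.com/v1"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential


def image_to_video(image_url: str, prompt: str, duration_s: int = 10) -> str:
    """Submit a reference image plus a motion/text prompt, then poll for the result."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    # The reference image anchors character and environment consistency;
    # the prompt steers camera movement and action.
    job = requests.post(
        f"{API_BASE}/image-to-video",
        headers=headers,
        json={"image_url": image_url, "prompt": prompt, "duration": duration_s},
        timeout=30,
    ).json()

    # Video generation is slow and asynchronous, so poll until the job finishes.
    while True:
        status = requests.get(
            f"{API_BASE}/jobs/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)


# Example: a selfie becomes a 10-second clip with a prompted camera move.
print(image_to_video("https://example.com/selfie.jpg",
                     "slow push-in, subject turns toward the camera"))
```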
From photos to movies
Runway’s image-to-video feature does more than keep people and backgrounds consistent when viewed from a distance. Gen-3 also includes Runway’s lip-sync feature, so a speaking character moves their mouth in a way that matches the words they’re saying. A user tells the AI model what they want their character to say, and the mouth movement is animated to match. The combination of synchronized dialogue and realistic character movement will interest many marketing and advertising teams looking for new and, ideally, cheaper ways to produce video.
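Continuing the hypothetical sketch above, scripted dialogue might plug into the same request as one more field. The `dialogue` and `lip_sync` parameters here are assumptions for illustration, not Runway’s real interface.

```python
# Hypothetical extension of the earlier sketch: pass scripted dialogue so the
# generated character's mouth movement is animated to match. The endpoint and
# the "dialogue"/"lip_sync" fields are illustrative assumptions.
import requests

API_BASE = "https://api.example-video-host.com/v1"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential


def image_to_talking_video(image_url: str, dialogue: str) -> dict:
    """Submit a reference image plus a line of dialogue for lip-synced speech."""
    response = requests.post(
        f"{API_BASE}/image-to-video",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "image_url": image_url,
            "dialogue": dialogue,  # the text the character should speak
            "lip_sync": True,      # request mouth movement matched to the words
        },
        timeout=30,
    )
    return response.json()  # a job handle to poll, as in the earlier sketch


job = image_to_talking_video("https://example.com/spokesperson.jpg",
                             "Our summer sale starts Friday.")
```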
Runway isn’t done building out the Gen-3 platform. The next step is to bring the same improvements to the video-to-video option, which keeps the same motion but renders it in a different style: a human running down the street becomes, say, an animated anthropomorphic fox running through a forest. Runway is also bringing its control features to Gen-3, including Motion Brush, Advanced Camera Controls, and Director Mode.
AI video tools are still in the early stages of development, with most models excelling at short-form content but struggling with longer stories. That puts Runway and its new features in a strong market position, but it’s far from alone. Midjourney, Ideogram, Leonardo (now owned by Canva), and others are all racing to build the definitive AI video generator. Naturally, they’re all keeping a close eye on OpenAI and its Sora video generator. OpenAI has brand awareness on its side, among other advantages; Toys “R” Us even created a short commercial film using Sora and premiered it at the Cannes Lions Festival. Still, the story of AI video generators is only in its first act, and the triumphant winner’s slow-motion cheer at the end is far from inevitable.