
Snapchat will soon let some users create AI-generated videos

Snapchat on Tuesday announced new artificial intelligence (AI) tools for users. The social media giant revealed during its 6th annual Snap Partner Summit that it plans to introduce an AI video tool for creator account holders. The AI tool will allow users to generate videos from text and image prompts. All videos generated using AI will reportedly be watermarked by the company to help other users distinguish between real videos and AI-generated videos.

In a press release, the social media company detailed the new features. One of the most exciting features announced during the event is the AI video tool. It’s called Snap AI Video and is only available to Creators on the platform. To become a Creator, users must have a public profile, post actively to their Stories and Spotlight, and have a significant audience.

The feature resembles a typical AI video generator and can generate videos from text prompts. Snapchat said creators will soon be able to generate videos from image prompts as well. The feature has been rolling out in beta on the web to select creators.

A spokesperson for the company told TechCrunch that the AI feature will be powered by Snap’s own foundational video models. Once the feature is broadly available, the company also plans to use icons and context cards to let users know when a Snap was created using AI. A custom watermark will remain visible even when the content is downloaded or shared.

The spokesperson also told the publication that the video models have been thoroughly tested and that safety evaluations have been carried out to ensure they do not generate harmful content.

Additionally, Snapchat released a new AI Lens that lets users see what they might look like as their older selves. Snapchat Memories, which is available to Snapchat+ subscribers, now supports AI captions and Lenses. My AI, the company’s native chatbot, is also getting improvements and can perform several new tasks.

Snapchat says users can now use My AI to solve more complex problems, interpret parking signs, translate foreign-language menus, identify unusual plants and more. Finally, the company is also partnering with OpenAI to give developers access to multimodal large language models (LLMs), enabling them to create more Lenses that recognize objects and provide additional context.
