Google is joining the effort to help identify content created with AI

Google, whose work in artificial intelligence has helped make AI-generated content much easier to create and distribute, now wants to ensure that such content is also traceable.

The tech giant said Thursday that it was collaborating on the development of digital content credentials, a kind of “food label” that identifies when and how a photo, a video, an audio clip or other file was produced or modified, including with AI. The company will work with companies such as Adobe, the BBC, Microsoft and Sony to refine the technical standards.
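
To give a rough sense of the idea, the sketch below models the kind of fields such a “food label” might carry as a plain Python dictionary. The field names and structure are illustrative assumptions only and are not the actual C2PA manifest format.

# Hypothetical, simplified sketch of a content credential ("food label").
# Field names are illustrative assumptions, not the real C2PA schema.
credential = {
    "file": "photo.jpg",                       # the asset the label describes
    "created": "2024-02-08T14:32:00Z",         # when it was produced
    "tool": "ExampleCamera / ExampleAIModel",  # what produced it
    "ai_generated": True,                      # whether AI was involved
    "edits": [                                 # how it was later modified
        {"action": "generative_fill", "timestamp": "2024-02-08T15:01:00Z"},
    ],
    "signature": "<cryptographic signature binding this record to the file>",
}

# A viewer or platform could surface these fields to users.
print("AI generated:", credential["ai_generated"])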

The announcement follows a similar pledge made on Tuesday by Meta, which, like Google, has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels identifying such material.

Google, which has poured money into its artificial intelligence initiatives for years, said it would explore how to incorporate the digital certification into its own products and services, although it did not specify the timing or scope. Its Bard chatbot is connected to some of the company's most popular consumer services, such as Gmail and Docs. On Google-owned YouTube, which will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating about current events in voices powered by text-to-speech services.

Recognizing where online content comes from and how it has been altered is a high priority for lawmakers and technology watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic images and audio produced by artificial intelligence, along with unreliable AI detection tools, have made people even more doubtful about the authenticity of what they see and hear on the internet.

Equipping digital files with a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee of one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards are supported by news organizations such as The New York Times, as well as by camera manufacturers, banks and advertising agencies.

Laurie Richardson, Google's vice president of trust and security, said in a statement that the company hoped its work would “provide important context to people, helping them make more informed decisions.” She noted Google's other efforts to give users more information about the online content they encountered, including tagging AI material on YouTube and offering details about images in Search.

Attempts to tie credentials to metadata, the underlying information embedded in digital files, are not foolproof.

OpenAI said this week that its AI image generation tools would soon add watermarks to images that follow the C2PA standards. Starting Monday, images generated by its online chatbot, ChatGPT, and its standalone image generation technology, DALL-E, will include both a visual watermark and hidden metadata designed to identify them as created by artificial intelligence. The move is “not a silver bullet to address provenance issues,” OpenAI said, adding that the tags “can easily be removed accidentally or intentionally.”
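
As a minimal sketch of why such tags are fragile, the Python snippet below uses the Pillow imaging library to print whatever metadata an image exposes and then re-saves only the pixels to a new file, which silently leaves that metadata behind. The filename generated.png is a placeholder, not a file produced by any of these services.

from PIL import Image  # Pillow imaging library

# Placeholder filename; any image with embedded metadata will do.
img = Image.open("generated.png")
print("Embedded info keys:", list(img.info.keys()))
print("EXIF entries:", dict(img.getexif()))

# Copying only the pixels into a fresh image and saving it produces a
# file without the original metadata, one way provenance tags get lost.
pixels_only = Image.new(img.mode, img.size)
pixels_only.putdata(list(img.getdata()))
pixels_only.save("generated_stripped.png")

print("After re-save:", dict(Image.open("generated_stripped.png").getexif()))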

(The New York Times Company is suing OpenAI and Microsoft for copyright infringement, accusing the tech companies of using Times articles to train AI systems.)

There is “a shared sense of urgency” to strengthen trust in digital content, according to a blog post last month from Andy Parsons, the senior director of the Content Authenticity Initiative at Adobe. The company released artificial intelligence tools last year, including the AI art generation software Adobe Firefly and a Photoshop tool known as generative fill, which uses AI to expand a photo beyond its boundaries.

“The stakes have never been higher,” Mr. Parsons wrote.

Cade Metz contributed reporting.
