
Google will soon make it easier to detect AI-generated images

Google announced Tuesday that it is introducing new ways for users to tell whether an image was created with generative artificial intelligence (AI). Alongside its in-house work on watermarking AI-generated content, the tech giant joined the Coalition for Content Provenance and Authenticity (C2PA) as a member of its steering committee in February. After working with other industry players to develop a new technical standard, the company is now integrating it into images accessible through its tools.

Google will help users identify AI-generated images

The risk with AI-generated images is that many digitally created or enhanced images are indistinguishable from real photographs. This has fueled the problem of deepfakes, where a realistic AI-generated image of a person, place or event is passed off as real in order to spread misinformation.

Google stated in a blog post that it worked with other members of the coalition during the first half of the year to develop version 2.1 of the technical standard known as Content Credentials. The new version is more resistant to various types of tampering and imposes stricter technical requirements. This standard is now being added to images surfaced through Google's tools.

The tech giant said that Content Credentials will be integrated into images that appear in Google Images, Lens and Circle to Search. This means that when users open the “About this image” panel for an image, they will be able to check its C2PA metadata to see whether it was created or edited with AI tools.
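The same Content Credentials can also be inspected outside Google's products. The article does not describe any tooling for this, but as a rough sketch, the open-source `c2patool` CLI maintained by the C2PA community can print an image's embedded manifest store as JSON; the Python wrapper below assumes that tool is installed and that its default output is JSON, which may differ between versions.

```python
import json
import subprocess


def read_content_credentials(image_path: str) -> dict | None:
    """Return the C2PA manifest store embedded in an image, if any.

    Shells out to the open-source `c2patool` CLI, which prints the
    manifest store as a JSON report when given an image path
    (assumption: default output format is JSON in the installed version).
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # No Content Credentials found, or the tool could not read the file.
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_content_credentials("example.jpg")  # hypothetical file
    if manifest is None:
        print("No Content Credentials embedded in this image.")
    else:
        # The active manifest records which tool (for example, an AI image
        # generator or editor) produced or last modified the image.
        print(json.dumps(manifest, indent=2))
```

This is essentially what "About this image" surfaces for users: the provenance claims recorded in the manifest, including whether a generative AI tool was involved.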

In addition, Google said it plans to integrate C2PA metadata into its advertising systems, where the data will inform the company’s core policies and enforcement strategies going forward. It is also exploring ways to surface C2PA information to viewers on YouTube, so they can tell whether a video was recorded on camera or created digitally.

Notably, the tech giant has also developed its own watermarking technology for AI content, called SynthID. Created by Google DeepMind, the system embeds information into an image’s pixels in a way that remains invisible to the eye but can be detected with special tools.
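Google has not published SynthID’s internals, which rely on a learned, robust embedding rather than anything as simple as the sketch below. Purely as a toy illustration of the general idea of hiding machine-readable information in pixels without visibly changing the image, here is a least-significant-bit watermark using Pillow and NumPy (all names and values are illustrative, not Google’s method).

```python
import numpy as np
from PIL import Image

# Toy illustration only: hide a short bit string in the least significant
# bit of the blue channel, then read it back. SynthID uses a far more
# robust, learned embedding; this merely shows how information can be
# invisible to the eye yet recoverable by a detection tool.


def embed_bits(img: Image.Image, bits: str) -> Image.Image:
    pixels = np.array(img.convert("RGB"))
    blue = pixels[..., 2].flatten()
    for i, bit in enumerate(bits):
        blue[i] = (blue[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    pixels[..., 2] = blue.reshape(pixels[..., 2].shape)
    return Image.fromarray(pixels)


def extract_bits(img: Image.Image, n_bits: int) -> str:
    blue = np.array(img.convert("RGB"))[..., 2].flatten()
    return "".join(str(blue[i] & 1) for i in range(n_bits))


if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), color=(120, 180, 200))
    marked = embed_bits(original, "10110010")  # visually identical to the eye
    print(extract_bits(marked, 8))             # prints "10110010"
```

Unlike this naive scheme, which breaks under recompression or cropping, SynthID is designed to survive common edits, which is why Google pairs it with C2PA metadata rather than relying on either approach alone.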
