Meta calls for industry efforts to label AI-generated content

Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called an emerging effort to detect artificially generated content “the most urgent task” facing the tech industry today.

On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content had been generated with artificial intelligence.

The standards could enable social media companies to quickly identify AI-generated content posted to their platforms and to add a label to that material. If widely adopted, the standards could help flag AI-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools for quickly and easily creating artificial posts.

“While this isn't a perfect answer, we didn't want perfection to be the enemy of good,” Mr. Clegg said in an interview.

He added that he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and flagging artificial content, making it easier for all of them to spot it.

As the United States heads into a presidential election year, industry watchers expect AI tools to be widely used to spread fake content and misinform voters. Over the past year, people have used AI to create and distribute fake videos of President Biden making false or inflammatory statements. The New Hampshire attorney general's office is also investigating a series of robocalls that appeared to use an AI-generated voice of Mr. Biden urging people not to vote in a recent primary.

Meta, which owns Facebook, Instagram, WhatsApp and Messenger, is in a unique position: It is developing technology to drive broad consumer adoption of AI tools while also being the world's largest social network, capable of distributing AI-generated content at scale. Mr. Clegg said Meta's position gave it particular insight into both the generation and the distribution sides of the issue.

Meta is focusing on a pair of technological specifications, the IPTC and C2PA standards, which record whether a piece of digital media is authentic in the metadata of its content. Metadata is the underlying information embedded in digital content that provides a technical description of that content. Both standards are already widely used by news organizations and photographers to describe photos or videos.
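To make the mechanics concrete: in a JPEG file, for instance, C2PA manifests are carried in JUMBF boxes inside APP11 marker segments, so the cheapest first pass a platform can make is simply scanning the file's segment list. The Python sketch below illustrates that first pass under those assumptions; it is not Meta's implementation, the file name is hypothetical, and it only locates APP11 payloads without parsing or cryptographically verifying a manifest.

```python
import struct

def find_app11_segments(path):
    """Return the raw payloads of a JPEG's APP11 marker segments.

    C2PA manifests travel in JUMBF boxes inside APP11 (0xFFEB)
    segments, so finding one suggests the file carries provenance
    metadata. This locates the segments only; it does not parse
    or verify a manifest.
    """
    payloads = []
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":        # SOI: every JPEG starts here
            return payloads                 # not a JPEG
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                break                       # truncated or malformed file
            if marker[1] == 0xDA:           # SOS: compressed data begins
                break
            size = f.read(2)
            if len(size) < 2:
                break
            (length,) = struct.unpack(">H", size)  # includes its own 2 bytes
            body = f.read(length - 2)
            if marker[1] == 0xEB:           # APP11 carries JUMBF/C2PA boxes
                payloads.append(body)
    return payloads

segments = find_app11_segments("photo.jpg")  # hypothetical file name
print(f"APP11 segments found: {len(segments)}")
```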

Adobe, which makes the Photoshop editing software, and a host of other technology and media companies have spent years lobbying their peers to adopt the C2PA standard and have formed the Content Authenticity Initiative. The initiative is a partnership among dozens of companies, including The New York Times, to combat misinformation and “add a layer of tamper-proof provenance to all types of digital content, starting with photos, video and documents,” according to the initiative.

Companies offering AI generation tools could add these markers to the metadata of the videos, photos or audio files they helped create. That would signal to social networks like Facebook, Twitter and YouTube that such content was artificial when it was uploaded to their platforms. Those companies, in turn, could add labels noting that the posts were AI-generated to inform users who viewed them.
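One concrete marker already standardized on the IPTC side is the “digital source type” field, whose trainedAlgorithmicMedia term designates fully generative media. The sketch below shows how an upload pipeline might check a file's embedded XMP for that term, assuming the Pillow imaging library; the function and file names are illustrative, and because metadata can be stripped, a negative result proves nothing about a file's origin.

```python
from PIL import Image  # third-party: pip install Pillow

# IPTC's DigitalSourceType vocabulary marks purely generative media
# with this term, which opt-in tools embed in a file's XMP metadata.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Best-effort scan of embedded XMP for the IPTC AI source-type term.

    Metadata is trivial to strip, so a miss proves nothing; a hit only
    means the generating tool chose to disclose itself.
    """
    with Image.open(path) as img:
        # Pillow exposes raw XMP under format-dependent info keys.
        xmp = img.info.get("xmp") or img.info.get("XML:com.adobe.xmp") or b""
    if isinstance(xmp, bytes):
        xmp = xmp.decode("utf-8", errors="ignore")
    return AI_SOURCE_TYPE in xmp

if looks_ai_generated("upload.png"):  # hypothetical upload
    print("Label as: Made with AI")
```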

Meta will also require users who post AI content to disclose that they have done so when uploading it to the company's apps. Failure to do so will result in penalties, though the company did not specify what those penalties might be.

Mr. Clegg also said that if the company determined that a digitally created or altered post “poses a particularly high risk of materially misleading the public about an important matter,” Meta could add a more prominent label to the post to give the public more information and context about its origin.

AI technology is advancing rapidly, and researchers have raced to keep up by building tools that can identify fake content online. Although companies like Meta, TikTok and OpenAI have developed ways to detect such content, technologists have quickly found ways to circumvent those tools. Artificially generated video and audio have proved even harder to detect than AI-generated photos.

(The New York Times Company is suing OpenAI and Microsoft for copyright infringement over their use of Times articles to train artificial intelligence systems.)

“Bad actors will always try to circumvent the norms we create,” Mr. Clegg said. He described the technology as both a “sword and a shield” for the industry.

Part of that difficulty stems from the fragmented way tech companies have approached the problem. Last fall, TikTok announced a new policy requiring users to add labels to videos or photos they uploaded that were created with AI. YouTube announced a similar initiative in November.

Meta's new proposal would attempt to tie some of those efforts together. Other industry efforts, such as the Partnership on AI, have brought together dozens of companies to discuss similar solutions.

Mr. Clegg said he hoped more companies would agree to adopt the standard, especially ahead of the presidential election.

“We felt particularly strongly that during this election year it would not be justified to wait for all the pieces of the puzzle to fall into place before taking action,” he said.
