Meta identifies networks pushing misleading content likely generated by AI
Meta said on Wednesday that it had found “likely AI-generated” content used deceptively on its Facebook and Instagram platforms, including comments praising Israel’s handling of the Gaza war posted beneath posts from international news organizations and U.S. lawmakers.
The social media company said in a quarterly security report that the accounts posed as Jewish students, African Americans and other concerned citizens, targeting audiences in the United States and Canada. It attributed the campaign to the Tel Aviv-based political marketing firm STOIC.
STOIC did not immediately respond to a request for comment on the allegations.
Why it’s important
While Meta has found simple AI-generated profile photos in influence operations since 2019, the report is the first to disclose the use of text-based generative AI technology since it emerged in late 2022.
Researchers fear that generative AI, which can quickly and cheaply produce human-like text, images and audio, could lead to more effective disinformation campaigns and influence elections.
In a press release, Meta security officials said they had stopped the Israeli campaign early and did not believe new AI technologies were hampering their ability to disrupt influence networks, which are coordinated efforts to spread messages.
Executives said they had not seen these networks deploy AI-generated images of politicians realistic enough to be mistaken for authentic photos.
Key quote
“There are several examples in these networks of how they’re probably using generative AI tooling to create content. Maybe it gives them the ability to do that faster or do that at a higher volume. But it hasn’t really impacted our ability to detect them,” said Mike Dvilyanski, Meta’s head of threat investigations.
By the numbers
The report highlights six covert influence operations that Meta disrupted in the first quarter.
In addition to the STOIC network, Meta shut down an Iran-based network focused on the Israel-Hamas conflict, though it found no use of generative AI in that campaign.
Context
Meta and other tech giants are grappling with how to address potential misuse of new AI technologies, particularly in elections.
Researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos containing voting-related disinformation, despite the companies having policies against such content.
The companies have focused on digital labeling systems to flag AI-generated content as it’s created. However, the tools don’t work with text, and researchers question their effectiveness.
What’s next
Meta will be tested by elections in the European Union in early June and in the United States in November.
© Thomson Reuters 2024