Explicit deepfake images of Taylor Swift evade security and swamp social media

Fake, sexually explicit images of Taylor Swift, likely generated by artificial intelligence, spread rapidly across social media platforms this week, upsetting fans who saw them and prompting calls from lawmakers to protect women and crack down on the platforms and technologies that spread such images.

One image shared by a user on X, formerly Twitter, was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that posted the doctored images of Ms. Swift, but the images were shared on other social media platforms and continued to spread despite those companies' efforts to remove them.

While X said it was in the process of removing the images, fans of the pop superstar flooded the platform in protest. They posted related keywords, along with the phrase “Protect Taylor Swift,” in an attempt to drown out the explicit images and make them harder to find.

Reality Defender, a cybersecurity company focused on detecting AI, determined that the images were likely created using a diffusion model, an AI-powered technology accessible through more than 100,000 apps and publicly available models, said Ben Colman, the company's co-founder and chief executive.

As the AI industry booms, companies are racing to release tools that allow users to create images, videos, text and audio recordings with simple prompts. The AI tools are extremely popular, but have made it easier and cheaper than ever to create so-called deepfakes, which depict people doing or saying things they have never done before.

Researchers now fear that deepfakes are becoming a powerful disinformation force, allowing ordinary internet users to create non-consensual nude images or embarrassing images of political candidates. Artificial intelligence was used to create fake robocalls of President Biden during the New Hampshire primary, and Ms. Swift was featured in deepfake ads this month promoting cooking utensils.

“Different kinds of non-consensual pornography have always been a dark undercurrent of the internet,” said Oren Etzioni, a computer science professor at the University of Washington who works on deepfake detection. “Now it is a new strain that is particularly harmful.”

“We're going to see a tsunami of these AI-generated explicit images. The people who generated this see this as a success,” Mr. Etzioni said.

X said it had a zero-tolerance policy towards the content. “Our teams are actively removing all identified images and taking appropriate action against the accounts responsible for posting them,” a representative said in a statement. “We are closely monitoring the situation to ensure that any further violations are addressed immediately and the content is removed.”

Although many of the companies producing generative AI tools prohibit their users from creating explicit images, people are finding ways to break the rules. “It's an arms race, and it seems like when someone comes up with a guardrail, someone else figures out how to jailbreak,” Mr. Etzioni said.

The images appear to have originated in a channel on the messaging app Telegram dedicated to producing such images, according to 404 Media, a technology news site. But the deepfakes gained widespread attention after they were posted on X and other social media services, where they quickly spread.

Some states have restricted pornographic and political deepfakes. But the restrictions have not had a major impact and there are no federal regulations on such deepfakes, Mr. Colman said. Platforms have tried to tackle deepfakes by asking users to report them, but that method hasn't worked, he added. By the time they are flagged, millions of users have already seen them.

“The toothpaste is already out of the tube,” he said.

Ms. Swift's publicist, Tree Paine, did not immediately respond to requests for comment late Thursday.

Ms. Swift's deepfakes led to renewed calls for action from lawmakers. Rep. Joe Morelle, a New York Democrat who introduced a bill last year that would make sharing such images a federal crime, said the images were an example of what happens to women “everywhere, every day.”

“I have repeatedly warned that AI could be used to generate non-consensual intimate images,” Senator Mark Warner, a Virginia Democrat and chairman of the Senate Intelligence Committee, said of the images on X. “This is a deplorable situation.”

Rep. Yvette D. Clarke, a New York Democrat, said advances in artificial intelligence have made creating deepfakes easier and cheaper.

“What happened to Taylor Swift is nothing new,” she said.