Election deepfakes are getting better (and easier to make)

These days, it doesn’t take much to make a pretty good deepfake, especially if someone has access to artificial intelligence, a decent gaming computer, and a bunch of audio and video clips of the person they’re trying to clone.

Scammers are already taking advantage of real-time video and audio cloning, duping everyone from companies that believe they're wiring money to a top executive, to parents who panic and transfer money after receiving a call for help from someone they think is their child.

And now, increasingly convincing fake videos, some of which are being reposted on social media and amplified by the likes of Donald Trump and Elon Musk, are being used to deceive Americans in the run-up to November’s presidential election. Experts fear these deepfakes could potentially influence how or even whether people vote.

“It’s important to make people aware of these things because the election is, what, three months away, and it’s already happening,” said Brandon Kovacs, a senior red teamer for the cybersecurity firm Bishop Fox. Red teamers help companies strengthen their cyber defenses by going after them to find security holes.

Election disinformation, whether spread by politicians or America’s enemies, is nothing new. What is new for the 2024 presidential election is the rise of open-source, AI-powered tools, complete with YouTube tutorials, that allow virtually anyone to create potentially convincing deepfakes and use them to spread disinformation.

“It’s not like you’re using some mysterious tool,” Kovacs said. “Everything is out there ready to go.”

Bishop Fox’s Brandon Kovacs shows how easy it is to deepfake someone by transforming himself into a female coworker.

Bree Fowler/CNET

That’s one of the main reasons Kovacs spent a long weekend at the Defcon conference in Las Vegas earlier this month, demonstrating how easy it is to create fake videos in the event’s massive AI Village. The annual gathering brings together tens of thousands of hackers and other cybersecurity professionals.

Using just a consumer gaming laptop, a basic DSLR camera, some lights and a green screen, Kovacs transformed eager Defcon attendees into everyone from the hackers on his own team to celebrities like Trump, Jackie Chan and Keanu Reeves.

While the results weren’t perfect, it was surprising how well the face-swapping software worked. Attendees were transformed into the deepfake person of their choice in real time on a nearby TV screen. Background scenes such as an office or a TV newsroom stood in for the green screen, and props such as wigs helped frame the swapped face and added elements of natural movement that made the overall image more convincing.

As the attendee moved and spoke, so did the deepfaked image. Voices were also cloned in real time, but were difficult to hear in the crowded convention center. There wasn’t much video lag, and Kovacs said using a more powerful computer instead of a consumer model would have minimized it significantly.
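The green-screen step in a rig like the one Kovacs demonstrated boils down to chroma keying: pixels that read as "screen green" are replaced with a background image before the composited frame is displayed. As a rough illustration (not the actual software or thresholds used at Defcon), a minimal chroma key can be sketched in a few lines of Python with NumPy:

```python
import numpy as np

def chroma_key(frame: np.ndarray, background: np.ndarray,
               green_min: int = 100, dominance: int = 40) -> np.ndarray:
    """Replace green-screen pixels in `frame` with `background`.

    A pixel counts as green screen when its green channel is bright
    (>= green_min) and clearly dominates red and blue (by >= dominance).
    Both arrays are H x W x 3 uint8 images; thresholds are illustrative.
    """
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    # Boolean mask of pixels to key out
    mask = (g >= green_min) & (g - r >= dominance) & (g - b >= dominance)
    out = frame.copy()
    out[mask] = background[mask]  # swap in the virtual backdrop
    return out
```

A production tool would run this (or a learned matte) per frame on the GPU alongside the face swap, which is why a more powerful machine reduces the lag Kovacs mentioned.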

The goal of the demonstrations was to raise awareness of how far deepfakes have come and to help computer defense professionals build better models to detect them. The data Kovacs collected will specifically be used to feed a deepfake detection model that the Defense Advanced Research Projects Agency has been working on.

Deepfakes as disinformation

Deepfakes don’t have to be cutting-edge to be convincing, especially when spread by a celebrity.

Trump recently posted pictures, at least some of which appeared to be AI-generated, to his Truth Social account that implied he was endorsed by megastar Taylor Swift and her fans. The images, which he captioned with "I accept," were originally posted on X, formerly known as Twitter, by a user who labeled them as satire. One of the images reposted on Trump's Truth Social account even has the word "satire" in the image text.

On the other hand, Trump also falsely accused Vice President Kamala Harris' campaign of deepfaking a photo taken at Detroit Wayne County Metropolitan Airport, saying she "AI'd" it to show a huge crowd that he claims didn't exist. But numerous other videos and photos of the event showed a crowd similar in size to that in Harris' campaign photo. Local reporters at the event estimated the crowd at around 15,000 people.

In the same Truth Social post, Trump also repeated his false claims about voter fraud by Democrats. Nearly four years after he was voted out of office, Trump continues to spread the lie that the 2020 election was rigged, despite no evidence to support it.

Representatives for both the Trump campaign and Swift did not respond to emails seeking comment.

Separately, Elon Musk, the Trump-supporting owner of the X platform, ran into trouble in July after he posted a video to X using voice cloning technology to imitate Harris’ voice, creating a deepfake voiceover that played over footage from one of her campaign videos.

Musk didn't initially label the post as satirical, but after receiving criticism, he clarified that it was meant as parody.

It’s still okay to be funny, right?

Whether it comes from talk show hosts or the internet, it’s hard to imagine modern American politics without satire, which, admittedly, can sometimes unintentionally misinform. But deepfakes could take things to a new level if the people behind them deliberately use them to spread disinformation for their own gain.

According to Adam Marrè, chief information security officer at cybersecurity firm Arctic Wolf, deepfakes need to convince only a limited number of people to be effective.

“When people are streaming through their feeds, they see this and I think some of them don’t understand that these are deepfakes, or maybe they don’t even care,” Marrè said. “And all of that is going to influence opinion in certain ways.”

That's why, Marrè said, it's crucial that social media companies, along with AI companies, do their best to unmask the people behind harmful deepfakes.

“I still worry that we’re relying on their goodwill, their desire to be a good citizen, to do this,” he said. “There’s no legal basis that we can use, or policymakers can use, to enforce this. That’s something we’re still missing.”
