Microsoft Says AI Deepfake Abuse Should Be Illegal
Artificial intelligence seems to be everywhere these days, doing good by helping doctors detect cancer and doing bad by helping fraudsters rip off unsuspecting victims. Now, Microsoft says the U.S. government needs new laws to hold those who abuse AI accountable.
In a blog post Tuesday, Microsoft said US lawmakers should pass a “comprehensive anti-deepfake fraud bill” that targets criminals who use AI technologies to steal from or manipulate ordinary Americans.
“AI-generated deepfakes are realistic, easy for almost anyone to create, and are increasingly being used for fraud, abuse, and manipulation — particularly to target children and seniors,” Microsoft President Brad Smith wrote. “The biggest risk is not that the world will do too much to solve these problems. It’s that the world will do too little.”
Microsoft’s call for regulation comes as AI tools proliferate across the tech industry, giving criminals increasingly easy access to tools that can help them gain the trust of their victims. Many of these schemes abuse legitimate technology designed to help people write messages, conduct research for projects, and create websites and graphics. In the hands of fraudsters, those same tools can produce fake forms and credible-looking websites that fool users and steal from them.
“The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” Smith wrote. But he said governments should establish policies that “promote responsible AI development and use.”
Already behind
While AI chatbot tools from Microsoft, Google, Meta, and OpenAI have become widely and freely available only in recent years, the data on how criminals are abusing them is already staggering.
Earlier this year, AI-generated pornography of global star Taylor Swift spread “like wildfire” online, amassing more than 45 million views on X, according to a February report from the National Sexual Violence Resource Center.
“While deepfake software was not designed with the explicit intent of creating sexual images and videos, it has become its most common use today,” the organization wrote. Still, the group notes that “there are few legal recourses for victims of deepfake pornography,” despite widespread recognition of the problem.
A report this summer from the Identity Theft Resource Center found that fraudsters are increasingly using AI to create fake job postings, a new way to steal people’s identities.
“The rapid improvement in the look, feel, and messaging of identity fraud is almost certainly the result of the introduction of AI-driven tools,” the ITRC wrote in its June Trend Report.
All this is on top of the rapid spread of AI-manipulated online messages that attempt to undermine our shared understanding of reality. A recent example emerged shortly after the attempted assassination of former President Donald Trump in July. Manipulated photos that appeared to show Secret Service agents laughing as they escorted Trump to safety spread online. The original photo shows the agents with neutral facial expressions.
Just in the past week, X owner Elon Musk shared a video using a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to disparage President Joe Biden, describing Harris as a “diversity asset.” X’s rules prohibit users from sharing manipulated content, including “media that is likely to cause widespread confusion about public issues, affect public safety, or cause serious harm.” Musk has defended his post as a parody.
Microsoft’s Smith, for his part, said that while many experts focus on deepfakes being used to influence elections, “the broader role they play in these other forms of crime and abuse deserves equal attention.”