Dark corners of the web offer a glimpse into the nefarious future of AI

When the Louisiana parole board met in October to discuss the possible release of a convicted murderer, it called on a doctor with years of experience in mental health to talk about the inmate.

The parole board wasn’t the only group to pay attention.

A collection of online trolls took screenshots of the doctor from an online feed of her testimony and edited the images with AI tools to make her appear naked. They then shared the doctored files on 4chan, an anonymous message board known for promoting harassment and spreading hateful content and conspiracy theories.

It was one of several times that people on 4chan had used new AI-powered tools like audio editors and image generators to spread racist and offensive content about people who had appeared before the parole board, said Daniel Siegel, a graduate student at Columbia University who investigates how AI is exploited for malicious purposes. Mr. Siegel monitored activity on the site for several months.

The manipulated images and audio did not spread far beyond 4chan’s borders, Mr. Siegel said. But experts who monitor fringe message boards said the efforts offered a glimpse into how nefarious internet users could use advanced artificial intelligence tools to boost online harassment and hate campaigns in the coming months and years.

Callum Hood, head of research at the Center for Countering Digital Hate, said fringe sites like 4chan – perhaps the most infamous of them all – often provided early warning signals about how new technology would be used to project extreme ideas. Those platforms, he said, are filled with young people who are “very quickly adopting new technologies like AI” to “project their ideology back into mainstream spaces.”

These tactics, he said, are often adopted by some users on more popular online platforms.

Here are several problems arising from AI tools that experts discovered on 4chan — and what regulators and tech companies are doing about them.

AI tools like Dall-E and Midjourney generate new images from simple text descriptions. But a new wave of AI image generators is being created with the aim of creating fake porn, including removing clothing from existing images.

“They can use AI to create an image of exactly what they want,” Mr. Hood said of online hate and disinformation campaigns.

There is no federal law banning the creation of fake images of people, leaving groups like the Louisiana parole board scrambling to determine what can be done. The board opened an investigation after Mr. Siegel’s findings about 4chan.

“We would certainly question any images that are produced that portray our board members or participants in our hearings in a negative manner,” said Francis Abbott, executive director of the Louisiana Board of Pardons and Committee on Parole. “But we have to operate within the law, and whether it’s against the law or not – that’s for someone else to decide.”

Illinois expanded its law regulating revenge pornography so that targets of nonconsensual pornography created by AI systems can sue creators or distributors. California, Virginia and New York have also passed laws banning the distribution or creation of AI-generated pornography without consent.

Late last year, ElevenLabs, an AI company, released a tool that could create a convincing digital replica of a person’s voice saying everything typed into the program.

Almost as soon as the tool went live, users on 4chan distributed clips of a fake Emma Watson, the British actor, reading Adolf Hitler’s manifesto, “Mein Kampf.”

Using content from the Louisiana Parole Board hearings, 4chan users have since shared fake clips of judges making offensive and racist comments about suspects. Many of the clips were generated by ElevenLabs’ tool, said Mr. Siegel, who used an AI voice identifier developed by ElevenLabs to investigate their origins.

ElevenLabs rushed to impose limits, including requiring users to pay before they could access voice-cloning tools. But the changes did not appear to slow the spread of AI-created voices, experts said. Dozens of videos using fake celebrity voices circulate on TikTok and YouTube, many of them sharing political disinformation.

Some major social media companies, including TikTok and YouTube, have since required labels on some AI content. President Biden issued an executive order in October asking all companies to label such content and directing the Commerce Department to develop standards for watermarking and authenticating AI content.

As Meta looked to gain a foothold in the AI race, the company embraced a strategy of releasing its software code to researchers. The approach, commonly called “open source,” can speed up development by giving academics and technologists access to more raw material with which to find improvements and develop their own tools.

When the company released Llama, its large language model, to select researchers in February, the code was quickly leaked on 4chan. People there used it for different purposes: they modified the code to lower or eliminate guardrails, creating new chatbots capable of producing antisemitic ideas.

The effort provided a preview of how free-to-use and open-source AI tools can be adapted by tech-savvy users.

“While the model is not open to everyone, and some have attempted to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness,” a Meta spokeswoman said in an email.

In the months since, language models have been developed to reflect far-right talking points or to create more sexually explicit content. Image generators have been modified by 4chan users to produce nude images or racist memes, bypassing the controls of larger tech companies.
