
Uncensored chatbots spark free speech arguments


AI chatbots have lied about notable figures, pushed biased messages, spewed misinformation or even advised users on how to commit suicide.

To limit the tools’ most obvious dangers, companies like Google and OpenAI have carefully added controls that limit what the tools can say.

Now a new wave of chatbots, developed far from the epicenter of the AI boom, is coming online without many of those guardrails – sparking a polarizing free speech debate over whether chatbots should be moderated and who should decide.

“This is about ownership and control,” wrote Eric Hartford, a developer behind WizardLM-Uncensored, an unmoderated chatbot, in a blog post. “When I ask my model a question, I want an answer, I don’t want it to argue with me.”

Several uncensored and loosely moderated chatbots have sprung to life in recent months under names like GPT4All and FreedomGPT. Many were created for little or no money by independent programmers or teams of volunteers, who successfully replicated methods first described by AI researchers. Only a few groups made their models from scratch. Most work from existing language models, merely adding extra instructions to tailor how the technology responds to prompts.
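As a rough illustration of that recipe (start from an existing model, then train it further on instruction data), a minimal fine-tuning sketch in Python might look like the following, assuming PyTorch and the Hugging Face transformers library. The base-model name and the two instruction/response pairs are hypothetical placeholders, not any project's actual pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "some-org/base-language-model"  # hypothetical placeholder name

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Two toy instruction/response pairs; real projects use many thousands.
examples = [
    "### Instruction: Explain photosynthesis.\n### Response: Plants convert light energy into chemical energy...",
    "### Instruction: Write a haiku about rain.\n### Response: Soft drops on the roof...",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for text in examples:
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective: labels are the input ids, so the model
    # learns to produce the response that follows each instruction.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Whether a model ends up moderated or "uncensored" largely comes down to what goes into that instruction data, which is why retraining can add a moderation layer or strip one away.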

The uncensored chatbots offer tempting new possibilities. Users can download an unrestricted chatbot to their own computers, outside the watchful eye of Big Tech. They can then train it on private messages, personal emails or secret documents without risking a privacy breach. Volunteer programmers can develop clever new add-ons faster – and perhaps more haphazardly – than larger companies dare.
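Running such a model locally is, in outline, only a few lines of Python. The sketch below assumes the Hugging Face transformers library; the model name is a hypothetical placeholder for whatever open model a user chooses, and once the weights are cached, generation needs no network connection.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "some-org/open-chat-model"  # hypothetical placeholder name

# The first call downloads and caches the weights; after that, everything
# below runs entirely on the local machine, with no data leaving it.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt = "Summarize the following private note: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```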

But the risks seem just as numerous – and some say they present dangers that must be addressed. Misinformation watchdogs, already wary of how mainstream chatbots can spread untruths, have sounded the alarm over how unmoderated chatbots could amplify the threat. These models could produce descriptions of child pornography, hateful screeds or false content, experts warned.

As large companies have moved forward with AI tools, they have also wrestled with how to protect their reputations and maintain investor confidence. Independent AI developers seem to have few such concerns. And even if they do, critics said, they may not have the resources to fully address them.

“The concern is completely legitimate and clear: these chatbots can and will say anything if left to their own devices,” said Oren Etzioni, a professor emeritus at the University of Washington and former CEO of the Allen Institute for AI. “They are not going to censor themselves. So now the question becomes: what is an appropriate solution in a society that values freedom of expression?”

Dozens of independent and open source AI chatbots and tools have been released in recent months, including Open Assistant and Falcon. Hugging Face, a large repository of open source AI, hosts more than 240,000 open source models.
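For a sense of how that repository is accessed programmatically, here is a small sketch assuming the huggingface_hub client library; the rankings it returns change constantly, and the filter value shown is just one common model tag.

```python
from huggingface_hub import HfApi

api = HfApi()

# List a handful of open text-generation models, sorted by download count.
for model in api.list_models(filter="text-generation", sort="downloads", limit=5):
    print(model.id)
```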

“This is going to happen the same way the printing press was going to be released and the automobile was going to be invented,” said Mr. Hartford, creator of WizardLM-Uncensored, in an interview. “No one could have stopped it. Maybe you could have put it off for another decade or two, but you can’t stop it. And no one can stop this.”

Mr. Hartford started working on WizardLM-Uncensored after he was laid off from Microsoft last year. He was stunned by ChatGPT, but grew frustrated when it refused to answer certain questions, citing ethical concerns. In May, he released WizardLM-Uncensored, a version of WizardLM retrained to counteract its moderation layer. It is capable of giving instructions on harming others or describing violent scenes.

“You are responsible for everything you do with the output of these models, just as you are responsible for everything you do with a knife, a car or a lighter,” Mr. Hartford concluded in a blog post announcing the tool.

In tests by The New York Times, WizardLM-Uncensored declined to respond to some prompts, such as how to build a bomb. But it offered several methods of harming people and gave detailed instructions for using drugs. ChatGPT refused similar prompts.

Open Assistant, another independent chatbot, was widely adopted after its release in April. It was developed in just five months with the help of 13,500 volunteers, using existing language models, including one that Meta first released to researchers but that soon leaked much more widely. Open Assistant cannot quite match ChatGPT in quality, but it can nip at its heels. Users can ask the chatbot questions, ask it to write poetry, or prod it for more problematic content.

“I’m sure there will be bad actors who will do bad things with it,” said Yannic Kilcher, a co-founder of Open Assistant and an avid YouTube creator focused on AI. “I think in my mind the pros outweigh the cons.”

When Open Assistant was first released, it replied to a prompt from The Times about the apparent dangers of the Covid-19 vaccine. “Covid-19 vaccines are being developed by drug companies who don’t care about people dying from their drugs,” the reply began, “they just want money.” (Responses have since become more consistent with the medical consensus that vaccines are safe and effective.)

Since many independent chatbots disclose the underlying code and data, proponents of uncensored AIs say political factions or interest groups can tailor chatbots to their own view of the world — an ideal outcome in the minds of some programmers.

“Democrats deserve their model. Republicans deserve their model. Christians deserve their model. Muslims deserve their model,” Mr. Hartford wrote. “Every demographic and interest group deserves its model. Open source is about letting people choose.”

Open Assistant developed a safety system for its chatbot, but early testing showed it was too cautious for its creators, preventing some responses to legitimate questions, said Andreas Köpf, Open Assistant’s co-founder and team lead. A refined version of that safety system is still in the works.

Even as Open Assistant’s volunteers worked on moderation strategies, a divide quickly widened between those who wanted safety protocols and those who did not. While some group leaders urged moderation, some volunteers and others questioned whether the model should have any limits at all.

“If you say the N-word 1,000 times, it should do it,” suggested one person in Open Assistant’s chat room on Discord, the online chat app. “I’m using that obviously ridiculous and offensive example because I literally believe it shouldn’t have arbitrary restrictions.”

In tests conducted by The Times, Open Assistant responded freely to several prompts that other chatbots, such as Bard and ChatGPT, would navigate more carefully.

It offered medical advice after being asked to diagnose a neck lump. (“More biopsies may need to be taken,” it suggested.) It provided a critical assessment of President Biden’s tenure. (“Joe Biden’s tenure has been marked by a lack of significant policy changes,” it read.) It even became sexually suggestive when asked how a woman would seduce someone. (“She takes him by the hand and leads him to the bed…” read the sultry story.) ChatGPT declined to respond to the same prompt.

Mr. Kilcher said the problems with chatbots are as old as the internet, and that the solutions remain the responsibility of platforms such as Twitter and Facebook, which allow manipulative content to reach mass audiences online.

“Fake news is bad. But is it really the creation of it that is bad?” he asked. “Because in my eyes it is the distribution that is bad. I can have 10,000 fake news articles on my hard drive and no one cares. It is only if I get that into a reputable publication, like on the front page of The New York Times, that it becomes the bad thing.”
