Can AI make you less naive, or is it a conspiracy?
AI chatbots may have a well-known problem with hallucinating made-up information, but new research suggests they could be useful in combating unsubstantiated and hallucinatory ideas in the human mind. Scientists from MIT Sloan and Cornell University have published a paper in Science claiming that conversing with a chatbot driven by a large language model (LLM) reduces belief in conspiracy theories by about 20%.
To see how an AI chatbot might influence conspiracy beliefs, the scientists arranged for 2,190 participants to discuss conspiracy theories with a chatbot running OpenAI’s GPT-4 Turbo model. Participants were asked to describe a conspiracy theory they found credible, including the reasons and evidence they believed supported it. The chatbot, which was primed to be persuasive, then offered counterarguments tailored to those specific details as the conversation unfolded. The study also addressed the perennial AI hallucination problem by having a professional fact-checker evaluate 128 claims made by the chatbot during the study. The claims were 99.2% accurate, which the researchers attributed to the extensive online documentation of conspiracy theories represented in the model’s training data.
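For readers curious what "primed to be persuasive" might look like in practice, here is a minimal sketch using OpenAI's Python SDK. The system prompt, variable names, and example theory are illustrative assumptions, not the researchers' actual implementation:

```python
# Minimal sketch of a persuasion-oriented chatbot setup, assuming the
# OpenAI Python SDK (v1.x). Prompt wording and structure are illustrative
# only, not the study's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The participant's own statement of the theory, plus their supporting
# evidence, is injected into the system prompt so every counterargument
# can be tailored to that specific framing.
participant_theory = (
    "The moon landing was staged; the flag appears to wave "
    "even though there is no air on the moon."
)

messages = [
    {
        "role": "system",
        "content": (
            "You are a persuasive but factually rigorous assistant. "
            "The user believes the following conspiracy theory: "
            f"{participant_theory} "
            "Address their specific evidence with accurate, verifiable "
            "counterarguments. Be respectful and avoid condescension."
        ),
    },
    {"role": "user", "content": "Why does the flag move if there's no wind?"},
]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # the model family used in the study
    messages=messages,
)
print(response.choices[0].message.content)
```

In a multi-turn study like this one, each participant reply would be appended to `messages` before the next API call, so the model keeps refining its counterarguments against the participant's evolving objections.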
The idea behind harnessing AI to debunk conspiracy theories was that its deep information reservoirs and adaptive conversational abilities could reach people by personalizing the approach. Based on follow-up assessments 10 days and two months after the initial conversation, it worked. Most participants showed reduced belief in the conspiracy theories they had held, “from classic conspiracy theories related to the assassination of John F. Kennedy, aliens, and the Illuminati, to those related to current events such as COVID-19 and the 2020 U.S. presidential election,” the researchers found.
FactBot fun
The results came as a real surprise to the researchers, who had assumed that people would be largely unreceptive to evidence-based arguments debunking conspiracy theories. Instead, the findings show that a well-designed AI chatbot can present counterarguments effectively, leading to measurable changes in belief. The researchers concluded that AI tools could be a boon in the fight against misinformation, but one that demands caution, since the same technology can just as easily be used to mislead people with misinformation.
The research supports the value of projects with similar goals. For example, the fact-checking site Snopes recently released an AI tool called FactBot to help people figure out whether something they’ve heard is real or not. FactBot uses Snopes’ archive and generative AI to answer questions without making users sift through articles via more traditional search methods. Meanwhile, The Washington Post created Climate Answers to clear up confusion about climate change, with the outlet drawing on its climate journalism to answer questions on the topic directly.
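Tools like these typically follow a retrieval-augmented pattern: find relevant archive material first, then have the model answer grounded in it. Here is a toy sketch of that idea; the archive snippets and keyword-overlap scoring are invented for illustration and are not Snopes' actual FactBot pipeline:

```python
# Toy retrieval-augmented answering sketch. The archive entries and the
# naive keyword-overlap ranking are invented for illustration; real
# systems typically use embedding-based semantic search instead.
archive = [
    "Fact check: Bigfoot sightings have never produced verifiable physical evidence.",
    "Fact check: The 'flag waving on the moon' is explained by inertia, not wind.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank archive snippets by how many words they share with the question.
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

question = "Is Bigfoot real?"
context = "\n".join(retrieve(question, archive))

# The retrieved snippets are placed in the prompt so the model answers
# from vetted reporting rather than from its training data alone.
prompt = f"Answer using only this fact-checked context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

Grounding answers in a curated archive this way is also what keeps such a tool's error rate down, echoing the 99.2% accuracy figure the study reported for the chatbot's claims.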
“Many people who strongly believe in seemingly fact-resistant conspiratorial beliefs can change their minds when presented with compelling evidence. From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: conspiratorial rabbit holes may indeed have an exit,” the researchers wrote. “Practically, by demonstrating the persuasive power of LLMs, our findings emphasize both the potential positive impacts of generative AI when deployed responsibly and the pressing importance of minimizing opportunities for this technology to be used irresponsibly.”