MIT study shows AI chatbots can reduce belief in conspiracy theories by 20%
The spread of conspiracy theories online has become a major problem, with some theories fueling real-world harm. A recent study from the MIT Sloan School of Management and Cornell University suggests that AI chatbots could be a powerful tool against these false beliefs. The research, published in Science, shows that conversations with large language models (LLMs) such as GPT-4 Turbo can reduce belief in conspiracy theories by around 20%.
How the study worked
Researchers, including Dr. Yunhao Zhang of the Psychology of Technology Institute and Thomas Costello of MIT Sloan, tested the effectiveness of AI chatbots by engaging 2,190 participants in text conversations about conspiracy theories they believed in. The AI was prompted to deliver compelling, fact-based counterarguments tailored to each participant's specific theory. According to the study, participants who interacted with the chatbot reported a significant decrease in their belief in those theories.
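The tailoring step described above can be sketched as simple prompt construction: the chatbot is given the participant's own theory and reasons before generating counterarguments. The wording and helper names below are illustrative assumptions, not the researchers' actual materials.

```python
# Illustrative sketch (not the study's real prompts): assemble a chat
# request that asks an LLM for counterarguments tailored to one
# participant's stated theory and reasons for believing it.

def build_debunking_prompt(theory: str, reasons: str) -> list[dict]:
    """Compose chat messages asking the model to rebut a specific theory."""
    system = (
        "You are a careful, factual assistant. The user believes the "
        "conspiracy theory described below. Respond with accurate, "
        "well-sourced counterarguments tailored to their stated reasons."
    )
    user = f"Theory I believe: {theory}\nWhy I believe it: {reasons}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_debunking_prompt(
    "The moon landing was staged",
    "The flag appears to wave even though the moon has no air.",
)
print(messages[1]["content"].splitlines()[0])
```

In the actual experiment these messages would then be sent to a model such as GPT-4 Turbo through a chat-completion API; only the prompt assembly, personalized per participant, is shown here.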
Accuracy and future implications
The study also checked the accuracy of the chatbot’s responses by having a professional fact-checker review its claims: 99.2% were rated accurate, supporting the reliability of the information the AI provided. The findings suggest that AI chatbots could be deployed across platforms to tackle misinformation and encourage critical thinking among users.
Next steps
While the results are promising, more research is needed to examine the long-term effectiveness of chatbots in changing beliefs and tackling different types of misinformation. Researchers such as Dr. David G. Rand and Dr. Gordon Pennycook highlight the potential of integrating AI into social media and other forums to improve public education and counter harmful conspiracy theories.