Can artificial intelligence help solve difficult problems? It’s a generic question that’s heard more and more these days, and the answer is usually no. But a new study published in the journal Science offers hope that large language models could be a useful tool for changing the minds of conspiracy theorists who believe in incredibly stupid things.
If you’ve ever talked to someone who believes in ridiculous conspiracy theories (from the belief that the Earth is flat to the idea that humans never landed on the Moon), you know that they can be pretty stubborn. They often resist changing their minds, becoming more and more entrenched as they insist that some feature of the world is actually explained by a very far-fetched theory.
The new paper, titled “Durably Reducing Conspiracy Beliefs Through Dialogues with AI,” tested AI’s ability to converse with people who believed in conspiracy theories and convince them to reconsider their worldview on a particular topic.
The study consisted of two experiments with 2,190 Americans who used their own words to describe a conspiracy theory they sincerely believed in. Participants were encouraged to explain the evidence they believed supported their theory and then engaged in a conversation with a bot built using the GPT-4 Turbo language model, which would respond to evidence provided by the human participants. The control condition consisted of people talking to the AI chatbot about some topic unrelated to conspiracy theories.
The study’s authors wanted to try AI because they suspected that the real problem with tackling conspiracy theories is their sheer number: combating those beliefs requires a level of specificity that can be difficult to achieve without special tools. And the authors, who recently spoke with Ars Technica, were encouraged by the results.
“The AI chatbot’s ability to sustain personalized counterarguments and deep conversations reduced their conspiracy beliefs for months, challenging research suggesting such beliefs are immune to change,” Ekeoma Uzogara, an editor, wrote about the study.
The AI chatbot, known as DebunkBot, is now publicly available for anyone who wants to try it out. And while more studies are needed on these kinds of tools, it’s encouraging to see people finding them useful in combating misinformation. Because, as anyone who has spent time on the internet recently can tell you, there is a lot of nonsense out there.
From Trump’s lies about Haitian immigrants eating cats in Ohio to the idea that Kamala Harris wore a secret earpiece hidden in her earring during the presidential debate, countless new conspiracy theories have emerged on the internet this week alone. And there’s no sign that the pace is going to slow down anytime soon. If AI can help fight that, it can only be good for the future of humanity.