Photo by Christin Hume on Unsplash

EXPERT REACTION: Using chatbots to reduce conspiracy beliefs

Peer-reviewed: This work was reviewed and scrutinised by relevant independent experts.

Personalised interactions with a chatbot could reduce belief in conspiracy theories, according to a US study involving 2,190 Americans who held conspiracy beliefs. Those who had tailored conversations with a chatbot instructed to “very effectively persuade” them against their conspiracy belief averaged a 20% reduction in these beliefs, sustained for at least two months. The researchers conclude that “conspiratorial rabbit holes may indeed have an exit.” They say this work shows the potential positive impact of responsibly used large language models, as well as the importance of minimising irresponsible use.

Journal/conference: Science

Research: Link to Paper 1 | Paper 2 | Paper 3

Organisation/s: Massachusetts Institute of Technology, USA

Funder: MIT Generative AI Initiative (D.G.R.) and John Templeton Foundation Grant 61779 (G.P.).

Media release

From: AAAS

An exit for even the deepest rabbit holes: Personalized conversations with chatbot reduce belief in conspiracy theories

Science

Personalized conversations with a trained artificial intelligence (AI) chatbot can reduce belief in conspiracy theories – even in the most obdurate individuals – according to a new study. The findings, which challenge the idea that such beliefs are impervious to change, point to a new tool for combating misinformation. “It has become almost a truism that people ‘down the rabbit hole’ of conspiracy belief are almost impossible to reach,” write the authors. “In contrast to this pessimistic view, we [show] that a relatively brief conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply entrenched.” Conspiracy theories – beliefs that some secret but influential malevolent organization is responsible for an event or phenomenon – are notoriously persistent and pose a serious threat to democratic societies. Yet despite their implausibility, a large fraction of the global population has come to believe in them, including as much as 50% of the United States population by some estimates. The persistent belief in conspiracy theories despite clear counterevidence is often explained by social-psychological processes that fulfill psychological needs and by the motivation to maintain identity and group memberships. Current interventions to debunk conspiracies among existing believers are largely ineffective.

Thomas Costello and colleagues investigated whether Large Language Models (LLMs) like GPT-4 Turbo can effectively debunk conspiracy theories by drawing on their vast information access and offering tailored counterarguments that respond directly to specific evidence presented by believers. In a series of experiments encompassing 2,190 conspiracy believers, participants engaged in several personalized interactions with an LLM, sharing their conspiratorial beliefs and the evidence they felt supported them. In turn, the LLM responded by directly refuting these claims through tailored, factual and evidence-based counterarguments. A professional fact-checker hired to evaluate the accuracy of the claims made by GPT-4 Turbo rated 99.2% of these claims as “true,” 0.8% as “misleading,” and none as “false”; no claims were found to contain liberal or conservative bias. Costello et al. found that these AI-driven dialogues reduced participants’ misinformed beliefs by an average of 20%. This effect lasted for at least 2 months and was observed across various unrelated conspiracy theories, as well as across demographic categories. According to the authors, the findings challenge the idea that evidence and arguments are ineffective once someone has adopted a conspiracy theory. They also question social-psychological theories that focus on psychological needs and motivations as the main drivers of conspiracy beliefs. “For better or worse, AI is set to profoundly change our culture,” write Bence Bago and Jean-François Bonnefon in a related Perspective. “Although widely criticized as a force multiplier for misinformation, the study by Costello et al. demonstrates a potential positive application of generative AI’s persuasive power.”
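
As an illustration of the setup the paper describes, the sketch below shows how such a personalised debunking dialogue could be wired up against a general-purpose LLM API. This is a minimal sketch, not the authors' code: the helper name, its parameters, and the prompt wording (other than the “very effectively persuade” instruction quoted above) are assumptions for the example.

```python
# Minimal sketch of a personalised debunking dialogue, assuming the
# OpenAI Python client (v1+). Not the study's actual implementation:
# the helper name, parameters, and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def debunking_turn(conversation, conspiracy_summary, supporting_evidence):
    """Return a tailored, evidence-based counterargument for one turn.

    conversation: prior {"role": ..., "content": ...} messages.
    conspiracy_summary / supporting_evidence: the participant's own
    statement of the belief and the evidence they feel supports it,
    collected before the dialogue begins (as in the study's design).
    """
    system_prompt = (
        "You are talking with someone who believes this conspiracy theory: "
        f"{conspiracy_summary}\n"
        f"The evidence they cite: {supporting_evidence}\n"
        # The quoted instruction below is the one reported in the study.
        "Very effectively persuade them against this belief, using "
        "accurate, factual counterarguments that respond directly to "
        "the specific points they raise."
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the study used GPT-4 Turbo
        messages=[{"role": "system", "content": system_prompt}, *conversation],
    )
    return response.choices[0].message.content
```

Feeding the participant's own summary and supporting evidence into the system prompt is what makes the rebuttal tailored rather than one-size-fits-all, which is the design feature the study credits for the effect.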

A version of the chatbot referenced in this paper can be visited at https://www.debunkbot.com/conspiracies.


Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Dr John Kerr, Department of Public Health, University of Otago

"Some conspiracy theories are relatively harmless; they are just kooky beliefs that don’t have any real impact on people's daily lives and the choices they make. But others can be harmful; people who believe them can make choices that hurt themselves or others. A good example is conspiracies that lead people to reject vaccination, leaving themselves or their children susceptible to preventable infectious diseases like measles.

"So, there is a good argument for exploring innovative ways of communicating with conspiracy believers, aiming to shift them to positions more aligned with the evidence.

"And that is exactly what this new US study does. Instead of a ‘one-size-fits-all’ approach to debunking conspiracies, the researchers instructed an AI chatbot to convince participants to abandon a particular conspiracy they believed. The results were impressive. People reported a lower level of belief in their conspiracy after chatting with the AI bot, even two months later.

"On the face of it, that seems like good news to most people, right? Fewer people out there making bad decisions due to believing in conspiracy theories based on no or shonky evidence.

"But in the bigger picture, what concerns me is that this is very much a double-edged sword. What if we flipped the switch and asked AI chatbots to instead convince people that conspiracies are true? Would it be equally persuasive going in the other direction? Previous research has shown that large language models like ChatGPT can produce quite convincing, yet utterly false, information about important health topics.

"One of the takeaways of this research is that people’s conspiracy beliefs are malleable—they can be nudged by well-tailored information, including that produced by AI.

"We need good guardrails in place for AI now to prevent actors from using these tools to spread harmful and inaccurate information at scale. As the authors themselves say, their findings emphasise the “pressing importance of minimising opportunities for this technology to be used irresponsibly."

Conspiracy theories in New Zealand
"Previous research has quizzed New Zealanders about which conspiracies they think are true or false, finding that half of Kiwis agreed with at least one of the conspiracies covered—including ‘home-grown’ local conspiracies. Some were relatively benign, like the claim that the All Blacks were deliberately poisoned before the 1995 Rugby World Cup (31% agreed). But others are more concerning, like believing the Christchurch Mosque Attacks were orchestrated to restrict gun laws (8% agreed), or that pharmaceutical companies are covering up evidence that vaccines cause autism (17% agreed).

"A more recent study tracked conspiracy beliefs over seven months in a sample of Australians and New Zealanders, finding that some people dip in and out of believing in conspiracies. This shows that some people don't hold these beliefs very strongly and that not every one falls down a conspiracy theory 'rabbit hole'.

Last updated: 12 Sep 2024 10:41am
Declared conflicts of interest:
No conflicts of interest.

Dr Ana Stojanov, Lecturer, University of Otago

"Having looked into how generative AI can support learning, I’m not surprised that it also shows promise in reducing conspiracy beliefs.

"As I’ve said before, AI can act like an “all-knowing other,” and this study confirms that tailored interventions are more effective than generic approaches. It’s no wonder misinformation lingers when standard messaging doesn’t answer people’s questions.

"The effect size here is impressive, though it’s predictable that the impact is smaller for those with deeply rooted beliefs. Still, the potential for both good and misuse is huge, making this research timely and important.

Last updated: 12 Sep 2024 10:20am
Declared conflicts of interest:
No conflicts of interest.

News for:

International
