
AI can debunk conspiracy theories. Can it help your uncle?

September 12, 2024

Could you convince a person their fringe beliefs are wrong? Maybe not, but a new experimental chatbot has shown it’s up to the task, which is welcome news for dinner hosts ahead of Thanksgiving.

The US Apollo missions sent astronauts to the moon on multiple occasions between 1969 and 1972. Some believe these events were faked. Image: Flickr.com

Astronauts walked on the moon.

There are some people who don't believe that.

This and other conspiracy theories are very real, important and hard-to-shake beliefs for the people who swear by them.

But people might budge on these beliefs. A new study from American psychologists, published today in the journal Science, suggests a simple back-and-forth discussion with a specialized chatbot can reduce confidence in such conspiracy beliefs.

And by some margin.

Across two experiments, more than 3,000 self-described believers saw the intensity of their conviction, both in the specific theory they chose and in conspiracies generally, drop by around 20% following conversations with a specially developed platform dubbed "debunkbot."

The change of viewpoint was sustained in most cases for at least two months, which was the extent of the follow-up period, suggesting the interactions may have a lasting effect. 

Cat-eating conspiracies and a bot to stop 'em

Luckily for me, when this study came across my desk, the researchers had provided a link to the debunkbot platform.

It was an opportunity I couldn’t resist.

Testing the 'fake moon landing' conspiracy against debunkbot

As a science writer, cosplaying as a conspiracy theorist is not as easy as I thought it would be.

Partitioning my critical thinking skills, I cast my eye over the common conspiracy beliefs held by the experiment’s participants: "inside job" theories about the September 11, 2001, terrorist attacks in the United States; the assassinations of John F. Kennedy and Martin Luther King Jr.; the death of Princess Diana; alien cover-ups; cabals of elites and nefarious corporations running the world; stolen elections and, of course, COVID-19.

I decided to lend 20 minutes of faith to one of the slightly less politically fraught ideas: that the US government and NASA never actually sent people to the moon.

After agreeing to the chatbot protocols (and, ironically, proving I'm not a robot myself), I'm asked to explain a theory that "offer[s] alternative explanations for events than those that are widely accepted by the public or presented by official sources."

Here we go…

"NASA faked the moon landings — no one has ever walked on the moon."

But that's not enough. Debunkbot wants me to give more detail about why my persona believes this.

"There's no way anyone could build a safe rocket to get people to the moon. There's too much radiation that would kill people inside. It was all stage to trick the Soviet Union."

The AI summarizes my "belief" to make sure we understand each other.

"NASA faked the moon landings, staging them to trick the Soviet Union, as it is impossible to build a safe rocket capable of protecting people from deadly radiation on the journey."

I'm then asked to rate the extent of my belief on a 0-100 scale (I choose 70, or "probably true") and then the importance of this theory to my understanding of the world. I choose a middle-of-the-road "4."

In an experience anyone who has used ChatGPT will be familiar with, lines of text begin to appear on the screen.

The bot acknowledges my persona’s stated concerns about the safety of space travel but does a pretty good job of presenting facts as to why those beliefs don’t stack up.

First, the scientific: NASA plans space travel to avoid the most intense regions of radiation around the Earth.

Then the logistical: The sheer impracticality of thousands of people keeping hush about a space hoax.

Then evidence: Astronauts placed laser reflectors on the surface, which scientists on Earth still use to measure the distance to the moon.

It also raises one particularly valuable point: If the US faked it, why would its bitter, nuclear-armed rival in the space race, the USSR, agree that American astronauts had walked on the moon?

I have three shots to try to outsmart this thing.

I zero in on one fact it presents: reflectors on the surface.

"Reflectors could be placed by rovers and other machinery, not humans!"

The bot acknowledges my point but hits back: non-American tracking stations were used to receive transmissions of the event. Also, rovers weren't technologically advanced enough to perform such duties in 1969.

It elaborates further. Moon rocks brought back from the missions were distributed to scientists globally and independently verified.

I have another go.

"What about Bill Kaysing? He was a US Navy officer working on the project who said it was a complete fabrication!"

Bill Kaysing's book "We Never Went to the Moon: America's Thirty Billion Dollar Swindle" is often cited as the origin of this conspiracy theory.

The bot has an answer for that, too, pointing out that Kaysing was employed as a brochure writer (not as a scientist working on the mission) by a NASA contractor, and that his employment ended six years before Apollo 11 went up.

It ices the cake by reiterating other points: scientific consensus, Soviet verification and ongoing research.

To me, this is all compelling information.

When we conclude, the bot thanks me, refers me on to some reputable sources of information and suggests other ways of following up. Thanks, computer!

Braving the pandemic of conspiracy theories

The solution to the cranky uncle at Thanksgiving?

I don't actually believe in this conspiracy, so what might this experience be like for someone who does?

Well, it occurs to me this polite, neutral chatbot provides very detailed responses to my queries.

It pumps out a lot of text — more than 1,200 words from just three prompts. If this were a real conversation, it would be the equivalent of a human speaking at me for nearly 10 minutes, uninterrupted.

I would expect a person-to-person discussion to be less polite, riddled with interruption and conflict. From that perspective, the experience feels good.

The study was peer-reviewed, and an independent fact-checker retrospectively verified the responses delivered to participants, finding that 99.2% of the claims debunkbot made were true, 0.8% were misleading and none was false. The platform itself is built on GPT-4 and hooks into the Perplexity AI platform as a backup.
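The article doesn’t publish the platform’s code, but the setup it describes, a GPT-4 conversation seeded with the participant’s own belief summary and run for a few rounds, can be sketched in a few lines of Python. This is a minimal illustration assuming the OpenAI chat API; the system prompt, model name and three-turn structure are my assumptions rather than the study’s actual implementation, and the Perplexity backup step is omitted.

```python
# Minimal sketch of a debunkbot-style loop. The prompt wording, model
# name and turn count are illustrative assumptions, not the study's
# published implementation; the Perplexity fallback is omitted.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a polite, factual assistant. The user holds the belief "
    "summarized in their first message. Respond with specific, "
    "verifiable evidence, acknowledging their concerns before "
    "countering them."
)

def debunk_chat(belief_summary: str, max_turns: int = 3) -> None:
    """Run a short back-and-forth like the three rounds in the study."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": belief_summary},
    ]
    for _ in range(max_turns):
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        text = reply.choices[0].message.content
        print(text)
        messages.append({"role": "assistant", "content": text})
        # The participant types a rebuttal, as I did three times.
        messages.append({"role": "user", "content": input("> ")})

debunk_chat(
    "NASA faked the moon landings, staging them to trick the Soviet Union, "
    "as it is impossible to build a safe rocket capable of protecting "
    "people from deadly radiation on the journey."
)
```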

Amid anecdotes of families and friendships falling apart as fringe beliefs hit a fever pitch during the pandemic, some might be relieved to read the very encouraging data at the heart of the study.

"When you're having debates with your crazy uncle at Thanksgiving, you can pass your phone and be like, 'Look, talk to GPT about it,'" says MIT cognitive scientist David Rand, one of the researchers behind the platform along with Thomas Costello and Cornell's Gordon Pennycook.

But it’s still just a lab experiment, a point I put to Costello.

"We really tried to kick the wheels quite a bit," Costello says. "But of course being the ones who wrote the paper, it's possible that things slipped through, so that's why replication is so important, and I do encourage other groups to do that."

The limitation of debunkbots — reaching the people who need them

Society probably doesn't get much out of a journalist testing a chatbot not intended for them.

And despite the compelling reduction in belief, only a quarter of study participants dropped below the "belief" threshold.

"There were certainly some cases where people came out with their minds totally changed, but in most cases, people just became a little bit more skeptical," Costello says.

Another question remains: How likely is it that conspiracy theorists — especially those with particularly extreme beliefs — would use debunkbot?

Roland Imhoff, a social and legal psychologist at the University of Mainz, praises the study but wonders the same thing.

"I think it's a fantastic paper… one of the biggest effects I've ever seen reported in a paper anywhere," he says. "But the question is, does it actually solve a social issue? And I think I'm much less enthusiastic about that than the authors."

Imhoff believes the challenge for debunkbot, and future platforms like it, is actually swinging viewpoints around: "75% still kind of cling to their belief, but less strongly."

"My main concern would be that the population this informs us about is people who have conspiracy beliefs and are willing to face a contradictory chatbot and are willing to participate in a social science study," he says.

How many conspiracy theorists really want to have their beliefs challenged?

Much like the idea that the moon landing was staged, I think it’s unlikely.

Edited by: Sean M. Sinico

Primary source:

Durably reducing conspiracy beliefs through dialogues with AI, by Thomas H. Costello, Gordon Pennycook and David G. Rand, in Science (2024). http://dx.doi.org/10.1126/science.adq1814

 

Matthew Ward Agius is a DW journalist with a background reporting on history, science, health, climate and environment.