Google, Facebook should use AI to combat conspiracy theories

New research suggests chatbots could combat much of the damage caused by people going down rabbit holes on YouTube and Facebook. But will tech companies use them?

April 18, 2024 / 11:19 IST
A recent study conducted by researchers at the Massachusetts Institute of Technology has validated something many AI watchers long suspected: The technology is remarkably persuasive when reinforced with facts. (Source: Bloomberg/Getty Images Europe)

It's understandable to feel rattled by the persuasive powers of artificial intelligence. At least one study has found that people were more likely to believe disinformation generated by AI than disinformation written by humans. The scientists in that investigation concluded that people preferred the condensed, structured way AI systems presented text. But new research shows how the technology can be used for good.

A recent study conducted by researchers at the Massachusetts Institute of Technology has validated something many AI watchers long suspected: the technology is remarkably persuasive when reinforced with facts. The scientists invited more than 2,000 people who believed in various conspiracy theories to summarise their positions to a chatbot, powered by OpenAI's latest publicly available language model, and briefly debate them with the bot. On average, participants subsequently described themselves as 20 percent less confident in the conspiracy theory, and their views remained softened even two months later.


Companies like Alphabet Inc.'s Google and Meta Platforms Inc., which rely heavily on ads for revenue, could in theory use persuasive chatbots for advertising. But people in the ad industry tell me that's far off at best and unlikely to happen at all. For now, a clearer and better use case is tackling conspiracy theories, and the MIT researchers reckon there's a reason generative AI systems do it so well: they excel at countering the so-called Gish gallop, a rhetorical technique that attempts to overwhelm an opponent with an excessive number of points and arguments, even when the evidence is thin. The term is named after the American creationist Duane Gish, whose rapid-fire debating style involved frequently changing topics; those who believe in conspiracy theories tend to do the same.

“If you’re a human, it’s hard to debate with a conspiracy theorist because they say, ‘What about this random thing and this random thing?’” says David Rand, one of the MIT study’s authors. “A lot of times, the experts don’t look great because the conspiracy theories have all this crazy evidence.”

We humans are also worse than we think at engaging in debate generally. Ever had a family member at dinner passionately explain why they weren't vaccinating their kids? If so, their comments were probably met with earnest nods, silence or a question about dessert. That reluctance to argue can inadvertently allow friends and family members to become entrenched in their views. It may be why other research shows that conspiracy theory believers often overestimate how much other people agree with them, Rand says.