A study using a large language model shows that facts and evidence really can change people's minds.
- Eugene Lee
- May 19
- 3 min read
Updated: Jun 19

Researchers have shown that artificial intelligence (AI) could be a valuable tool in the fight against conspiracy theories, having designed a chatbot that can debunk false information and get people to question their thinking.
In a study published in Science on September 12 [1], participants spent a few minutes exchanging messages with a chatbot that gave detailed, evidence-based answers and arguments, and showed a shift in thinking that lasted for months. The result suggests that facts and evidence really can change people's minds.
"This article has indeed challenged many existing literatures that we live in a post-truth society," says Catherine Fitzgerald, who studies conspiracy theories and misinformation at the Queensland University of Technology in Brisbane, Australia.
Previous studies have suggested that conspiracy theories attract people who crave security and certainty in a turbulent world. But "what we found in this paper contradicts that traditional explanation," says study co-author Thomas Costello, a psychology researcher at American University in Washington DC. "One of the potentially exciting applications of this research is that you could use AI to debunk conspiracy theories in real life."
Bad ideas
Polls show that about 50% of Americans believe in conspiracy theories, ranging from the claim that the 1969 Moon landing was faked to the idea that COVID-19 vaccines contain microchips that enable mass surveillance. The rise of social media platforms that make information easy to share has made the problem worse.
Although many conspiracy theories have little societal impact, those that take hold can "cause real harm," says Fitzgerald. She points to the attack on the U.S. Capitol on January 6, 2021, which was fuelled in part by claims that the 2020 presidential election was rigged, and to anti-vaccine rhetoric that is affecting uptake of COVID-19 vaccines.
It is possible to persuade people to change what they think, but doing so can take a lot of time and effort, and the sheer number and variety of conspiracy theories make the problem hard to tackle at scale. Costello and his colleagues wanted to explore whether large language models (LLMs), which can rapidly process huge amounts of information and generate human-like responses, could help combat conspiracy theories. "They were trained on the Internet, know all the conspiracies and know all the refutations, so it seemed quite natural," says Costello.
Believe it or not
The researchers built a custom chatbot using GPT-4 Turbo, the latest LLM from ChatGPT creator OpenAI, based in San Francisco, California, which was trained to argue convincingly against conspiracy theories. They then recruited more than 1,000 participants whose demographics matched U.S. Census quotas on characteristics such as gender and ethnicity. Costello says that by recruiting "people who had different life experiences and who bring their own perspectives", the team could assess the chatbot's ability to debunk a wide range of conspiracies.
Each participant was asked to describe a conspiracy theory they believed, explain why they thought it was true, and rate the strength of their belief as a percentage. These details were passed to the chatbot, which then struck up a conversation with the participant, pointing to information and evidence that undermined or debunked the conspiracy and answering the participant's questions. The chatbot's replies were thorough and detailed, often running to hundreds of words. On average, each conversation lasted about 8 minutes.
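For readers curious about the mechanics, here is a minimal sketch of what such a setup might look like, written against the OpenAI Python SDK. The prompt wording, the example belief, and the conversation flow are illustrative assumptions, not the study's actual materials.

```python
# Hypothetical sketch: a participant states a conspiracy belief and a 0-100
# confidence rating, and GPT-4 Turbo is asked to rebut it with evidence.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative participant input (not from the study's data)
belief = "The 1969 Moon landing was staged in a film studio."
confidence = 80  # self-reported belief strength, in percent

messages = [
    {
        "role": "system",
        "content": (
            "You are talking with someone who believes the following claim "
            f"with {confidence}% confidence: '{belief}'. "
            "Respond politely and persuasively, using specific facts and "
            "evidence that address their stated reasons for believing it."
        ),
    },
    # The participant's side of the conversation; in the study this went
    # back and forth for about 8 minutes on average.
    {
        "role": "user",
        "content": "Why should I doubt this? The shadows in the photos look wrong.",
    },
]

response = client.chat.completions.create(
    model="gpt-4-turbo",  # the LLM named in the study
    messages=messages,
)
print(response.choices[0].message.content)
```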