How do we reduce the influence of AI misinformation?
Publicly released: 2025-06-25 09:01
AI-generated misinformation is likely to influence people regardless of whether they know it is AI-generated or have been reminded not to trust the source, according to Australian research. The team tested a series of strategies for reducing the influence of a biased AI-generated article, including giving study participants content beforehand designed to lower their trust in AI, showing them a simple disclaimer similar to those seen on real-world AI platforms, and debunking the specific article after participants had read it. The researchers report that the biased article influenced participants' reasoning whether or not they were told it was AI-generated, and that the disclaimer had no impact. While the pre-emptive content reduced general trust in AI-generated information, it did not appear to reduce the influence of the specific article in the study. Debunking the article afterwards, however, did help reduce its influence.
Journal/conference: Royal Society Open Science
Research: Paper
Organisation/s: The University of Western Australia, The University of Adelaide
Funder: This research was supported by Australian Research Council grant DP240101230 to U.K.H.E., S.L. and
B.S.T.; S.L. acknowledges financial support from the European Research Council (ERC) under the European Union’s
Horizon 2020 research and innovation programme (Advanced Grant agreement No. 101020961 PRODEMINFO),
and the Humboldt Foundation through a research award.
Media release
From: The Royal Society
Countering AI-Generated Misinformation With Pre-Emptive Source Discreditation and Debunking
Despite concerns over AI-generated misinformation, little research has examined its impact on reasoning and the effectiveness of countermeasures. Across two experiments, a misleading AI-generated article influenced reasoning regardless of its alleged source (human or AI). A source-discreditation that highlighted issues with AI reliability reduced general trust in AI-generated information but did not reduce the article’s impact on reasoning. A simple disclaimer also had no impact. A content-focused correction effectively reduced misinformation influence, but only a combination of content- and source-focused interventions eliminated it entirely. Findings demonstrate that AI-generated misinformation can be persuasive, potentially requiring multiple countermeasures to negate its effects.
Attachments:
Note: Not all attachments are visible to the general public
The Royal Society
Web page