When leaders frame aggression in conflicts as defence, ChatGPT and the public are easily fooled

Publicly released:
International
CC-0. https://pixabay.com/photos/soldier-war-guns-special-forces-1447008/

European scientists say that when political leaders frame unprovoked attacks as acts of defence, their followers fight harder, and both people and artificial intelligence (AI) chatbots easily fall for the false rhetoric. The team examined 261 manifestos from real-world conflicts, finding that leaders frequently misrepresented their strategic intentions, often reframing attacks as defence. They then asked ChatGPT (2,162 times) and two groups of people, one of 252 and the other of 312, to respond to the manifestos, finding that the false narratives were readily believed and that support for the leaders increased. This could lead to deceptive leaders winning public support for increasingly intense and wasteful conflicts, the authors say. The findings help explain why deception, propaganda, and 'fake news' are commonly used political tools in conflicts, and show how leaders portraying themselves as victims can escalate conflict, the authors conclude.

Attachments


Research: Cell Press, web page (the URL will go live after the embargo ends)
Journal/conference: iScience
Research: Paper
Organisation/s: University of Groningen, the Netherlands; Leibniz Institute for Primate Research, Germany
Funder: This project has received funding from the Netherlands Science Foundation (NWO SPI-57-242) to C.K.W.D.D., and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program to C.K.W.D.D. (AdG agreement no. 785635) and to J.G. (StG agreement, SBFI no. MB23.0003).