News release
Biased AI writing assistants influence users’ opinions – even when users know the AI is biased
When people used a biased AI assistant to write about a social issue, their attitudes shifted towards the AI's position, according to new experiments involving 2,582 people. This happened even when people received proactive or retroactive warnings about the AI assistants' bias.

People are increasingly using AI tools such as autocomplete and "smart replies" when writing. Yet the large language models (LLMs) that generate these suggestions encode a number of well-documented biases. Now, Sterling Williams-Ceci and colleagues have investigated how using biased AI writing assistants can affect users' opinions. They conducted two experiments with a total of 2,582 participants, each assigned to write about one of five sociopolitical topics. Afterward, participants completed post-task surveys on how they came to view the issues they wrote about.

The group that used the AI assistants ended up moving their attitudes closer to those of the biased assistants, yet these participants did not notice that they had been influenced. Williams-Ceci et al. also found that neither prior warnings nor retroactive debriefings about the AI assistants' partiality – two mitigation interventions – changed participants' attitudes at the end of the experiment.

"Our study shows the alarming risk associated with such use of AI, and suggests that further work is necessary to investigate the potential and impact of this influence vector, and to design interventions that can successfully mitigate the influence of biased AI suggestions on users' attitudes," the authors write.
Expert Reaction
These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.
Dr Andrew Lensen is a Senior Lecturer in Artificial Intelligence at the School of Engineering and Computer Science, Victoria University of Wellington
"As large language models (LLMs) become normal technology, new concerns have emerged about their effects on society. One question that we have been asking is how the use of LLMs in writing changes people's writing styles and worldview over time. This new study suggests that the use of AI for auto-completion (e.g. when writing a document or email) can actually change our attitudes on societal issues. For example, in one experiment the authors showed that using an LLM designed to be biased against the death penalty made participants become increasingly opposed to the death penalty themselves over time.
"While this study performed a controlled experiment for research purposes, it also uncovers a larger issue that goes beyond just auto-completion. The Big Tech companies who own the LLMs that many of us use (e.g. ChatGPT, Copilot, Gemini, Claude) are in a position of power to influence the attitudes of their users. For example, if a conservative company (such as xAI, which makes Grok) wanted to promote acceptance of the death penalty, it could choose to bias its model so that users' attitudes became more positive towards the death penalty over time. This can be done in quite subtle ways that may be hard to detect.
"This is a good reminder of the influence that AI and other technology can have on our society — especially when the technology is controlled by a few, very rich American companies. It is especially relevant in an election year, where these tools could be used to try to sway voters in what is forecast to be a very tight election in New Zealand. How do we fix this? We could begin by regulating this technology, perhaps by requiring independent testing and auditing of LLMs in New Zealand. Another promising area is 'sovereign AI', where New Zealand has its own LLM that we train ourselves and retain control over. This would help to prevent potentially unwanted Americanisation of our culture."