Will AI make us all the same?

Publicly released:
International
CC-0

How we express ourselves in writing, speech and thought is being influenced by artificial intelligence (AI) chatbots, which could reduce humanity's collective wisdom and ability to adapt, according to US scientists. To help preserve diversity of human thought, AI developers should use more real-world diversity - in language, perspectives and reasoning - to train chatbots, the authors say. As well as helping us retain our distinctively human ways of writing, speaking and thinking, this would also improve the chatbots' reasoning abilities, they say.

News release

From: Cell Press

AI is homogenizing human expression and thought, computer scientists and psychologists say

AI chatbots are standardizing how people speak, write, and think. If this homogenization continues unchecked, it risks reducing humanity’s collective wisdom and ability to adapt, computer scientists and psychologists argue in an opinion paper publishing March 11 in the Cell Press journal Trends in Cognitive Sciences. They say that AI developers should incorporate more real-world diversity into large language model (LLM) training sets, not only to help preserve human cognitive diversity, but also to improve chatbots’ reasoning abilities.

“Individuals differ in how they write, reason, and view the world,” says first author and computer scientist Zhivar Sourati of the University of Southern California. “When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized, producing standardized expressions and thoughts across users.”

Within groups and societies, cognitive diversity bolsters creativity and problem solving, say the researchers. However, cognitive diversity is shrinking worldwide as billions of people are using the same handful of AI chatbots for an increasing number of tasks, they say. When people use chatbots to help them polish their writing, for example, the writing ends up losing its stylistic individuality, and people feel less creative ownership over what they produce.

“The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning,” says Sourati.

The team points to multiple studies showing that LLM outputs are less varied than human-generated writing and that LLM outputs tend to reflect the language, values, and reasoning styles of Western, educated, industrialized, rich, and democratic societies.

“Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience,” says Sourati.

Though studies show that individuals often generate more ideas with more details when they use LLMs, groups of people produce fewer and less creative ideas when they use LLMs than when they simply combine their collective powers, note the researchers.

“Even if people are not the first-hand users of LLMs, LLMs are still going to affect them indirectly,” says Sourati. “If a lot of people around me are thinking and speaking in a certain way, and I do things differently, I would feel a pressure to align with them, because it would seem like a more credible or socially acceptable way of expressing my ideas.”

Beyond language, studies have shown that after interacting with biased LLMs, people’s opinions become more similar to the LLM that they used. LLMs also favor linear modes of reasoning such as “chain-of-thought reasoning,” which requires models to show step-by-step reasoning. This emphasis reduces the use of intuitive or abstract reasoning styles, which are sometimes more efficient than linear reasoning, the researchers say. They also note that LLMs can alter people’s expectations, which can subtly change the direction of a person’s work.

“Rather than actively steering generation, users often defer to model-suggested continuations, selecting options that seem ‘good enough’ instead of crafting their own, which gradually shifts agency from the user to the model,” says Sourati.

The researchers say that AI developers should intentionally incorporate diversity in language, perspectives, and reasoning into their models. They emphasize that this diversity should be grounded in the diversity that exists within humans globally, rather than introducing random variation.

“If LLMs had more diverse ways of approaching ideas and problems, they would better support the collective intelligence and problem-solving capabilities of our societies,” said Sourati. “We need to diversify the AI models themselves while also adjusting how we interact with them, especially given their widespread use across tasks and contexts, to protect the cognitive diversity and ideation potential of future generations.”

Multimedia

Implications of LLM homogenisation
LLM diversity

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research: Cell Press, web page (the URL will go live after the embargo ends)
Journal/conference: Trends in Cognitive Sciences
Research: Paper
Organisation/s: University of Southern California, USA
Funder: This research was supported by the Air Force Office of Scientific Research A9550-23-1-046.