We're more likely to blame AI for moral misdeeds if they seem human

Publicly released: International
Image by BrownMantis from Pixabay

People are more likely to blame AI for real-life moral transgressions when they see the AI as having a more human-like mind, according to a researcher in South Korea. The researcher presented participants with various real-world scenarios of moral transgressions involving AI – such as racist auto-tagging of photos – and asked how much they blamed the AI, its programmer, the company behind it, or the government. In some scenarios, the AI was made to seem more human-like by giving it a name, age, height, and hobby. Participants tended to assign more blame to the AI when they perceived it as having a more human-like mind, and when asked to distribute blame, they tended to assign less blame to the company. The author says the findings suggest AI mind perception is a critical factor in who we decide to blame, and adds that the consequences could be harmful if AIs are misused as scapegoats when things go wrong.

Media release

From: PLOS

Peer-reviewed | Experimental study | People

Human-like artificial intelligence may face greater blame for moral violations

Having human mind-like qualities may make AI more likely to become a scapegoat

In a new study, participants tended to assign greater blame to artificial intelligences (AIs) involved in real-world moral transgressions when they perceived the AIs as having more human-like minds. Minjoo Joo of Sookmyung Women’s University in Seoul, Korea, presents these findings in the open-access journal PLOS ONE on December 18, 2024.

Prior research has revealed a tendency of people to blame AI for various moral transgressions, such as in cases of an autonomous vehicle hitting a pedestrian or decisions that caused medical or military harm. Additional research suggests that people tend to assign more blame to AIs perceived to be capable of awareness, thinking, and planning. People may be more likely to attribute such capacities to AIs they perceive as having human-like minds that can experience conscious feelings.

On the basis of that earlier research, Joo hypothesized that AIs perceived as having human-like minds may receive a greater share of blame for a given moral transgression.

To test this idea, Joo conducted several experiments in which participants were presented with various real-world instances of moral transgressions involving AIs—such as racist auto-tagging of photos—and were asked questions to evaluate their mind perception of the AI involved, as well as the extent to which they assigned blame to the AI, its programmer, the company behind it, or the government. In some cases, AI mind perception was manipulated by describing a name, age, height, and hobby for the AI.

Across the experiments, participants tended to assign considerably more blame to an AI when they perceived it as having a more human-like mind. In these cases, when participants were asked to distribute relative blame, they tended to assign less blame to the involved company. But when asked to rate the level of blame independently for each agent, there was no reduction in blame assigned to the company.

These findings suggest that AI mind perception is a critical factor contributing to blame attribution for transgressions involving AI. Additionally, Joo raises concerns about the potentially harmful consequences of misusing AIs as scapegoats and calls for further research on AI blame attribution.

The author adds: “Can AIs be held accountable for moral transgressions? This research shows that perceiving AI as human-like increases blame toward AI while reducing blame on human stakeholders, raising concerns about using AI as a moral scapegoat.”

Attachments


Research: PLOS, web page (the URL will go live after the embargo lifts)
Journal/conference: PLOS ONE
Research: Paper
Organisation/s: Sookmyung Women’s University, South Korea
Funder: This research was supported by 2024 Sookmyung Women's University HUSS Research Grants. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.