Can an AI trick you into trusting it?

Publicly released: International
Image Caption: An example of agents interacting with humans in the study. Image Credit: Takahiro Tsumura, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

Artificial intelligence (AI) has taken another step towards its eventual Skynet takeover of society, it seems: Japanese researchers have found that we feel more empathy towards AI agents when they appear to disclose personal information. The researchers instructed participants to have a text-based chat with an online AI agent, playing out a scenario between two co-workers. In each conversation, the agent appeared to self-disclose either highly work-relevant personal information, less-relevant information about a hobby, or no personal information at all. The team says that, compared with the less-relevant sharing or no sharing at all, agents that chatted about work elicited more empathy from participants.

News release

From: PLOS

When A.I. discloses personal information, users may empathize more

New study suggests that self-disclosure could be used to boost people’s acceptance of AI technologies

In a new study, participants showed more empathy for an online anthropomorphic artificial intelligence (A.I.) agent when it seemed to disclose personal information about itself while chatting with participants. Takahiro Tsumura of The Graduate University for Advanced Studies, SOKENDAI in Tokyo, Japan, and Seiji Yamada of the National Institute of Informatics, also in Tokyo, present these findings in the open-access journal PLOS ONE on May 10, 2023.

The use of A.I. in daily life is increasing, raising interest in factors that might contribute to the level of trust and acceptance people feel towards A.I. agents. Prior research has suggested that people are more likely to accept artificial objects if the objects elicit empathy. For instance, people may empathize with cleaning robots, robots that mimic pets, and anthropomorphic chat tools that provide assistance on websites.

Earlier research has also highlighted the importance of disclosing personal information in building human relationships. Building on those findings, Tsumura and Yamada hypothesized that self-disclosure by an anthropomorphic A.I. agent might boost people's empathy toward such agents.

To test this idea, the researchers conducted online experiments in which participants had a text-based chat with an online A.I. agent that was visually represented by either a human-like illustration or an illustration of an anthropomorphic robot. The chat involved a scenario in which the participant and agent were colleagues on a lunch break at the agent’s workplace. In each conversation, the agent seemed to self-disclose either highly work-relevant personal information, less-relevant information about a hobby, or no personal information.

The final analysis included data from 918 participants whose empathy for the A.I. agent was evaluated using a standard empathy questionnaire. The researchers found that, compared to less-relevant self-disclosure, highly work-relevant self-disclosure from the A.I. agent was associated with greater empathy from participants. A lack of self-disclosure was associated with suppressed empathy. The agent’s appearance as either a human or anthropomorphic robot did not have a significant association with empathy levels.
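
For readers curious what this kind of between-condition comparison might look like in practice, here is a minimal illustrative sketch in Python comparing questionnaire-based empathy scores across the three disclosure conditions. The scores below are invented, and the use of a one-way ANOVA via scipy is an assumption for illustration only, not the authors' actual analysis pipeline.

    # Illustrative sketch only: hypothetical empathy scores per condition.
    # Neither the numbers nor the choice of ANOVA come from the paper.
    from scipy import stats

    work_relevant = [4.2, 3.9, 4.5, 4.1, 4.3]  # agent disclosed work-relevant info
    hobby_related = [3.6, 3.8, 3.4, 3.7, 3.5]  # agent disclosed hobby info
    no_disclosure = [3.1, 3.3, 2.9, 3.2, 3.0]  # agent disclosed nothing

    # One-way ANOVA: does mean empathy differ across the three conditions?
    f_stat, p_value = stats.f_oneway(work_relevant, hobby_related, no_disclosure)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")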

These findings suggest that self-disclosure by A.I. agents may, indeed, elicit empathy from humans, which could help inform future development of A.I. tools.

The authors add: “This study investigates whether self-disclosure by anthropomorphic agents affects human empathy. Our research will change the negative image of artifacts used in society and contribute to future social relationships between humans and anthropomorphic agents.”

Journal/conference: PLOS ONE
Research: Paper
Organisation/s: The Graduate University for Advanced Studies, SOKENDAI, Tokyo, Japan
Funder: This work was partially supported by JST, CREST (JPMJCR21D4), Japan. This work was also supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2136. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.