Media release
Labelling AI-generated responses as human increases feelings of empathy
Humans tend to reject emotional support from an artificial intelligence (AI) chatbot unless that support is falsely labelled as coming from a human, according to research published in Nature Human Behaviour.
Generative AI chatbots, specifically those that use large language models (LLMs), have grown in popularity since their widespread public release, and may offer opportunities for social interaction and emotional support. Previous research has shown that LLM-powered tools can identify a person's emotional state and that their responses can be perceived as empathetic. However, it was unclear whether support from such a chatbot would be perceived in the same way as support provided by a human.
Anat Perry and colleagues found that AI-generated support is perceived as less empathetic than support believed to come from a human, unless those AI-generated responses are labelled as being from a human. The authors conducted nine studies in which a total of 6,282 participants saw AI-generated responses and were told that the responses were written either by a human or by an AI chatbot. Although participants rated the responses they received as empathetic, they responded more positively when they believed they were interacting with another human. They were also willing to wait longer for a response they believed was coming from a human than to receive an immediate response from an AI.

Responses that participants believed were from a human also evoked more positive feelings (comfort, validation, happiness and feeling understood) and fewer negative ones (feeling disturbed, angry, distressed or annoyed) than responses labelled as AI-generated. When participants thought that a human had received assistance from an AI in crafting a response, their ratings of empathy, positivity resonance, positive emotions and felt support were lower.
The findings suggest that there may be limits to the support that AI chatbots can provide. In particular, when empathy or emotional support is expected, people may value a response from a human more. However, because the interactions examined were brief, further research is needed into the use and acceptance of AI tools in prolonged emotional-support interactions.