Is ChatGPT a better person than you?

Publicly released:
International
CC-0. https://www.rawpixel.com/image/6033267/robot-toy-free-public-domain-cc0-photo

US scientists say ChatGPT-4 has aced their Turing test, proving itself indistinguishable from a real human even when statistical methods were used to try to detect it. In fact, ChatGPT-4 displayed more humanity than some of the humans it was tested against: it was more cooperative, altruistic, trusting, generous, and likely to return a favour than the average human in the trial. The team asked ChatGPT to answer psychological survey questions and play interactive games that assess trust, fairness, risk aversion, altruism, and cooperation. They then compared ChatGPT's choices to those of 108,314 humans from more than 50 countries. Statistically, ChatGPT was indistinguishable from randomly selected humans, and it mirrored human tendencies, such as becoming more generous when told someone else was watching. The team says this suggests artificial intelligence (AI) could be employed in negotiation, dispute resolution, customer service, and caregiving.

Media release

From: PNAS

Comparing humans and AI in psychological tests

A study explores behavioral similarity between humans and AI. As some roles for AI involve decision-making and strategic interactions with humans, it is imperative to understand AI behavioral tendencies. Qiaozhu Mei, Matthew Jackson, and colleagues evaluated the personality and behavior of a series of AI chatbots. The authors asked variations of ChatGPT to answer psychological survey questions and play interactive games that assess trust, fairness, risk aversion, altruism, and cooperation. Next, the authors compared the chatbots' choices to the choices of 108,314 humans from more than 50 countries. ChatGPT-4 passed a Turing test, displaying behavioral and personality traits that could not be statistically distinguished from randomly selected human responses. For example, both humans and chatbots became more generous when told that their choices would be observed by a third party, and both modified their behavior after experiencing different roles in a game or in response to different framings of the same strategic situation. However, the chatbots' behavior tended to be more cooperative and altruistic than the median human behavior, exhibiting increased trust, generosity, and reciprocity. According to the authors, these tendencies may make AI well-suited for roles requiring negotiation, dispute resolution, customer service, and caregiving.

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research: PNAS, Web page
Journal/conference: PNAS
Research: Paper
Organisation/s: University of Michigan, USA; Stanford University, USA
Funder: No information provided.
Media Contact/s
Contact details are only visible to registered journalists.