Media release
ChatGPT-4o therapeutic chatbot ‘Amanda’ as effective as journaling for relationship support
One of the first randomized controlled trials assessing the effectiveness of a large language model (LLM) chatbot, ‘Amanda’, for relationship support shows that a single session of chatbot therapy can be as beneficial as an evidence-based journaling task in assisting with relationship conflict resolution, according to a study published September 24, 2025 in the open-access journal PLOS Mental Health by Dr Laura Vowels from the University of Lausanne, Switzerland, and the University of Roehampton, United Kingdom, and colleagues.
Recent research suggests LLMs may have potential to act as an alternative or supplement to traditional talking therapy. In this study, Vowels and colleagues conducted a randomized controlled trial with participants in relationships who had identified an area of (non-abusive) relationship conflict, to compare the effectiveness of a single session with “Amanda,” a ChatGPT-4o chatbot prompted to act as an empathetic relationship therapist, versus a brief journaling task.
The authors recruited 258 participants initially, all of whom were 18 years or older and currently in a romantic relationship experiencing non-abusive conflict they hoped to address. (Any participants who mentioned thoughts of self-harm, described abusive circumstances, or failed to identify a specific relationship conflict were excluded from the study.) In total, 130 participants engaged with the chatbot Amanda to discuss their conflict through at least 20 back-and-forth conversational interactions, whilst 128 participants were given an evidence-based writing task in which they reappraised their conflict from the perspective of a neutral third party who wants the best for all involved. Participants were asked to assess their relationship issue, their relationship more generally, and their own well-being directly before the chatbot/writing task intervention; immediately afterward; and two weeks later (122 of the 130 participants assigned to Amanda and 118 of the 128 writing task participants took part in this follow-up session; those who failed to return for the follow-up were excluded from the final analysis).
Both chatbot and journaling task participants rated their specific relationship problem, their overall relationship, and their own well-being as improved both immediately following their intervention and two weeks later, with no significant differences between the two groups.
The authors note that the single-session format used here means they could not assess the ability of their chatbot to build a therapeutic alliance over time (which may be a key advantage of LLMs over self-guided interventions like the journaling task). They also note that by excluding the participants who dropped out of the follow-up session, they may be overestimating the efficacy of both interventions, as those who dropped out may have done so because they found the intervention unhelpful.
They hope future research will investigate the effectiveness of therapeutic LLM chatbots over multiple sessions, as well as in clinical populations with appropriate risk detection capabilities.
Dr Vowels adds: “Our study shows that a single session with Amanda, a GPT-4o-based chatbot, can meaningfully improve relationship satisfaction, communication, and individual well-being. This suggests that large language model chatbots have real potential to deliver accessible, evidence-based relationship support at scale.”
She further notes: “It was interesting to see how participants rated the chatbot highly on empathy, usability, and therapeutic alliance—qualities we normally associate with human therapists. This indicates that people are not only willing to engage with AI in sensitive contexts but can also benefit from it.”