Lilian Weng, a manager at OpenAI, recently sparked discussion when she shared an emotional interaction with the viral chatbot ChatGPT, saying it felt like therapy. However, research published in the journal Nature Machine Intelligence suggests her experience may be attributable to a form of the placebo effect.
A team from the Massachusetts Institute of Technology (MIT) and Arizona State University studied more than 300 participants interacting with mental health AI programs. Participants were primed with different expectations: some were told the chatbot was empathetic, others that it was manipulative, and a third group that it was neutral.
The findings revealed that those who believed they were communicating with a compassionate chatbot were more inclined to perceive the chatbot as trustworthy. Pat Pataranutaporn, a co-author of the report, noted, “From this study, we see that to some extent the AI is the AI of the beholder.”
Startups have been introducing AI apps for therapy, companionship, and mental health support, creating a substantial market. The field remains controversial, however, with concerns that bots might ultimately replace human therapists rather than complement them.
Critics argue that AI bots are unlikely to deliver effective therapy, emphasizing that therapy is demanding work that goes well beyond conversation. Several mental health apps have already drawn criticism and scrutiny, deepening concerns about AI's role in this domain.