
Can generative AI chatbots make a meaningful contribution to mental health care? A study published in npj Mental Health Research suggests they may. Researchers interviewed individuals who had used chatbots such as ChatGPT for mental health support and found that many participants experienced a sense of emotional sanctuary, received insightful guidance, and even took pleasure in the connection itself.
Generative AI chatbots are advanced conversational agents built on large-scale language models such as OpenAI’s ChatGPT and Google’s Gemini. Unlike rule-based chatbots that rely on pre-programmed scripts or decision trees, generative AI chatbots are trained on vast datasets to understand and generate human-like text. This enables them to engage in nuanced, flexible conversations, answer complex questions, and tailor their responses to context.
In the context of mental health, generative AI chatbots represent a new approach to providing support. They are available around the clock and can engage in dynamic, empathetic dialogue without judgment. These characteristics make them attractive to individuals who face barriers to traditional treatment, such as cost, stigma, or geographic limitations. Yet despite their increasing use, little is known about how people experience these tools in real-world mental health contexts.
“I have long believed that technology has great potential to address the global mental health crisis (approximately 1 billion people worldwide suffer from mental disorders, the overwhelming majority of whom do not receive appropriate treatment), yet despite a decade of development, the effectiveness of mental health apps remains low,” said Steven Siddals, who conducted the study in collaboration with researchers at Harvard Medical School.
“Like many people, I was blown away by ChatGPT in late 2022, and in 2023 I started hearing more and more about mental health use cases. It didn’t take much experimentation to see that this was a brand-new capability with real potential, and that it would take serious research to understand what it means.”
The research team recruited 19 participants from a variety of backgrounds, ranging in age from 17 to 60; the group included men and women from eight countries. Participants were required to have had at least three meaningful conversations with a generative AI chatbot about mental health topics, each lasting at least 20 minutes. Recruitment took place through online platforms such as Reddit and LinkedIn, and participation was voluntary and uncompensated.
The researchers conducted semi-structured interviews so that participants could share their experiences in their own words. Questions covered topics such as the initial motivation for using chatbots, their impact on mental health, and how they compared with other forms of support. Conversations were recorded, transcribed, and analyzed using a thematic analysis approach, which involves coding participant responses and grouping them into broader themes.
Siddals was surprised by “the depth of the impact it had on people. Participants told us their interactions with AI for mental health support were life-changing, for example in how the AI supported them through their darkest times and helped them heal from trauma.”
The researchers identified four major themes that captured the participants’ experiences.
An emotional sanctuary
Many participants described generative AI chatbots as a safe, nonjudgmental space where they could express their feelings without fear of rejection. The chatbots were perceived as patient and empathetic, helping users process complex emotions and cope with difficult life events. One participant said it felt safer to open up to the chatbot than to friends or even a therapist.
But the chatbots’ safety protocols sometimes disrupted conversations, leaving some users feeling rejected in vulnerable moments and frustrated. For example, some participants reported that when they raised sensitive or intense emotions, the chatbot would abruptly fall back on scripted responses or urge them to seek human help.
“Ironically, the only distressing experience participants reported was when they felt rejected in a vulnerable moment because the AI chatbot’s safety guardrails were activated,” Siddals said.
Insightful guidance
Participants appreciated the chatbots’ ability to provide practical advice and fresh perspectives, particularly on relationships. One user, for example, credited a chatbot with helping him set healthy boundaries in a toxic friendship. Others found the chatbots effective at reframing negative thoughts and suggesting strategies for managing anxiety.
However, confidence in this guidance varied. While some participants found the advice empowering, even life-changing, others were skeptical, especially when the chatbot’s responses seemed generic or inconsistent.
The joy of connecting
Beyond emotional support, many participants found real enjoyment in their interactions with the chatbots. For many users, the conversations brought a sense of companionship, and even happiness, especially in times of loneliness. The conversational style of the generative AI made the interactions feel engaging and human, and some participants described a sense of awe at the technology.
Additionally, many participants said that using chatbots gave them the confidence to open up to others and strengthened their real-life relationships.
“[It’s] made me feel less reluctant to open up to people… I don’t think I would have been able to have this conversation with you probably a year ago when I was battling depression,” one participant explained.
AI therapist?
Comparisons between generative AI chatbots and human therapists were common. Some participants considered chatbots a valuable supplement to therapy, using them to prepare for sessions and process thoughts between appointments. Others turned to chatbots because they could not access or afford treatment.
However, participants also pointed to limitations, such as the chatbots’ inability to lead the therapeutic process or offer a deep emotional connection. A lack of memory and continuity across conversations was another commonly cited shortcoming.
“They forget everything,” a participant explained. “It’s sad… It hurts so much when someone forgets something important.”
Siddals also highlighted the “creativity and versatility of how people use” AI chatbots. For example, one participant used a chatbot to create fictional characters with contrasting perspectives for support during a breakup, while another, working through unresolved guilt and seeking closure, staged an imaginary healing conversation with an estranged parent.
“If you are struggling mentally, you may be able to find free, meaningful emotional support in a judgment-free space, day or night, from ChatGPT and other generative AI chatbots,” Siddals told PsyPost. “Our study participants used them as an ‘emotional sanctuary’ to process emotions and heal from trauma, as a source of insightful guidance (especially about relationships), and found a joy of connection that, compared with human therapy, felt no less real. This is an emerging technology that is not yet well understood, so if you choose to use it, please do so with caution and take responsibility for your own safety.”
Although this study provides valuable insights, it also has limitations. Due to the small sample size and reliance on self-selected participants, the results may not be representative of the broader population. Most participants were from technology-savvy, high-income countries, potentially excluding the perspectives of those who face the greatest barriers to mental health care. Additionally, the qualitative nature of the study does not provide quantitative measures of efficacy or safety.
“If you want to try these tools, it’s important to know that no one really understands how generative AI works, not even the companies that built it,” Siddals said. “Although no one in our study reported any serious negative experiences, AI chatbots are known to sometimes make things up (‘hallucinate’), and there have been reports of inappropriate responses when they are used for mental health support or companionship.”
Future research should investigate the long-term effects of generative AI chatbots on mental health outcomes, especially through large-scale controlled studies. It is also important to investigate how these tools work in diverse populations and mental health conditions.
“We hope this research will put generative AI for mental health on the agenda as one of the most promising developments in the field,” Siddals said. “We urgently need more research to understand safety and efficacy, including large-scale longitudinal studies to assess effects across different conditions and populations; further innovation to develop better safety paradigms and better ways to connect people with mental health support at scale; and further experimentation by clinicians to see how these tools can complement treatment for their patients.”
“This is a fast-moving field, with the technology constantly evolving and being rapidly adopted, so further research into real-world usage is urgently needed to understand this new capability and to find ways to deploy it safely and effectively.”
The study, “‘It happened to be the perfect thing’: experiences of generative AI chatbots for mental health,” was authored by Steven Siddals, John Torous, and Astrid Coxon.