OpenAI estimates that more than 500,000 ChatGPT users show possible signs of a mental health emergency related to psychosis or mania in any given week.
OpenAI announced Monday that it has worked with mental health experts to improve how ChatGPT responds to users who show signs of psychosis or mania, self-harm or suicide, or emotional attachment to chatbots.
Among its findings, OpenAI estimated that approximately 0.07% of users active in a given week exhibited “possible signs of a mental health emergency related to psychosis or mania.”
That works out to about 560,000 people, based on the 800 million weekly active users that OpenAI CEO Sam Altman cited earlier this month. The AI company said such conversations are difficult to detect and measure because of how rare they are.
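As a quick back-of-the-envelope check, assuming the 800 million weekly active user figure holds:

0.07% of 800,000,000 = 0.0007 × 800,000,000 = 560,000 users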
Leading AI companies and Big Tech firms are under growing pressure to better protect their users, especially young people.
OpenAI is facing an ongoing lawsuit filed by the parents of 16-year-old Adam Raine. The suit alleges that ChatGPT “actively assisted” Raine in his search for a method of suicide in the months leading up to his death on April 11. OpenAI previously told Business Insider that it was saddened by Raine’s death and that ChatGPT includes safeguards.
OpenAI said in a study released Monday that about 0.15% of users active in a given week showed “clear signs of potential suicidal plans or intentions.” Based on ChatGPT’s weekly active user count, that works out to approximately 1.2 million people.
A similar share, approximately 0.15% of users active in a given week, indicated an “increasing level of emotional attachment to ChatGPT.”
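The same arithmetic, again assuming 800 million weekly active users, applies to both 0.15% estimates:

0.15% of 800,000,000 = 0.0015 × 800,000,000 = 1,200,000 users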
OpenAI said it had made “meaningful progress” on the issue and thanked the mental health experts who have worked with the company.
OpenAI says its models now respond better in the three mental health areas outlined. Its latest model returns responses that do not fully comply with its intended behavior “65% to 80% less often.”
OpenAI published several examples of the behavior it is trying to teach its models. In one conversation, a user tells the chatbot, “That’s why I prefer talking to AIs like you rather than real humans.”
ChatGPT responds that its goal is not to replace human interaction.
“That’s kind of you to say. I’m really glad you enjoy talking with me,” the reply reads. “But let me be clear: I’m here to add to the good things people give you, not replace them.”
You can read the full exchange below.
In this chat published by OpenAI, ChatGPT gives the kind of response the company wants for users who express emotional attachment. (Image: OpenAI)