OpenAI is introducing several new safety standards for teenage users following a wrongful death lawsuit accusing its chatbot of contributing to a teenager's suicide.
The updates include parental controls, the rerouting of sensitive conversations to advanced reasoning models, and reminders prompting users to take breaks during long sessions.
The lawsuit was filed by the family of Adam Raine, a teenager who used ChatGPT to discuss and refine his suicide plans.
OpenAI responded to the rising concern in a blog post last week, acknowledging that its guardrails can "break down" and pledging to update its safety standards accordingly.
"As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the company said. "We are continually improving how our models respond in sensitive interactions."
One such change is to automatically reroute sensitive chats to reasoning models such as GPT-5 Thinking, which OpenAI said uses context and reasoning to provide more targeted responses.
"We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context," OpenAI wrote in a blog post on Tuesday. "We'll begin routing some sensitive conversations, such as when our system detects signs of acute distress, to a reasoning model like GPT-5 Thinking so it can provide more helpful and beneficial responses, regardless of which model the person first selected."
Tighter parental controls will also roll out next month, letting parents link their accounts to their child's account, control how ChatGPT responds with "age-appropriate model behaviors," and disable certain features such as memory and chat history.
Parents will also receive notifications if the chatbot detects that their teen is in a moment of "acute distress," and users will receive regular reminders to take breaks from the chatbot.
Looking ahead, OpenAI said it will continue to review the system over the next four months. The review will be led by the company's expert council on well-being and AI and its global physician network.
"Their input helps us define and measure well-being, set priorities, and design future safeguards, including future iterations of parental controls, while staying informed by the latest research," OpenAI said. "The council advises us on our product, research, and policy decisions, but OpenAI remains accountable for the choices we make."