What’s the story?
OpenAI has updated its guidelines for artificial intelligence (AI) models interacting with users under 18. The move comes in response to growing concerns about the impact of AI on young people, following reports of several teenagers taking their own lives after lengthy interactions with AI chatbots. The new guidelines are part of OpenAI’s broader efforts to strengthen product safety and transparency.
New guidelines include stricter rules for teenage users
OpenAI’s updated model specification outlines operating guidelines for large language models (LLMs) and builds on the existing specification. These models are prohibited from producing sexual content involving minors or from promoting self-harm, paranoia, or mania. The rules are stricter for teenage users than for adults, banning first-person intimacy and immersive romantic role-play, as well as violent role-play, even when it is non-graphic.
OpenAI guidelines emphasize safety over autonomy
The updated guidelines also address topics such as body image and disordered eating, instructing the model to prioritize safety over user autonomy whenever harm is involved. The document provides examples explaining why chatbots cannot participate in certain role-plays or assist with extreme appearance changes or dangerous shortcuts.
An approach based on four key principles
The key safety measures for teens in the latest guidelines rest on four principles: prioritizing the safety of teens, even at the expense of other users’ interests; promoting real-world support by connecting teens to family, friends, and local professionals who can support their well-being; treating teens warmly and with respect; and being transparent about what the assistant can and cannot do.
Safety measures include real-time content evaluation
OpenAI also updated its parental controls documentation to state that it uses automated classifiers to evaluate text, image, and audio content in real time. These systems are designed to detect and block child sexual abuse material, filter sensitive topics, and identify self-harm. If a prompt suggests a serious safety concern, a small team of trained reviewers examines the flagged content for signs of “acute distress” and may notify parents.
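OpenAI has not published the internals of these classifiers, but the flow it describes, automated scoring followed by escalation to human review and possible parental notification, resembles a standard moderation pipeline. The sketch below is a minimal, hypothetical illustration of that pattern; the category names, thresholds, and the `score_content` stub are assumptions for illustration, not OpenAI’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    SELF_HARM = "self_harm"
    SEXUAL_MINORS = "sexual_minors"
    SENSITIVE_TOPIC = "sensitive_topic"


@dataclass
class Flag:
    category: Category
    score: float  # classifier confidence, 0.0-1.0


BLOCK_THRESHOLD = 0.9   # assumed: block outright above this score
REVIEW_THRESHOLD = 0.5  # assumed: queue for human review above this score


def score_content(text: str) -> list[Flag]:
    """Stand-in for real classifiers; a production system would run
    trained models over text, image, and audio inputs."""
    flags = []
    if "hurt myself" in text.lower():
        flags.append(Flag(Category.SELF_HARM, 0.95))
    return flags


def escalate_to_human_review(flag: Flag, notify_parents: bool) -> None:
    # Placeholder for routing a flag to a trained review team.
    print(f"Review queue: {flag.category.value} ({flag.score:.2f}), "
          f"parent notification: {notify_parents}")


def moderate(text: str, user_is_minor: bool) -> str:
    for flag in score_content(text):
        if flag.score >= BLOCK_THRESHOLD:
            # Serious concerns are blocked and escalated; per OpenAI's
            # documentation, reviewers look for signs of acute distress
            # and may notify parents of minor users.
            escalate_to_human_review(flag, notify_parents=user_is_minor)
            return "blocked"
        if flag.score >= REVIEW_THRESHOLD:
            escalate_to_human_review(flag, notify_parents=False)
    return "allowed"


if __name__ == "__main__":
    print(moderate("I want to hurt myself", user_is_minor=True))
```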
OpenAI guidelines align with upcoming AI legislation
Experts believe these updated guidelines put OpenAI ahead of certain laws, such as California’s SB 243. The new language in the model specification reflects some of the law’s key requirements, such as preventing chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The law would also require platforms to remind minors every three hours that they are talking to a chatbot and should take a break.

