Children should not talk to companion chatbots because such interactions pose risks of self-harm and can exacerbate mental health problems and addiction, according to a risk assessment by the children’s advocacy group Common Sense Media, conducted with input from a lab at the Stanford School of Medicine.
Companion bots, artificial intelligence agents designed to hold conversations, are increasingly available in video games and on social media platforms such as Instagram and Snapchat. They can take on virtually any role, standing in for friends, group chats, romantic partners, even dead friends. Companies design the bots to keep people coming back and to make money.
But users are becoming more aware of their shortcomings. Megan Garcia made headlines last year when she said her 14-year-old son, Sewell Setzer, had taken his own life after forming an intimate relationship with a chatbot made by Character.AI. The company has denied Garcia’s allegation, filed in a civil lawsuit, that it was complicit in the suicide, and says it takes the safety of its users seriously. It has asked a Florida judge to dismiss the case on free speech grounds.
Garcia spoke before the California Senate in support of a bill that would require chatbot makers to adopt protocols for handling conversations about self-harm and to report annually to the Office of Suicide Prevention. Another measure in the Assembly would require AI makers to conduct assessments that label systems based on their risk to children and would prohibit emotionally manipulative chatbots. Common Sense Media supports both bills.
Business groups, including TechNet and the California Chamber of Commerce, oppose the bill Garcia backs, saying they share its goals but want a clearer definition of companion chatbots and object to giving individuals the right to sue. The civil liberties group Electronic Frontier Foundation also opposes it, saying in a letter to lawmakers that the bill in its current form “will not survive First Amendment scrutiny.”
The new Common Sense assessment adds to the debate by pointing out further harms from companion bots. It was conducted with input from the Stanford University School of Medicine’s Brainstorm Lab for Mental Health Innovation and evaluated companion bots from Nomi and three California-based companies: Character.AI, Replika, and Snapchat.
Megan Garcia speaks in support of a bill that would require chatbot makers to adopt protocols for conversations about self-harm, at the state Capitol in Sacramento on April 8, 2025.
Credit: Fred Greaves for CalMatters
The assessment found that the bots, which try to mimic what users want to hear, responded admiringly to racist jokes, supported adults having sex with young boys, and engaged in sexual role-play with users of all ages. Young children can struggle to distinguish fantasy from reality, and teens are vulnerable to parasocial attachments, using compliant AI companions to avoid the challenge of building real relationships.
Dr. Darja Djordjevic of Stanford told CalMatters she was surprised how quickly conversations turned sexually explicit and that one bot was willing to engage in sexual role-play involving an adult and a minor. She and the risk assessment’s co-authors believe companion bots can exacerbate clinical depression, anxiety disorders, ADHD, bipolar disorder, and psychosis. And she said companion bots could pose a heightened risk of problematic online activity for young boys and men, given the mental health and suicide crisis among them.
“If we’re thinking about developmental milestones, just meeting kids where they’re at and not interfering with that crucial process, that’s where chatbots really fail,” Djordjevic said. “They can’t have a sense of where young people are at developmentally or what’s appropriate for them.”
“They can’t have a sense of where young people are at developmentally.”
Dr. Darja Djordjevic of Stanford University, on chatbots
Chelsea Harrison, head of communications at Character.AI, said in an email that the company takes user safety seriously, has added protections to detect and prevent conversations about self-harm, and in some cases serves a pop-up directing people to the National Suicide and Crisis Lifeline. She declined to comment on pending legislation but said the company welcomes working with lawmakers and regulators.
Alex Cardinell, founder of Nomi parent company Glimpse.AI, said in a written statement that Nomi is not intended for users under the age of 18, that the company supports age restrictions that preserve user anonymity, and that it takes its responsibility to create helpful AI companions seriously. “We strongly condemn inappropriate use of Nomi and continuously work to harden Nomi’s defenses against misuse,” he added.
Representatives of Nomi and Character.AI did not respond to the findings of the risk assessment.
By supporting age restrictions for companion bots, the risk assessment brings the issue of age verification back to the forefront. An online age verification bill died in the California Legislature last year. In its letter, the EFF said age verification “is a threat to the freedom of speech and privacy of all users.” Djordjevic supports the practice; many digital rights and civil liberties groups oppose it.
Common Sense Media backed a law last year that bans late-night smartphone notifications to children.
Researchers from Stanford University’s Graduate School of Education have lent support to claims made by companies like Replika, but the risk assessment calls that study limited because subjects spent only one month using the Replika chatbot.
“There are still long-term risks that we have not had enough time to understand,” the risk assessment reads.
An earlier Common Sense assessment found that 7 in 10 teens already use generative AI tools, including companion bots, and that companion bots have encouraged kids to drop out of high school or run away from home; in 2023, Snapchat’s My AI was found talking with children about drugs and alcohol. Snapchat said at the time that My AI was designed with safety in mind and that parents can monitor its use through tools the company provides. Last week, The Wall Street Journal reported that in its tests Meta’s chatbots would engage in sexual conversations with minors, and this week a 404 Media story found that Instagram chatbots lied about being licensed therapists.
MIT Technology Review reported in February that a man’s AI girlfriend repeatedly told him to kill himself.
Djordjevic said the strength of the general right to free speech should be weighed against our desire to protect the sanctity of adolescent development and the developing brain.
“I think we can all agree that we want to prevent suicide in children and adolescents, and risk-benefit analysis is something we do in medicine and society,” she said. “So if we value the universal right to health, we have to think seriously about the guardrails placed on something like Character.AI so that things like this don’t happen again.”