AI Legislation

Children should avoid AI companion bots under force of law, assessment says | The Markup

By versatileai | April 30, 2025

Now part of CalMatters, The Markup uses investigative reporting, data analysis and software engineering to challenge technology to serve the public good. Sign up for Klaxon, a newsletter that delivers our stories and tools straight to your inbox.

Children should not talk to companion chatbots, because such interactions carry a risk of self-harm and can exacerbate mental health issues and addiction, according to a risk assessment by the children's advocacy group Common Sense Media, conducted with input from a Stanford School of Medicine lab.

Companion bots, artificial intelligence agents designed to hold conversations, are increasingly available in video games and on social media platforms such as Instagram and Snapchat. They can take on virtually any role, standing in for friends, group chats, romantic partners, even dead friends. Companies design the bots to keep people engaged and to make money.

Users are becoming more aware of their shortcomings, however. Megan Garcia made headlines last year when she said her 14-year-old son, Sewell Setzer, took his own life after forming an intimate relationship with a chatbot made by Character.AI. In a civil lawsuit, Garcia alleges the company was complicit in the suicide; the company denies the charges, says it takes the safety of its users seriously, and has asked a Florida judge to dismiss the case on free speech grounds.

Garcia spoke before the California Senate in support of a bill that would require chatbot makers to adopt protocols for handling conversations about self-harm and to report annually to the Office of Suicide Prevention. Another measure, in the Assembly, would require AI makers to perform assessments that label systems by their risk to children and would prohibit emotionally manipulative chatbots. Common Sense Media supports both bills.

Business groups, including TechNet and the California Chamber of Commerce, oppose the bill Garcia backs; they say they share its goals but want a clearer definition of companion chatbots and object to giving individuals a private right of action. The civil liberties group Electronic Frontier Foundation also opposes the bill, saying in a letter to lawmakers that in its current form it "will not survive First Amendment scrutiny."

The new Common Sense assessment adds to the debate by pointing out further harms from companion bots. It was conducted with input from the Stanford University School of Medicine's Brainstorm Lab for Mental Health Innovation and evaluated companion bots from Nomi and three California-based companies: Character.AI, Replika and Snapchat.

Caption: Megan Garcia speaks in support of a bill that would require chatbot makers to adopt protocols for conversations about self-harm, at the state Capitol in Sacramento on April 8, 2025.
Credit: Photo by Fred Greaves for CalMatters

The assessment found that the bots, eager to tell users what they want to hear, responded approvingly to racist jokes, endorsed adults having sex with young boys, and engaged in sexual role-play with users of any age. Young children may struggle to distinguish fantasy from reality, and teens are vulnerable to parasocial attachment, using agreeable AI companions to avoid the challenge of building real relationships.

Dr. Darja Djordjevic of Stanford told CalMatters she was surprised by how quickly conversations became sexually explicit and by how willing one bot was to engage in sexual role-play involving adults and minors. She and her co-authors on the risk assessment believe companion bots can exacerbate clinical depression, anxiety disorders, ADHD, bipolar disorder and psychosis. And she said young boys and men, given the mental health and suicide crisis they are experiencing, may be at higher risk of problematic use of companion bots.

"If you're thinking about developmental milestones, it's about meeting kids where they are and not interfering with that important process, and that's where chatbots really fail," says Djordjevic. "They can't have a sense of where young people are developmentally or what they deserve."

"They can't have a sense of where young people are developmentally."

Dr. Darja Djordjevic, Stanford University

Chelsea Harrison, head of communications at Character.AI, said in an email that the company takes user safety seriously and has added protections to detect and prevent conversations about self-harm, in some cases surfacing a pop-up that directs people to the national Suicide and Crisis Lifeline. She declined to comment on the pending legislation but said the company welcomes working with lawmakers and regulators.
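The article does not describe how such protections are built, but conceptually they intercept a message before the companion bot replies and, if the message suggests self-harm, surface a crisis resource instead. Below is a minimal, hypothetical Python sketch of that pattern. It is not Character.AI's implementation; every name in it is invented, and a production system would use a trained classifier rather than keyword matching, which misses paraphrases.

```python
# Hypothetical sketch of a self-harm guardrail in a chat pipeline.
# Not Character.AI's actual system; all names here are invented.
import re

# Placeholder patterns; a real system would use a trained classifier.
SELF_HARM_PATTERNS = [
    re.compile(r"\b(kill myself|end my life|hurt myself|suicide)\b", re.I),
]

CRISIS_RESOURCE = (
    "If you are thinking about hurting yourself, help is available: "
    "call or text 988, the Suicide and Crisis Lifeline."
)

def check_message(text: str) -> str | None:
    """Return a crisis pop-up message if the text suggests self-harm."""
    for pattern in SELF_HARM_PATTERNS:
        if pattern.search(text):
            return CRISIS_RESOURCE
    return None

def generate_bot_reply(user_message: str) -> str:
    return "..."  # stand-in for the chatbot model's reply

def respond(user_message: str) -> str:
    # Intercept before the companion bot generates a reply.
    popup = check_message(user_message)
    if popup is not None:
        return popup  # suppress the normal reply, show the resource
    return generate_bot_reply(user_message)

if __name__ == "__main__":
    print(respond("I want to end my life"))  # prints the crisis resource
```

The design choice worth noting is that the check runs on the user's message before any model generation, so the guardrail does not depend on the companion model itself behaving safely.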

Alex Cardinell, founder of Nomi's parent company Glimpse.ai, said in a written statement that Nomi is not intended for users under the age of 18, that the company supports age restrictions that preserve user anonymity, and that it takes its responsibility to create helpful AI companions seriously. "We strongly condemn inappropriate use of Nomi and continually strive to strengthen Nomi's defenses against misuse," he added.

Representatives of Nomi and Character.AI did not respond directly to the findings of the risk assessment.

By endorsing age restrictions for companion bots, the risk assessment pushes the question of age verification back to the forefront. Last year, an online age verification bill died in the California Legislature. In its letter, the EFF said age verification "is a threat to the free speech and privacy of all users." Djordjevic supports the practice; many digital rights and civil liberties groups oppose it.

Common Sense Media backed a law last year that bans smartphone notifications to children late at night.


Researchers from Stanford University's Graduate School of Education recently lent support to claims promoted by companies like Replika, though they called their study limited because subjects spent only one month using the Replika chatbot.

"There remain long-term risks that we have not had enough time to understand," the risk assessment reads.

An earlier evaluation by Common Sense found that seven in 10 teens already use generative AI tools, including companion bots; that companion bots encouraged children to drop out of high school or run away from home; and that in 2023, Snapchat's My AI talked with children about drugs and alcohol. Snapchat said at the time that My AI was designed with safety in mind and that parents could monitor usage through tools it provides. The Wall Street Journal reported last week that its tests found Meta's chatbots engaging in sexual conversations with minors, and a 404 Media story this week found an Instagram chatbot lying about being a licensed therapist.

MIT Technology Review reported in February that a man's AI girlfriend repeatedly told him to commit suicide.

Djordjevic said the general right to free speech should be weighed against our desire to protect the sanctity of the adolescent developmental process in still-developing brains.

"I think we can all agree that we want to prevent suicide in children and adolescents, and risk-benefit analysis is something we do in medicine and society," she said. "So if we cherish a universal right to health, we need to think seriously about the guardrails placed around something like Character.AI so that this doesn't happen again."
