(TNS) – On Tuesday, California lawmakers moved one step closer to placing more guardrails around artificial intelligence-powered chatbots.
The Senate passed a bill aimed at making companion chatbots safer after parents raised concerns that virtual characters had harmed their children’s mental health.
The legislation, now heading to the State Assembly, shows how state legislators are addressing safety concerns surrounding AI as tech companies release more AI-powered tools.
“We’re watching California take the lead again,” said Sen. Steve Padilla, one of the lawmakers who introduced the bill on the Senate floor.
At the same time, lawmakers are trying to balance concerns that they could hamper innovation. A floor analysis of the bill shows that groups opposing it, such as the Electronic Frontier Foundation, say the legislation is too broad and raises free speech issues.
Under Senate Bill 243, operators of companion chatbot platforms would have to remind users at least every three hours that virtual characters are not human. They would also have to disclose that companion chatbots may not be suitable for some minors.
Platforms would also have to take other steps, such as implementing protocols for addressing suicidal ideation, suicide, or self-harm expressed by users. That includes showing users suicide prevention resources.
Operators of these platforms would also have to report the number of times a companion chatbot raised suicidal ideation or actions with a user, along with meeting other requirements.
Sen. Akilah Weber Pierson, one of the bill’s co-authors, said that while she supports innovation, it must come with “ethical responsibility.” Chatbots, the senator said, are engineered to hold people’s attention, including that of children.
“It’s very concerning when kids start to prefer interacting with AI over real relationships,” said Weber Pierson (D-La Mesa).
The bill defines companion chatbots as AI systems capable of meeting users’ social needs. Chatbots that businesses use for customer service are excluded.
The legislation has attracted support from parents who lost their children after they began chatting with chatbots. One of those parents is Megan Garcia, a Florida mom who sued Google and Character.AI after her son, Sewell Setzer III, died by suicide last year.
In the lawsuit, she alleges that the platform’s chatbots harmed her son’s mental health and failed to notify her or offer help when he expressed suicidal thoughts to those virtual characters.
Based in Menlo Park, California, Character.AI is a platform where people can create and interact with digital characters that mimic real and fictional people. The company says it takes teen safety seriously and has rolled out a feature that gives parents more information about the time their children spend on chatbots on the platform.
Character.AI asked a federal court to dismiss the case, but a federal judge in May allowed it to proceed.
©2025 Los Angeles Times. Distributed by Tribune Content Agency, LLC.