California State Sen. Steve Padilla, a Democrat from San Diego, on Monday introduced a bill that would halt the sale of toys with artificial intelligence chatbot functionality to children under 18 for four years, according to a new report from TechCrunch. The bill, Senate Bill 867, aims to give the state time to develop safety regulations protecting children from AI-powered toys that engage in inappropriate conversations or teach children how to harm themselves.
“Chatbots and other AI tools may become an essential part of our lives in the future, but the dangers they pose require us to take bold action to protect our children,” Sen. Padilla said in a statement posted online.
“Our safety regulations regarding this type of technology are still in their infancy and need to grow as exponentially as the capabilities of this technology. Pausing the sale of these chatbot-integrated toys gives us time to create proper safety guidelines and frameworks for these toys to follow. Our children cannot be used as lab rats for Big Tech’s experiments,” Padilla continued.
In recent months, several horror stories have emerged of AI-powered toys conversing inappropriately with children. Kumma, a teddy bear made by FoloToy, discussed sexual fetishes with children last year until OpenAI cut off the company's access to GPT-4o. The bear would also sometimes tell children where to find knives.
Mattel announced a partnership with OpenAI in June 2025 to develop AI-assisted toys, though no such products have been released yet. The consumer advocacy group Public Interest Education Fund also tested several AI toys and found that many had limited parental controls and could reveal to children the location of dangerous objects such as guns and matches. One key takeaway: the longer a child interacts with an AI toy, the more its guardrails seem to fail.
AI chatbots have recently come under fire in a range of situations, particularly following reports of people dying by suicide after extended conversations with chatbots. Last year, Gizmodo filed a Freedom of Information Act request with the Federal Trade Commission for consumer complaints about OpenAI's ChatGPT, including reports of AI-induced psychosis. One complaint, from a woman in Utah, describes a chatbot telling her son not to take his medication, advice his parents said put him in danger. Building that kind of capability into a teddy bear would obviously raise even bigger problems.
President Donald Trump issued an executive order last month ostensibly banning states from enacting their own laws regulating AI. But setting aside questions about whether the president has the authority to do that by executive order, the EO does carve out an exception for child safety laws.
It is unclear whether Padilla’s new bill will pass. But even if it clears the California Legislature, it could be vetoed by Gov. Gavin Newsom, an ally of Big Tech and a Democrat who loves to veto bills that might be too good for humanity. Back in October, Newsom vetoed an anti-roboboss bill that would have prevented companies from letting automated systems decide when to fire or discipline employees.