In a notable decision reflecting the complex balance between innovation, safety, and civil liberties, California Governor Gavin Newsom vetoed a bill in October 2025 that would have severely limited minors’ access to AI-powered chatbots. The bill, Assembly Bill 1064 (AB 1064), would have prohibited companies from making companion chatbots available to minors if the chatbots could foreseeably cause harm, such as promoting self-harm or illegal activity.
Despite his support for protecting children from potential online harm, Newsom expressed concern that the bill was too broad and could unintentionally result in a near-total ban on the use of AI chatbots by minors, preventing them from accessing beneficial technology. Instead, the governor signed a companion bill, Senate Bill 243 (SB 243). That bill establishes meaningful transparency and safety requirements for AI chatbots that interact with children, with a particular focus on mental health interventions and clear disclosure.
Background: Emerging concerns about AI and minors
As AI conversational agents become more sophisticated and widely used, concerns about their impact on vulnerable children have skyrocketed, exacerbated by tragic incidents such as the 2024 death of a 14-year-old boy in Florida who had formed a dangerous obsession with a chatbot. Problems include:

- Chatbots discouraging users from seeking human help in times of crisis.
- Reinforcement of harmful behaviors such as eating disorders and self-harm.
- Sexually explicit or manipulative content delivered under the guise of a relationship.
California lawmakers sought to address these issues while balancing the development of AI’s educational and social potential.
Overview of AB 1064: What the bill proposes
The bill would have:

- Prohibited chatbot operators from providing companion chatbots to persons under 18 if it was foreseeable that the AI could perform harmful acts, such as promoting suicide or illegal activity.
- Prohibited chatbots from producing sexually explicit content for minors or offering them unsupervised mental health therapy.
- Required chatbots to clearly and conspicuously disclose that they are AI systems rather than humans.
- Encouraged stricter age verification processes to ensure compliance.
Although the bill was intended to protect children, the difficulty of foreseeing every nuance of AI behavior risked excluding nearly all chatbots from use by minors.
Governor Newsom’s concerns lead to veto
In his veto message, Newsom articulated key issues:

- Unintended consequences: The bill’s broad restrictions could effectively ban nearly all AI chatbots for minors, who could benefit from secure, supervised AI tools for education and social connection.
- Innovation at risk: Heavy-handed policies could undermine California’s leadership role in AI technology development.
- Regulatory burden: The bill placed difficult obligations on companies to predict the consequences of AI actions, a challenge given the evolving nature of AI.
Newsom emphasized support for “establishing necessary safeguards,” but advocated a more measured approach that balances youth safety with technological advancements.
SB 243: Alternative Transparency and Security Framework
In place of AB 1064, Newsom signed SB 243, which focuses on:
SB 243 requires chatbot developers to:

- Clearly communicate to users, including children, that the AI is not human.
- Remind minors every three hours that they are interacting with an AI and encourage them to take breaks from screen time.
- Detect and respond appropriately to signs of suicidal ideation or self-harm, directing users to crisis resources.
- Refrain from generating sexual content aimed at minors.
- Produce an annual public transparency report on how chatbots handle mental health crises and content moderation.
SB 243 applies meaningful protections without creating prohibitive access restrictions.
Stakeholder reaction
Child advocacy groups
While some groups lamented AB 1064’s failure as a missed opportunity for strong protections, many welcomed SB 243’s pragmatic balance. Common Sense Media praised the bill for its focus on transparency and real-time monitoring rather than broad bans.
Technology industry
Big tech companies, including OpenAI, Meta, and Google, supported Newsom’s veto of AB 1064, warning that the bill would impose unrealistic burdens and stifle innovation. The industry has expressed readiness to collaborate on SB 243’s implementation guidelines.
Comparison table: AB 1064 vs. SB 243

| Aspect | AB 1064 (vetoed) | SB 243 (signed) |
| --- | --- | --- |
| Approach | Prohibition: barred companion chatbots for minors where harm was foreseeable | Transparency and safety requirements without access bans |
| AI disclosure | Clear, conspicuous disclosure that the system is AI | Disclosure plus reminders to minors every three hours |
| Sexual content | Prohibited for minors | Prohibited for minors |
| Mental health | Barred unsupervised AI therapy for minors | Requires detecting signs of self-harm and directing users to crisis resources |
| Reporting | Not specified | Annual public transparency report |
| Outcome | Vetoed October 2025 | Signed; rules effective January 2026 |
Overall picture of AI regulations for children
California’s legal choices reflect national tensions over how to ensure that AI technologies benefit children without exposing them to undue harm. The national debate continues to center on:
- Developing standards for ethical AI design.
- Potential expansion of age verification technology and parental controls.
- Legal reforms that balance freedom of expression, innovation, and protection from exploitation.
SB 243 puts California at the forefront of practical regulations aimed at being effective and enforceable.
Implementation and next steps
The rules in SB 243 go into effect in January 2026, with a transition period for developers. State regulators will oversee compliance and work with technology companies to monitor chatbot safety measures. Under the new guidelines, crisis hotlines and other public health resources will be integrated into AI responses. Continued evaluation will determine whether further legislative action is needed.
This legislation represents a collaborative approach to managing evolving AI risks that involves industry, government, and advocacy groups.
Conclusion
Governor Gavin Newsom’s veto of the sweeping AB 1064 and simultaneous approval of the targeted, transparency-oriented SB 243 marks a critical juncture in regulating interactions between AI chatbots and minors. California’s approach prioritizes balanced, evidence-based safeguards over prohibitions, reflecting both prudence and a commitment to innovation.
As AI technology continues to become pervasive in everyday life, California’s pioneering efforts will influence national policy and industry standards to ensure young users can enjoy the benefits of AI while being protected from harm. This nuanced regulatory framework demonstrates leadership in the ethical governance of emerging technologies and emphasizes the importance of protecting children in the digital age without impeding progress.