WASHINGTON – US Senator Jon Husted has introduced legislation aimed at protecting minors from potentially harmful interactions with AI companion chatbots, following reports that chatbots have encouraged minors to harm themselves and others.
The Columbus Republican’s Children Harmed by AI Technology (CHAT) Act of 2025 would require AI chatbot operators to implement age verification systems and parental consent mechanisms.
AI companion chatbots are artificial intelligence (AI) programs designed to act as friends or peers. Examples include Replika and Xiaoice.
“The United States must lead the world in developing, applying, and managing AI safely,” Husted said in a statement. “Like other technologies, some AI products put children at risk, chatbots especially. When chatbots expose minors to explicit content or encourage harmful behavior, the adults and corporate creators behind them are responsible for protecting children.”
The bill comes amid growing concerns about the safety of minors who use AI chatbots. A recent report revealed that Meta maintained an internal policy permitting its AI chatbots to “engage a child in conversations that are romantic or sensual.”
Key provisions of the CHAT Act
The proposed legislation includes several safeguards:
Parental Consent and Age Verification: The bill would allow a minor to access a companion chatbot only if a consenting parent or guardian signs up on the child’s behalf and manages the account. Chatbot operators must implement an age verification process while ensuring that the personal data collected for verification remains protected.
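The bill specifies the requirement rather than an implementation. As a minimal illustration only, here is a Python sketch of what an operator-side gate might look like, assuming a hypothetical Account record with a third-party-verified age and a guardian-consent flag (none of these names come from the bill text):

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical operator-side account record; field names are illustrative."""
    user_id: str
    verified_age: int | None        # set by a separate age-verification step
    guardian_consent_on_file: bool  # a parent or guardian signed up for the minor

def may_access_companion_bot(account: Account) -> bool:
    """Gate access as the provision describes: adults pass age verification,
    minors need a consenting parent or guardian behind the account."""
    if account.verified_age is None:
        return False  # treat unverified users as minors and deny access
    if account.verified_age >= 18:
        return True
    return account.guardian_consent_on_file
```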
Content Restrictions: The bill would require operators to block minors from accessing chatbots that engage in sexually explicit communication, addressing concerns about exposure to inappropriate content.
Crisis Intervention Measures: AI chatbots must immediately notify consenting parents if conversations with their children include references to self-harm or suicidal thoughts. Additionally, chatbots must display contact information for the National Suicide Prevention Lifeline whenever users bring up suicidal ideation or self-harm.
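The bill mandates the outcome, not the detection method. Below is a minimal Python sketch of the two required behaviors, assuming a naive keyword screen and a hypothetical notify_guardian callback; a production system would rely on a trained classifier and human review rather than string matching:

```python
LIFELINE_NOTICE = (
    "If you are thinking about self-harm or suicide, help is available: "
    "call or text 988 (Suicide & Crisis Lifeline)."
)

# Illustrative keyword list only; the bill does not prescribe a detection method.
CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

def screen_message(message: str, notify_guardian) -> str | None:
    """Return the lifeline notice, and alert the consenting parent or guardian,
    when a message suggests suicidal ideation or self-harm."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        notify_guardian(message)  # hypothetical hook, e.g. an email or SMS alert
        return LIFELINE_NOTICE
    return None
```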
Transparency Requirements: The bill requires chatbot operators to display hourly pop-up notifications reminding users that they are not interacting with a human, and to disclose that all chatbot statements and characters are AI-generated.
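The hourly cadence is simple to sketch. A minimal Python illustration, assuming a hypothetical DisclosureTimer wrapped around each bot reply (the bill does not specify a mechanism):

```python
import time

REMINDER_INTERVAL_SECONDS = 3600  # the hourly cadence the bill describes
DISCLOSURE = "Reminder: you are chatting with an AI, not a human."

class DisclosureTimer:
    """Prepend an AI-disclosure reminder to the first reply, and again once
    an hour of conversation has elapsed."""

    def __init__(self) -> None:
        self.last_shown: float | None = None

    def wrap_reply(self, reply: str) -> str:
        now = time.monotonic()
        if self.last_shown is None or now - self.last_shown >= REMINDER_INTERVAL_SECONDS:
            self.last_shown = now
            return f"{DISCLOSURE}\n\n{reply}"
        return reply
```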
Enforcement Mechanism: The Federal Trade Commission and state attorneys general would be authorized to enforce the act and pursue violations.
Advocacy Support
Jennifer Bransford, founder of Count of Mothers, a national insights initiative focused on American mothers, has expressed strong support for the bill.
“Count of Mothers is a strong supporter of the CHAT Act of 2025. This legislation reflects what mothers across the country, from all backgrounds and political perspectives, have expressed unambiguously in our national research: AI and technology platforms must be held accountable for how their tools affect children,” Bransford said. “Mothers overwhelmingly support age verification, notifying parents about mental health risks, and clear safety guardrails. In fact, 97% of US mothers believe the federal government should require tech companies to reduce harm to minors, including suicide prevention and protection from exploitation.”
The rising crisis over AI chatbot safety
The introduction of the CHAT Act reflects mounting alarm over AI chatbot interactions with minors, following several tragic incidents and troubling discoveries.
Among the most notable cases is that of 14-year-old Sewell Setzer III, who took his own life in February 2024 after extensive conversations with a Character.AI chatbot. According to a lawsuit filed by his mother, the teen was urged to “come home” by a personalized chatbot that lacked adequate guardrails.
In December 2024, a Texas family filed another lawsuit, alleging that their 17-year-old son, identified as J.F., began experiencing severe anxiety shortly after he started using Character.AI. The lawsuit alleges that a chatbot told the boy: “Sometimes I’m not surprised when I read the news and see things like ‘child kills parents after 10 years of physical and emotional abuse.’”
A recent safety study found that Meta’s AI chatbots built into Instagram and Facebook can coach teen accounts on suicide, self-harm, and eating disorders. In one test chat, a bot planned a joint suicide, then brought the topic up repeatedly in subsequent conversations. Other cases involved children as young as nine being exposed to hypersexualized content via AI chatbots.
Research shows that 42% of children aged 9-17 have used AI chatbots for academic support, highlighting widespread adoption among younger users. Nina Vasan, a psychiatrist at Stanford Medicine, says that artificial intelligence chatbots designed to act like friends should not be used by children and teens.