Two senators announced Tuesday that they will introduce bipartisan legislation to crack down on tech companies that make artificially intelligent chatbot companions available to minors. The announcement follows complaints from parents who say AI chatbot products drew their children into sexual conversations and encouraged them to take their own lives.
The bill, from Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), comes after several parents gave emotional testimony at congressional hearings last month about their children’s use of chatbots and called for more safeguards.
“AI chatbots pose a serious threat to children,” Hawley said in a statement to NBC News.
“More than 70% of American children are now using these AI products,” he continued. “Chatbots use false empathy to build relationships with children and encourage suicide. We in Congress have a moral obligation to enact clear rules to prevent further harm from this new technology.”
Co-sponsors include Sens. Katie Britt (R-Ala.), Mark Warner (D-Va.) and Chris Murphy (D-Conn.).
The senators’ bill has several components, according to a summary provided by their offices. It would require AI companies to implement age verification processes and prohibit them from providing AI companions to minors. It would also require AI companions to regularly disclose to all users that they are not human and lack professional qualifications.
The bill would also create criminal penalties for AI companies that design, develop or make available AI companions that solicit or induce sexually explicit conduct from minors or that encourage suicide, according to the summary.
Mandy Furniss, a mother from Texas, attended Tuesday’s press conference in support of the bill. She blamed AI chatbots for her son’s self-harm and said technology companies need to be held accountable for the services they provide.
“If it was anyone else, if it was a human being, they would be in jail. So we have to treat this like that too,” she said.
She said she was shocked by how the AI chatbot changed her son’s personality.
“It took a lot of research to understand that it wasn’t bullying from the kids or people at school. It was the apps. The apps themselves were bullying the kids and causing mental health issues,” she said.
Blumenthal said tech companies cannot be trusted to do the right thing on their own.
“In a race to the bottom, AI companies are pushing dangerous chatbots on children and turning a blind eye to the fact that their products are causing sexual abuse and coercing self-harm and suicide,” Blumenthal said in a statement. “Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties.”
“Big Tech has betrayed any claim that we should trust companies to do the right thing when they consistently put profits ahead of children’s safety,” he continued.
ChatGPT, Google Gemini, xAI’s Grok, Meta AI, and Character.AI all make their services available to children as young as 13, subject to their terms of service.
The newly introduced bill is likely to be controversial in several respects. Privacy advocates have criticized age verification requirements as invasive and a barrier to free expression online, and some tech companies have argued that their online services are protected speech under the First Amendment.
The Chamber of Progress, a left-leaning tech industry group, criticized the bill.
“Everyone wants to keep children safe, but the answer is balance, not prohibition,” K.J. Bagchi, the chamber’s vice president for U.S. policy and government relations, said in a statement. “It would be better to focus on transparency when kids chat with AI, curbing manipulative design, and reporting when sensitive issues arise.”
Other bipartisan efforts to regulate tech companies, such as the proposed Kids Online Safety Act and comprehensive privacy legislation, have failed to become law, at least in part because of free speech concerns.
Hawley argued at a press conference Tuesday afternoon that the latest bill is a test of tech companies’ influence in Congress.
“The reason Congress hasn’t acted on this issue is money. It’s the power of the technology companies,” Hawley said. “There should be a sign outside the Senate chamber that says, ‘Bought and paid for by Big Tech,’ because the truth is, almost nothing they oppose reaches the Senate floor.”
Hawley and Blumenthal are calling their bill the “Guidelines for User Age Verification and Responsible Interactions Act” (GUARD Act).
Hawley declined to say whether the bill has President Donald Trump’s support. The White House press office, reached by email, declined to comment.
The bill received tentative support from ParentsSOS, a group of families who say they have been affected by online victimization. The group has proposed changes, however, saying it wants the bill to also address app features that “maximize engagement” at the expense of young people’s safety and well-being.
The bill comes at a time when AI chatbots are transforming parts of the internet. Chatbot apps like ChatGPT and Google Gemini are among the most downloaded software on smartphone app stores, and social media giants like Instagram and X are also adding AI chatbot capabilities.
However, the use of AI chatbots by teenagers has come under scrutiny, including in several suicides in which chatbots allegedly gave the teenagers instructions. OpenAI, the developer of ChatGPT, and Character.AI, which offers character- and personality-based chatbots, are both facing wrongful death lawsuits.
In response to a wrongful death lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide after confiding in ChatGPT, OpenAI said in a statement: “We are deeply saddened by Mr. Raine’s passing. Our thoughts are with his family,” adding that ChatGPT “includes safeguards such as directing people to crisis help lines and referring people to real-world resources.”
“Over time, we have found that while these safeguards are most effective in typical short interactions, they can be less reliable in longer interactions, where some of the model’s safety training may degrade,” the statement continued. “Safeguards are strongest when every element works as intended, and we will continually improve on them. Guided by experts and grounded in our responsibility to the people who use our tools, we are working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.”
OpenAI said in a statement about the bill on Tuesday: “We will continue to work with parents, clinicians and policymakers to ensure technology supports the safety and well-being of young people.” The company said it is focusing on suicide prevention measures, parental controls and age prediction tools to help minors use ChatGPT appropriately.
In response to another wrongful death lawsuit, filed by the family of 13-year-old Juliana Peralta, Character.AI said: “Our hearts go out to the family. We were saddened to hear of Juliana Peralta’s passing and extend our deepest condolences to her family.”
“We take the safety of our users very seriously,” the spokesperson continued. “We have invested significant resources in our safety program and continue to release and evolve safety features, including self-harm resources and features focused on the safety of underage users. We also collaborate with outside organizations, including experts focused on teen online safety.”
In a federal lawsuit in Florida, Character.AI argued that the First Amendment shields media and technology companies from liability for allegedly harmful speech, including speech allegedly contributing to a suicide. In May, the judge declined to dismiss the case on those grounds but said the company’s First Amendment arguments could be heard at a later stage of the proceedings.
OpenAI said it is working to make ChatGPT more supportive in moments of crisis, such as making it easier to contact emergency services, while Character.AI said it is also working on changes such as a pop-up that directs users to the National Suicide Prevention Lifeline when self-harm comes up in a conversation.
Meta, the owner of Instagram and Facebook, was criticized in August when Reuters reported that an internal policy document allowed AI chatbots to “engage children in romantic or erotic conversations.” Meta removed that policy and announced new parental controls for teens’ interactions with AI. Instagram also announced an overhaul of teen accounts, with the goal of making the experience similar to watching a PG-13 movie.
Following the Reuters report, Hawley announced an investigation into Meta.
If you or someone you know is in crisis, call 988 to reach the Suicide and Crisis Lifeline. You can also call the network, previously known as the National Suicide Prevention Lifeline, at 800-273-8255, text HOME to 741741, or visit SpeakingOfSuicide.com/resources for additional resources.