A bipartisan bill introduced in the US Senate on October 28th seeks to prohibit companies from providing access to artificial intelligence chatbot companions such as Character.ai and Replika to minors.
The bill, sponsored by Republican Sen. Josh Hawley of Missouri with bipartisan co-sponsors including Democratic Sen. Richard Blumenthal of Connecticut, would leave room for schools to continue using AI chatbots developed specifically for learning, such as Khan Academy’s Khanmigo.
But if enacted, it could complicate the potential use of chatbots in career and mental health counseling for students, experts say. It also appears to apply to the general-purpose large language models that students often use for class assignments, such as ChatGPT and Gemini.
Meanwhile, a second bill introduced this week by Sen. Bill Cassidy (R-Louisiana), who chairs the Senate Health, Education, Labor and Pensions Committee, aims to protect the privacy of student data when schools use AI tools.
Experts say the two bills likely mark the beginning of a series of bills aimed at strengthening the safety of AI tools, as AI technology becomes essential to a wide range of fields, including K-12 education.
“This is just the opening salvo,” said Amelia Vance, president of the Public Interest Privacy Center, a nonprofit focused on protecting the privacy of student data. “We’re having a hard time figuring out how to make this safe, how to hold these companies accountable, and how on earth does this apply to education?”
The bill has not yet been taken up by a Senate committee, but its effects may already be apparent: the day after it was introduced, Character.ai announced it would voluntarily bar minors from its platform.
Lawmakers accuse corporations of putting profits before children
The legislation introduced by Hawley and Blumenthal that would prohibit companies from providing AI companions to minors reflects growing concerns about how this technology could be misused by young people. The families of at least two teenagers are suing a tech company after chatbots were allegedly involved in their children’s suicides.
The parents of both children spoke at a press conference where Hawley, Blumenthal and others introduced the chatbot bill.
Lawmakers framed the bill as an effort to rein in big tech companies that prioritize their profits over the welfare of children.
Hawley did not name specific companies, but said, “We must not allow Silicon Valley’s pursuit of profit to consume and destroy America’s children,” adding, “The new AI revolution we are promised will only be good for the American people if it actually protects America’s children.”
“Big tech companies are using our children as guinea pigs in high-stakes high-tech experiments to make their industries more profitable,” Blumenthal added, again without naming any specific companies.
The bill would require tech companies’ AI companions (chatbots designed to develop human-like relationships with users) to provide “reasonable age verification” of users, in addition to asking for their date of birth.
It would also require AI chatbots to clearly disclose to users that they are not human and hold no professional qualifications, including in areas such as mental health counseling. Companies that knowingly provide minors with companion bots that solicit or produce sexual content would face criminal charges.
Importantly, the bill would not apply to chatbots that are part of a broader software application or that are designed solely to answer questions on a narrow range of subjects. The provision is intended to allow the continued use of “well-designed and secure chatbots that may be appropriate and useful in educational settings,” a Senate aide explained.
But it’s less clear whether the law applies to chatbots that provide services to students, such as career counseling or mental health services, Vance said.
Vance added that it is notable that lawmakers from both parties want to bar children outright from using certain technologies.
“This is not parental consent. This is a ban, and this is very different from what we have done with almost every other piece of technology,” Vance said. “They don’t think it’s safe and they don’t think companies will fix it.”
Common Sense Media, a research and advocacy group focused on youth and technology, has not yet endorsed the bill, but Danny Weiss, the group’s chief advocacy officer, praised lawmakers for “the first bill in Congress that prioritizes the safety of users of AI products, including the safety of children.”
The measure comes just months after Common Sense reported that about one in three teens who use an AI companion find their time with the technology more fulfilling than time spent with real-life friends.
The bill also comes as the Federal Trade Commission investigates potential problems with chatbots designed to simulate human emotions and communicate with users as if they were friends or confidants. The FTC has issued orders for information to the companies behind ChatGPT, Gemini, Character.ai, Snapchat, Instagram, WhatsApp, and Grok.
Bill would put education technology companies that violate student privacy on a federal list
Meanwhile, Cassidy’s bill would use a combination of carrots and sticks to protect the privacy of student data handled by education technology, including AI tools. For example, it calls for a new federal Golden Seal of Student Data Privacy Excellence award for schools and districts with strong parental consent systems for education technology tools.
It would also allow parents to review elements of the contracts districts sign with technology companies before those tools are used in classrooms, and would prohibit students’ photos from being used to train facial recognition AI tools without parental consent.
The bill also calls for creating a federal list of education technology vendors that fail to comply with student data privacy requirements; violators could remain on the list for up to five years. It further aims to strengthen research on how AI can improve teaching and learning, and to highlight how districts can use federal funding to help teachers better understand AI.
Tammy Whincup, CEO of digital safety platform Securly, said she was pleased with Cassidy’s bill as a “first step” in addressing the safety implications of using AI in education.
She warned that protecting students in the age of AI is a complex challenge.
Districts can block offending websites, but “AI is completely different,” Whincup said. “AI is like water and air. AI is going to be in every tool of the future. So our first step is to understand how [students] are using AI, not just for safety and health reasons, but also for teaching and learning.”