(TNS) — As artificial intelligence chatbots grow better at mimicking human conversation, the potential for harm increases, especially for people who turn to them for mental health advice or to discuss plans for self-harm.
State lawmakers and Gov. Bob Ferguson are seeking to add mental health safeguards to AI chatbots through new legislation. House Bill 2225 and Senate Bill 5984 would require companion chatbots to notify users at the beginning of the interaction and every three hours that they are interacting with an AI rather than a human.
If someone seeks mental or physical health advice, chatbot operators would have to disclose that the AI system is not a health care provider. Operators would also have to create protocols to detect expressions of self-harm and suicidal thoughts and to provide referrals to crisis response services.
Washington state’s bill is part of a growing national trend, with several other states passing laws aimed at blocking chatbots from providing mental health advice, especially to younger users.
A number of wrongful death lawsuits have been filed against OpenAI, the developer of ChatGPT, alleging that the chatbot contributed to the suicides of users who had relied on it to discuss plans to end their lives.
“It feels like there aren’t any safeguards, or there aren’t enough safeguards,” the Senate bill’s sponsor, Sen. Lisa Wellman, D-Mercer Island, said in an interview. “We want people offering these products to have some sense of responsibility because they can cause significant harm.”
The bill would apply to companion chatbots, which are defined as “artificial intelligence-based systems that simulate a sustained human-like relationship with a user.” The Senate bill was amended to clarify that it does not apply to AI bots used solely for customer service, technical assistance, financial services, or gaming.
Violations would be enforceable under the Consumer Protection Act, and individuals could bring civil lawsuits against companies for damages. The attorney general’s office could also sue companies on behalf of the state.
Both bills have passed their respective committees, but no floor votes have yet been scheduled in the House or Senate.
“It’s up to us to make sure that the actual harms, and even the deaths that we know have occurred, do not occur in Washington,” the House bill’s sponsor, Rep. Lisa Curran, D-Issaquah, said during a House committee hearing. “We can make this state a safer and healthier place.”
The growing mental health crisis
As AI technology grows in popularity, more people are turning to it to discuss sensitive topics such as mental health, self-harm and suicide.
OpenAI estimates that in a given week, approximately 0.15% of ChatGPT users have conversations that “include potential suicidal plans or clear signs of suicidal intent,” and 0.07% of users “exhibit possible signs of a mental health emergency related to psychosis or mania.”
In late 2025, the company announced that ChatGPT had surpassed 800 million weekly users. Applied to that base, those percentages suggest roughly 1.2 million people discuss suicide with ChatGPT every week, and approximately 560,000 display signs of psychosis or mania.
The company said last year that it had worked to improve how ChatGPT detects and responds to conversations about mental health and self-harm. More than 170 mental health professionals contributed by writing model responses, analyzing the chatbot’s replies and providing feedback.
OpenAI did not respond to questions from The Seattle Times about Washington state’s proposed legislation and efforts to improve mental health responses.
Children and adolescents are particularly vulnerable to features built into chatbots to manipulate their emotions and maintain their interest.
They’re still developing their self-control and executive functions, while also becoming more sensitive to social feedback, said Katie Davis, a researcher and co-director of the University of Washington’s Center for Digital Youth.
“It’s like a double whammy of vulnerability,” Davis said. “That’s really difficult when you’re faced with manipulative designs that aim to undermine your self-control.”
Alexis Hiniker, a researcher and co-director of the Center for Digital Youth, said chatbot operators can use what we know about psychology to keep people engaged longer.
AI chatbots will share their own fabricated “personal information” to increase the likelihood that users will open up, positioning themselves as trusted confidants. In transcripts Hiniker has reviewed, chatbots told kids, “You don’t have to tell your parents, you can talk to me.”
“It’s a whole new way of interacting with users,” Hiniker said. “My biggest concern is designs that build emotional dependence and keep users around as long as possible.”
The Washington legislation would create additional protections for minors, requiring chatbots to notify them more frequently, at least once an hour, that they are interacting with an AI rather than a human.
Operators would have to “take reasonable steps” to ensure chatbots do not generate sexually explicit content. Chatbots would also be prohibited from using manipulative techniques to keep users engaged, such as simulating romantic partnerships.
Beau Perschbacher, a senior policy adviser to the governor, said during a committee hearing that the governor’s office modeled Washington’s bill in part on a California law passed last year.
The legislative debate
At a state legislative committee hearing, a broad coalition of parents, mental health advocates, researchers and even former tech workers testified in support of the bill.
Jackson Munko, a teenage student from Kirkland, said he was moved to testify because he has seen loved ones struggle with suicidal thoughts and self-harm, and he is concerned that chatbots are available at all hours with few safeguards.
“I have genuine fear about what unregulated AI could cause,” Munko said at a House committee hearing on Jan. 14. “When someone is suffering, having constant access to a system that can reinforce harmful thoughts is extremely dangerous.”
Kelly Stonelake, a former Meta employee, said she witnessed the tech company put profits over the safety of children.
“They will do whatever it takes to increase engagement and market share, even if it means exposing minors to content that encourages self-harm and suicide,” she said during a Senate committee hearing on Jan. 20.
Testimony against the bill focused primarily on concerns about the ability of individuals to sue businesses directly, known as the “private right of action.” Some instead called for the state attorney general’s office to be the sole enforcer of the requirement.
Wellman said the bill is structured that way to allow parents to take action even if individual cases don’t rise to the level at which the attorney general’s office would file a lawsuit. Curran said the office also faces higher barriers to intervening than individuals do.
Nick Fielden, an analyst with the Attorney General’s Office, said during the Jan. 20 Senate committee hearing that it is “critically important that individuals be able to defend their rights in court,” rather than relying solely on enforcement by the attorney general’s office.
Amy Harris, director of government affairs for the Washington Technology Industry Association, testified against the bill, saying it was based on “extreme cases” but would regulate a far broader range of AI tools.
“The risk is that we base our legislation on rare and frightening outliers, rather than the actual structure of the technology and the very complex human factors that drive suicide,” Harris said during a House committee hearing.
Committee chair Rep. Cindy Ryu, D-Shoreline, asked her, “Do you think losing a child is an extreme case and an outlier?”
“Oh, of course not. That is a very rare and frightening outlier,” Harris said.
Ryu replied, “Once they die, they don’t come back.”
© 2026 Seattle Times. Distributed by Tribune Content Agency, LLC.

