A bill moving through the California Legislature aims to address the harmful effects of “companion” chatbots, AI systems designed to simulate human-like relationships and provide emotional support that are often marketed to vulnerable users such as children and people in emotional distress.
The bill, introduced by state Sen. Steve Padilla, would require companies operating companion chatbots to avoid using addictive tricks and unpredictable rewards, to remind users at the start of an interaction and every three hours thereafter that they are talking to a machine rather than a person, and to clearly warn users that chatbots may not be suitable for minors.
If passed, it would be one of the first laws in the nation to regulate AI companions with clear safety standards and user protections.
“We can put in place common-sense protections that help shield children and other vulnerable users from the predatory and addictive properties that we know chatbots have,” Padilla said.
The bill is partly inspired by the tragic story of Sewell Setzer III, a 14-year-old Florida boy who took his own life last year after forming a parasocial relationship with a chatbot on Character.AI, a platform that allows users to interact with custom-built AI personas. His mother, Megan Garcia, told the Washington Post that Setzer had used the chatbots day and night and had told them he was considering suicide.
A March study by MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence and “problematic” use, the term the researchers applied to addictive patterns of chatbot use. The study also found that companion chatbots can be more addictive than social media because of their ability to figure out what users want to hear and give it to them.
“It did not offer him any help,” Garcia said at a press conference Tuesday. “This chatbot never referred him to a suicide crisis hotline. It never broke character and never said, ‘I’m not human, I’m an AI.’”
The bill would require chatbot operators to have a process for handling signs of suicidal ideation and self-harm. If a user expresses suicidal ideation, the chatbot would have to respond with resources such as a suicide hotline, and operators would have to publish those procedures. Additionally, businesses would have to submit annual reports, without revealing personal information, on the number of times they detected suicidal ideation in users or a chatbot raised the topic.
Under the bill, anyone harmed by a violation could file a lawsuit seeking damages of up to $1,000 per violation.
“The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety and, above all, accountability,” Padilla said.
Some companies oppose the proposed law, raising concerns about its potential impact on innovation. Last week, executives from TechNet, a statewide network of technology CEOs, drafted an open letter opposing the bill, arguing that its definition of companion chatbots is too broad and that the annual reporting requirements would be too costly.
“What we’re seeing is part of a broader political effort by large AI interests to smother any kind of regulation,” Padilla said in response to questions about the opposition. “We can embrace the positive benefits of deploying this technology, and at the same time protect the most vulnerable among us.”