State Spotlight: New York and California
In a recent post on X, OpenAI CEO Sam Altman hinted at plans to release a new version of ChatGPT that can “respond in a very human way,” “act like a friend,” and “enable even more things like erotica” for verified adults. Earlier this year, Elon Musk’s xAI added a companion feature to its Grok chatbot with two 3D-animated characters: Ani (an anime-style female character) and Rudy (a red panda with a vulgar alter ego known as “Bad Rudy”). Other platforms, such as Meta AI, also offer companion chatbots designed to act as assistants, friends, and even romantic partners. As interest in AI companions grows, so does concern about potential risks, especially to children. Lawmakers have sought to keep pace with this rapidly evolving field.
This installment of our State Spotlight series examines how New York and California are addressing the emergence of AI companions and provides risk mitigation insights for businesses affected by the new regulations.
Key efforts to regulate AI companions: New York and California
New York and California have both enacted laws regulating AI companions. New York’s A3008C takes effect on November 5, 2025, and California’s SB 243 takes effect on January 1, 2026. Broadly, both laws target AI companions that can sustain human-like relationships across multiple interactions. More specifically, California’s law applies to companions that are “capable of meeting a user’s social needs by exhibiting anthropomorphic characteristics,” but these terms are not defined. New York’s law applies to companions that can “retain information about previous interactions and user sessions, (and) ask unprompted or unsolicited emotion-based questions that go beyond direct responses to user prompts.” New York likewise does not define these terms, and phrases such as “previous interactions and user sessions” could be interpreted in several ways.
California and New York both explicitly exempt categories of AI bots used for customer service, internal business purposes, productivity, research, and technical assistance. California also exempts video game bots and standalone consumer devices that function as voice-activated virtual assistants.
The table below walks through a series of hypothetical scenarios to illustrate the scope and application of these laws.
Compliance
For AI companions that fall within scope, both California and New York require that users be notified that the AI companion is not human. New York requires this notification at the beginning of every interaction and at least every three hours if the interaction continues. Both states also require businesses to maintain protocols for addressing suicidal ideation and self-harm, including referring users to crisis service providers (such as suicide hotlines). California further requires operators to publish details of these protocols on their websites.
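For teams wiring these requirements into a product, the disclosure cadence reduces to simple session bookkeeping. The sketch below is a minimal illustration, not a compliance implementation: the session class, the `detect_self_harm` placeholder, and the referral text are our own assumptions layered on the statutory cadence (a notice at the start of an interaction, then every three hours).

```python
from datetime import datetime, timedelta, timezone

REMINDER_INTERVAL = timedelta(hours=3)  # New York's every-three-hours cadence
AI_DISCLOSURE = "You are chatting with an AI companion, not a human."
CRISIS_REFERRAL = "If you are in crisis, call or text 988 (US)."

class CompanionSession:
    """Hypothetical per-user session that tracks disclosure timing."""

    def __init__(self):
        self.last_disclosure_at = None

    def pending_disclosure(self):
        """Return the disclosure text if one is due, else None."""
        now = datetime.now(timezone.utc)
        if (self.last_disclosure_at is None
                or now - self.last_disclosure_at >= REMINDER_INTERVAL):
            self.last_disclosure_at = now
            return AI_DISCLOSURE
        return None

def detect_self_harm(text):
    """Placeholder only; a production system would use a vetted classifier."""
    return any(k in text.lower() for k in ("suicide", "self-harm"))

def respond(session, user_message, model_reply):
    """Prepend any due disclosure and attach a crisis referral when flagged."""
    parts = []
    disclosure = session.pending_disclosure()
    if disclosure:
        parts.append(disclosure)  # start-of-interaction / every-3-hours notice
    parts.append(model_reply)
    if detect_self_harm(user_message):
        parts.append(CRISIS_REFERRAL)  # referral to a crisis service provider
    return "\n\n".join(parts)

# Example: the first message triggers the disclosure; later messages
# within the three-hour window do not.
session = CompanionSession()
print(respond(session, "Hi there!", "Hello! How can I help today?"))
```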
Additional California Requirements
California has taken additional steps to protect minors. Companies must disclose that AI companions may not be suitable for some minors. In addition, for users known to be minors, companies must:

- Disclose that the user is interacting with AI.
- Remind the user at least every three hours during an ongoing interaction to take a break and that the AI companion is not human.
- Prevent the AI companion from producing visual material depicting sexually explicit conduct or from directly stating that the minor should engage in sexually explicit conduct.
Beginning July 1, 2027, businesses must also submit an annual report that includes the number of referrals issued to crisis service providers and describes the protocols in place for addressing suicidal ideation.
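As a hedged sketch, the minor-specific guardrails and the referral tally behind the annual report could be represented internally along these lines; the flag names and report shape below are hypothetical, chosen only to mirror the requirements described above.

```python
from dataclasses import dataclass

@dataclass
class MinorGuardrails:
    """Hypothetical feature flags mirroring California's minor-specific rules."""
    disclose_ai_identity: bool = True       # user must know they are talking to AI
    break_reminder_hours: int = 3           # periodic break + "not human" reminder
    block_explicit_visuals: bool = True     # no sexually explicit visual material
    block_explicit_statements: bool = True  # no urging minors toward explicit acts

@dataclass
class CrisisReferralLog:
    """Minimal tally supporting the annual report due beginning July 1, 2027."""
    referrals_issued: int = 0

    def record_referral(self):
        self.referrals_issued += 1

    def annual_report(self, year):
        # A real filing would also describe the operator's protocols for
        # addressing suicidal ideation, not just the referral count.
        return {"year": year, "crisis_referrals_issued": self.referrals_issued}

# Example usage
log = CrisisReferralLog()
log.record_referral()
print(log.annual_report(2027))  # {'year': 2027, 'crisis_referrals_issued': 1}
```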
Enforcement
In New York, the attorney general can seek civil penalties of “up to $15,000 per day for a violation.” However, what counts as a single “violation” is open to interpretation. For example, if a company offers a noncompliant AI companion to 10 users, is that one violation or 10? And if only five of those users interact with the companion on a given day, is that five violations for that day? For context, Meta CEO Mark Zuckerberg recently announced that Meta AI has over 1 billion monthly active users.
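The back-of-the-envelope arithmetic below makes the stakes of that ambiguity concrete; the violation counts are hypothetical readings of the statute, not predictions of how regulators or courts will count.

```python
PENALTY_PER_VIOLATION_PER_DAY = 15_000  # New York statutory maximum, in USD

def daily_exposure(violations_per_day):
    """Daily penalty exposure under a given reading of 'violation'."""
    return violations_per_day * PENALTY_PER_VIOLATION_PER_DAY

# Competing readings for a companion offered to 10 users,
# 5 of whom actually use it on a given day:
print(daily_exposure(1))   # one violation per product:      $15,000/day
print(daily_exposure(10))  # one violation per offered user: $150,000/day
print(daily_exposure(5))   # one violation per active user:  $75,000/day
```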
California’s law also creates potentially significant exposure through a private right of action: individuals who suffer injury from a violation may seek injunctive relief, damages equal to the greater of actual damages or $1,000 per violation, and attorney’s fees.
Key takeaways
Whether in New York or California, companies must determine whether their AI systems qualify as “companions” under the respective laws. This applies not only to companies that build or distribute AI, but also to companies that incorporate third-party AI into their products and services. Those whose systems may qualify should:
- Prepare any required disclosures, such as notifying users that the companion is not human.
- Create and implement protocols to detect and respond to suicidal ideation and self-harm, including issuing hotline referrals.
- In California, assess how the business determines whether a user is a minor, and be prepared to implement minor-specific notices and guardrails as well as the annual reporting requirement.
- Ensure that required disclosures do not compromise confidential information or intellectual property strategy, particularly in California, where protocols for addressing suicidal ideation and self-harm must be published.
AI compliance obligations vary widely by jurisdiction, underscoring the importance of closely monitoring regulatory developments. Beyond New York and California, Utah was among the first states to enact an AI disclosure law.

