California’s recent wave of artificial intelligence bills is already shaping the national landscape, with other states moving to adopt similar rules. But they’re not copying the Golden State’s rules word for word, and some are testing the limits of their power in ways not tried here.
“Other states have not adopted California’s law in its entirety, but they are clearly borrowing important principles and emulating California’s efforts,” said Michael W.M. Manoukian, a partner at Lathrop GPM LLP in San Jose, who represents employers. “Once again, California is acting as a legal bellwether, pushing bold and carefully calibrated policies that are shaping the national conversation on AI.”
New York, Colorado, and Texas are among the states that have passed or proposed legislation similar to measures passed or debated in California. These developments come amid a continuing absence of federal legislation and the defeat of an amendment to the federal One Big Beautiful Bill Act that would have preempted state-level AI laws.
“States like New York are beginning to emulate California’s new AI law, with the RAISE Act requiring developers to create safety protocols, submit incident reports, and disclose risk mitigation plans, similar to California’s SB 53,” said Audan Downey, western regional state policy manager for the Computer and Communications Industries Association.
SB 53 was perhaps the most prominent of several AI bills signed by Gov. Gavin Newsom this year. The legislation, called the Frontier Artificial Intelligence Transparency Act, seeks to establish guardrails around cutting-edge AI systems and put safety protocols and whistleblower protections in place before they become widely available.
The law represents a compromise that Anthropic PBC, one of the leading AI companies, supported. SB 53 incorporates ideas proposed by an AI task force that Newsom convened after he vetoed a 2024 bill by the same author, Sen. Scott Wiener (D-San Francisco). In his message vetoing Wiener’s SB 1047, Newsom wrote that the bill imposed overly strict rules that could stifle the growth of California’s AI industry.
Newsom also signed SB 243, which creates safeguards for AI chatbots that interact with children, and AB 1043, which requires age verification to use some platforms.
“We hope more states will follow California’s lead,” Adeel Khan, founder and CEO of MagicSchool, which provides AI literacy tools and custom educational chatbots, said in an emailed response. “Illinois, Nevada, and Utah have already restricted the use of AI chatbots as mental health substitutes, and California has set an important precedent with safety standards for these systems. States that act quickly and thoughtfully have an opportunity to define what responsible AI looks like and potentially influence how federal guidelines are developed.”
Newsom has repeatedly warned that he does not want to alienate a lucrative industry centered in California. The governor has frequently said that 32 of the top 50 AI companies are headquartered in the state.
But attorney Patricia Blum said other states are taking different approaches, some of which may be less burdensome for the industry.
“They’re probably a little ahead of us in Colorado,” said Blum, a partner at the law firm Snell & Wilmer in Los Angeles. “I’m not sure we’re a leading jurisdiction on this issue. For example, in Colorado, the attorney general’s office maintains a public list of universal opt-out mechanisms.”
By contrast, California tends to place the burden on technology companies, Blum said. Among the AI laws Newsom signed, for example, is AB 566, which requires internet browsers to include clear opt-out methods for data collection. Worries about data collection long predate the current debate over generative AI, but the bill’s proponents argued that AI makes it easier for companies to collect people’s personal data and also increases the value of that data.
Blum said the state’s aggressive stance on technology laws has opened the door to potential legal challenges arguing that such measures exceed the state’s jurisdiction.
“California is telling businesses that they need to implement this because most of the web browsers are in California, so technically it would apply to consumers across the country, not just California,” Blum said.
The possibility of litigation may have influenced Newsom’s decision to veto some bills. These include AB 1064, a high-profile bill on ethical AI development for children, which Newsom said was so strict in prohibiting certain types of content that it could effectively bar children from using AI tools. The bill was introduced by Assemblymember Rebecca Bauer-Kahan (D-Orinda), a frequent adversary of parts of the tech industry whose bills have been a major lobbying target.
Colorado also passed legislation that directly affects how businesses must operate. The state’s 2024 AI Act prohibits discriminatory uses of AI, but lawmakers this year delayed its implementation until next summer.
New York state is looking inward, with lawmakers focused on transparency in the government’s use of AI, including in determining benefits.
Even regulation-wary Texas is getting in on the act. In June, Gov. Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act. The law, better known as TRAIGA, establishes an Artificial Intelligence Council and places significant limits on the use of AI by businesses and governments. But it also centralizes enforcement in the attorney general’s office, without the private right of action contained in California’s SB 243.
AI regulation is a bipartisan issue, although the approach taken in red states may differ. This summer, the U.S. Senate voted 99-1 to strip from the One Big Beautiful Bill Act an AI enforcement moratorium that would have suspended most state-level AI laws for five years.
Many of these red-state bills are based on ideas previously proposed in California, Manoukian said, although their authors may not advertise the connection.
“Tennessee’s ELVIS Act, which addresses unauthorized use of AI-generated sounds and images, is similar to California’s proposed digital likeness protections,” he said. “Wisconsin and Texas have also enacted election-related AI disclosure laws that mirror California’s early focus on transparency in political communications and regulation of deepfakes.”