Gov. Gavin Newsom on Monday signed the country’s first comprehensive law on the safety of artificial intelligence, putting California in the driver’s seat to regulate a rapidly growing industry that the federal government has yet to address.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement. “This legislation strikes that balance.”
Senate Bill 53, written by state Sen. Scott Wiener (D-San Francisco), establishes the Transparency in Frontier Artificial Intelligence Act, the nation’s most ambitious effort to regulate advanced AI systems. The law will roll out in stages beginning in January.
Newsom’s signature on SB 53 comes a year after he vetoed a more aggressive bill by Wiener that would have imposed steep penalties on bad AI actors and was opposed by many of Silicon Valley’s most powerful tech companies. In his veto message, Newsom convened a task force of AI experts, whose framework was used this year to craft SB 53 and other AI-related bills. The task force’s recommendations focused on transparency and risk mitigation rather than punishing companies.
Many of tech’s biggest players, including Meta, Alphabet, OpenAI and the trade group TechNet, lobbied against SB 53, saying they would prefer uniform rules at the federal level.
Collin McCune, head of government affairs at venture capital firm Andreessen Horowitz, said in a social media post that SB 53 contains thoughtful provisions but that its “biggest danger” is setting a precedent for more states, rather than Congress, to take the lead on AI regulation.
Though Congress has yet to take up the issue, President Donald Trump released an AI Action Plan this summer calling for a moratorium on state AI regulations.
Supporters of SB 53 said California has a responsibility to act in the absence of federal leadership on AI.
“With transformative technologies like AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and mitigate risk,” Wiener said in a statement. “With this legislation, California is stepping up once again as a global leader in both technological innovation and safety.”
San Francisco-based AI safety and research company Anthropic and tech safety advocates supported SB 53.
Anthropic co-founder and head of policy Jack Clark said in a statement that the new law will “develop practical safety measures that will create real accountability” for AI systems.
“While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation,” he said.
Sacha Haworth, executive director of Tech Oversight California, called the signing a “key victory” for holding Big Tech CEOs accountable while protecting whistleblowers.
While businesses based in other states may not be as directly affected, supporters said the bill will have both national and global impact.
In a May report, the governor’s office noted that 32 of the world’s top 50 AI companies call California home. The new law will not only create a framework that could serve as a national model but also preempt California’s cities and counties from establishing conflicting rules.
The law targets “frontier models,” which Wiener and SB 53’s supporters say pose risks including enabling cyberattacks, creating dangerous weapons, and operating beyond human control. The law also applies to “large frontier developers,” defined as AI companies with annual revenue of more than $500 million.
Under the law, large frontier developers must create and maintain robust AI safety frameworks that incorporate national and international best practices. The frameworks must be published online and updated annually. Before releasing a frontier model or making significant changes to one, companies must publish a transparency report explaining the model’s capabilities, intended uses, limitations, and the results of risk assessments.
The bill assigns the primary oversight role to the California Governor’s Office of Emergency Services (OES). Developers must regularly submit summaries of catastrophic-risk assessments to the agency and report critical safety incidents to it. Starting in 2027, the OES will publish anonymized annual summaries of safety incidents.
Violations, including false statements and failures to report, can bring civil penalties of up to $1 million per violation.
The law includes whistleblower protections that shield employees from retaliation if they disclose safety concerns or violations to state or federal authorities. Large developers must provide an internal channel for employees to report concerns anonymously and must give updates on how those concerns are being addressed.
Beyond oversight of private companies, the bill creates a consortium to design CalCompute, a state-backed public cloud platform intended to expand access to powerful computing resources for universities, researchers and public-interest projects. The University of California system will be given priority in managing the consortium, which is tasked with presenting a framework by 2027.
In addition to putting California in the driver’s seat on AI regulation, the new law gives Newsom, a potential 2028 presidential candidate, a signature topic for the campaign trail.

