SB 53 will establish world-leading AI safety disclosure requirements, create public cloud computing clusters to promote democratized AI innovation, and enact whistleblower protections in leading AI labs.
SACRAMENTO – Senator Scott Wiener's (D-San Francisco) Senate Bill 53 advances to final votes in the Assembly and Senate this week after a final round of amendments. The bill's language is the result of discussions between the Governor's administration and stakeholders. The final bill expands disclosure obligations to smaller developers and retains its central provisions: strong, world-leading safety disclosure requirements, public investment in AI infrastructure for startups and researchers, and whistleblower protections at major AI labs.
“The final version of SB 53 ensures that California continues to lead not only on AI innovation, but also on responsible practices that ensure innovation is safe and secure,” Senator Wiener said. “We are grateful to the administration for convening the Joint AI Working Group and for discussing how best to implement its recommendations in a way that is scientific and fair. We look forward to sending this world-leading bill to the Governor’s desk.”
Here are the changes to SB 53:
The bill’s requirements apply only to models trained with more than 10^26 FLOPS of computing power. The sole exception is the whistleblower protections, which apply to employees working on models of any size. Frontier models trained by companies below the revenue threshold carry a new, lighter obligation to disclose basic, high-level safety details, while models trained by companies above the revenue threshold must make more detailed disclosures. Safety disclosures are streamlined and simplified. The Attorney General no longer has authority to issue regulations adjusting these definitions; instead, the California Department of Technology (CDT) will produce an annual report recommending changes to the Legislature. Developers must update their safety frameworks annually as needed. Penalties are capped at $1 million per violation. Rather than publicly reporting periodic risk assessments regarding internal use of their models, companies will send these reports confidentially to the Governor’s Office of Emergency Services (OES).
SB 53 includes a first-in-the-world requirement that companies disclose safety incidents involving dangerous or deceptive behavior by autonomous AI systems. For example, developers typically place controls on AI systems to prevent them from helping users carry out extremely dangerous tasks. If, during routine testing, a developer catches an AI system lying about how well those controls are working, and the deception significantly increases the risk of catastrophic harm, the developer must disclose the incident to the Office of Emergency Services under SB 53.
SB 53 includes a provision called “CalCompute” that encourages AI development and advances a bold industrial strategy to democratize access to the most advanced AI models and tools. CalCompute is a public cloud computing cluster, housed at the University of California, that provides free and low-cost compute access to startups and academic researchers. CalCompute builds on the Senator’s recent work to strengthen California’s position in semiconductors and other advanced manufacturing and to protect democratic access to the internet, including his laws streamlining permitting for advanced manufacturing plants and the nation’s strongest net neutrality law.
SB 53 also establishes whistleblower protections for AI lab employees who disclose significant risks.
A few weeks ago, the US Senate voted 99-1 to strip from President Trump’s “Big Beautiful Bill” a provision that would have barred states from enacting AI regulations. SB 53 builds on that vote for accountability by increasing transparency.
As AI advances, benefits and risks increase
Recent advances in AI have delivered groundbreaking benefits across industries, from medical diagnostics to improved climate modeling and wildfire prediction. AI systems are revolutionizing education, boosting agricultural productivity, and helping solve complex scientific challenges.
However, the world’s most advanced AI companies and researchers also acknowledge that as models become more powerful, the risk of catastrophic harm increases. The Working Group Report states:
Evidence that foundation models contribute both to chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and to loss-of-control concerns has grown, even since the draft report was released in March 2025.
To address these risks, AI developers such as Meta, Google, OpenAI, and Anthropic have made voluntary commitments to conduct safety testing and establish robust safety and security protocols. Several California-based frontier AI developers have designed industry-leading safety practices, including safety assessments and cybersecurity protections. SB 53 codifies these voluntary commitments to establish a level playing field and ensure greater accountability across the industry.
Report background
Governor Newsom convened the Joint Policy Working Group on AI Frontier Models in September 2024, following his veto of Senator Wiener’s SB 1047, saying it would “help California develop workable guardrails for deploying GenAI, focusing on frontier models and their capabilities.”
The Working Group is led by experts including Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley.
On June 17, the Working Group released its final report. The report does not endorse specific legislation, but it promotes a “trust but verify” framework and calls for guardrails that reduce material risks while supporting continued innovation.
SB 53 balances AI risks and benefits
Consistent with the Working Group Report’s recommendations, SB 53:
- Establishes transparency requirements for large companies’ safety and security protocols and risk assessments. Companies must disclose their safety and security protocols and risk assessments in redacted form to protect their intellectual property.
- Requires companies to report critical safety incidents (model-enabled CBRN threats, major cyberattacks, or loss of model control) to the Governor’s OES within 15 days.
- Protects employees who disclose evidence of a critical risk or a violation of the act by AI developers.
Under SB 53, the Attorney General may seek civil penalties for violations of the law. SB 53 does not create any new liability for harms caused by AI systems.
SB 53 is sponsored by Encode AI, Economic Security California Action, and the Secure AI Project.
“SB 53 shows that safety and innovation are compatible,” said Sunny Gandhi, Vice President of Political Affairs at Encode AI, a co-sponsor of the bill. “By requiring transparency and accountability from the biggest AI developers, the bill demonstrates that California can lead on both responsible governance and cutting-edge innovation.”
“The Governor’s Working Group highlighted a simple principle: trust, but verify,” said Andrew Doris, Senior Policy Analyst at the Secure AI Project, a co-sponsor of the bill. “There is broad consensus that the biggest AI developers need to be transparent about their safety practices and report serious incidents. SB 53 turns that consensus into law, transforming expert recommendations into safeguards for Californians.”
“California can lead on AI once again. With big tech billionaires angling to write their own rules, passing SB 53 would make safety, transparency, and the public interest the priority,” said Teri Olle, Director of Economic Security California Action, a co-sponsor of the bill. “The timing of this bill could not be more important. As federal lawmakers abdicate their responsibility by proposing moratoriums on AI regulation, and as Silicon Valley leaders cooperate with the Trump administration, California can set the global standard for responsible AI innovation.”
###

