California Gov. Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, also known as SB 53, into law on Monday afternoon.
The first-in-the-nation law applies new AI-specific regulations to the industry's top players, requiring them to meet transparency requirements and report AI-related safety incidents.
Although several states have recently passed legislation regulating aspects of AI, SB 53 is the first to focus explicitly on the safety of cutting-edge AI models.
In a statement, Newsom said: “California has proven that it can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This law strikes that balance.”
With 32 of the world’s top 50 AI companies based in California, the law could have an impact well beyond the state. In a signing message to the state Senate, Newsom wrote that California’s “position as a global leader in technology offers a unique opportunity to provide a blueprint for balanced AI policy, especially in the absence of a comprehensive federal AI policy framework.”
The law requires major AI companies to publish public documents detailing how they follow best practices in building safe AI systems. It also creates a pathway for reporting serious AI-related incidents to the California Office of Emergency Services and protects whistleblowers who raise concerns about health and safety risks.
The law is backed by civil penalties for violations, enforced by the state attorney general’s office.
In a statement, Sen. Scott Wiener, the Democrat who authored the bill, said, “With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk.”
SB 53 was signed just a year after Newsom vetoed a similar bill from Wiener. That earlier bill, SB 1047, sought to hold major AI companies broadly liable in the event of harmful incidents.
Wiener previously told NBC News that while SB 1047 was more of a liability-focused bill, SB 53 is focused on transparency.
SB 53’s passage follows a recent surge in lobbying by major tech companies seeking to limit the spread and impact of AI regulations. Announcing a new super PAC on Friday to fight AI legislation, Meta’s vice president of public policy, Brian Rice, said the regulatory environment in Sacramento “could stifle innovation, block AI progress and put California’s technology leadership at risk.” Meta had previously expressed soft support for the measure.
SB 53 drew intense criticism from industry groups such as the Chamber of Progress and the Consumer Technology Association. But Anthropic, a major AI company, supported it.
While several companies have expressed support for the bill, they have made clear they would prefer federal legislation over a patchwork of inconsistent state-by-state regulations.
In a statement Monday afternoon, Jack Clark, Anthropic’s co-founder and head of policy, said: “While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation.”
In a statement Monday afternoon, OpenAI spokesperson Jamie Radice said the company was pleased that California had created a critical pathway toward harmonization with the federal government, calling it the most effective approach to AI safety and one that allows federal and state governments to work together on the safe deployment of AI technology.
On Monday morning, Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., proposed a federal bill that would require major AI developers to submit advanced AI systems for evaluation and to collect data on the potential for harmful AI incidents.
As written, the federal bill would create an advanced artificial intelligence evaluation program within the Department of Energy. Participation in the program would be mandatory, as are the transparency and reporting requirements of SB 53.
World leaders are increasingly pursuing AI regulation in the face of growing risks from advanced AI systems.
In remarks to the U.N. General Assembly last week, President Donald Trump said AI could be “one of the greatest things ever” but could also be dangerous, even as it offers immense uses and enormous benefits.
Ukrainian President Volodymyr Zelenskyy, who addressed the United Nations a day after Trump, said, “We are now living through the most destructive arms race in human history, because this time it includes artificial intelligence.”

