As Congress this week considers imposing a 10-year ban on states passing or enforcing regulations on AI developers as part of the "big, beautiful" budget bill, California and New York took steps to place guardrails around at least the largest frontier AI models.
On Tuesday, California, home to some of the largest AI companies in the United States, released the final version of its Frontier AI Policy Report, prepared by a working group of academic experts appointed by Governor Gavin Newsom. The report does not endorse any particular law, but it lays out a blueprint for a comprehensive regulatory framework.
In New York, also an AI hub, Assemblymember Alex Bores (D-Manhattan) introduced the Responsible AI Safety and Education Act, known as the RAISE Act, the Albany Times Union reported. The bill would require frontier model developers to implement robust safety and security plans, among other measures.
The California report grows out of earlier state efforts to regulate AI technology. In 2024, state lawmakers passed SB 1047, which would have required developers of the largest AI models to submit safety and security plans to the attorney general. But Newsom vetoed the measure in September.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote in his veto message. “I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Instead, Newsom convened a working group led by Stanford’s Fei-Fei Li, a leading figure in AI research and a critic of SB 1047.
The report sets out core principles lawmakers should adopt when crafting future AI regulations. These include a commitment to evidence-based policymaking; a focus on transparency and disclosure of risks; establishing an adverse event reporting system; third-party verification of developers’ risk self-assessments; and evaluating models based on their capabilities, downstream impacts, and risk thresholds.
The New York bill has already drawn opposition from major technology companies, including Armonk, N.Y.-based IBM and Meta. The bill's sponsors counter that it applies only to the very largest AI companies, those spending at least $100 million to train frontier models.
If Congress pushes through federal preemption, the efforts in both California and New York could be rendered moot. But the proposal to impose a 10-year moratorium on state AI laws has split Republicans on Capitol Hill.
The House included the moratorium in its version of the budget bill. But even some Republicans who voted for the overall bill, such as Rep. Marjorie Taylor Greene (R-GA), now say they were unaware of the moratorium provision at the time and will vote against the bill if it returns from the Senate with the provision intact.
Meanwhile, the Senate kept the provision in its first draft of the bill but changed how it applies. Still, some Senate Republicans have balked at its inclusion.
Last week, a group of conservatives sent a letter to Senate leaders warning that Congress is still “actively investigating” AI and “does not fully understand the implications” of the technology.
Separately, Sen. Josh Hawley (R-MO) has expressed concern about AI’s economic impact and said he would consider introducing an amendment to strike the provision when the Senate takes up the bill, according to The Hill.
“I’m only for AI that’s good for people,” he told reporters. “I think we have to come up with ways to put people first.”
Sen. Ron Johnson (R-WI), an opponent of GOP leaders’ version of the budget bill, has also expressed skepticism about the moratorium provision.
“Personally, I don’t think we should be setting a federal standard now and prohibiting states from doing what they ought to be able to do in a federal republic,” he told the Capitol Hill publication. “Let the states experiment.”