SAN FRANCISCO – Senator Scott Wiener (D-San Francisco) has introduced SB 53, legislation to promote the responsible development of large-scale artificial intelligence (AI) systems. AI already offers incredible benefits and stands to revolutionize a vast range of key areas, from mental health diagnosis to wildfire response. At the same time, most AI researchers warn that the technology presents substantial risks, and luminaries like Nobel Prize winner Geoffrey Hinton are calling for immediate guardrails to manage those risks.
SB 53 balances the need to protect against the risks of AI development with the need to accelerate progress toward its benefits. The bill strengthens protections for whistleblowers, who are key to alerting the public to emerging AI risks. It also takes a first step toward establishing CalCompute, a research cluster to support researchers developing large-scale AI. The bill will help California secure its global leadership as states like New York establish their own AI research clusters.
When Governor Newsom vetoed SB 1047, Senator Wiener’s groundbreaking AI legislation from last year, he announced he would convene a group of academic and civil society leaders to develop safeguards for regulating artificial intelligence. The group is expected to publish a preliminary report in the coming weeks or months, with a final report due this summer.
“The biggest innovations come when our brightest minds have the resources they need and the freedom to speak their minds. SB 53 supports the development of large AI systems by providing low-cost compute to researchers and startups through CalCompute. At the same time, the bill provides strong protections for workers who need to sound the alarm if something goes wrong in the development of these highly advanced systems,” said Senator Wiener. “We are still in the early stages of the legislative process, and this bill may evolve as the process continues. I am closely monitoring the work of the Governor’s AI Working Group and developments in the AI field for changes that warrant a legislative response. California’s leadership on AI is more important than ever, as the new federal administration moves to shred the guardrails meant to keep Americans safe from the known, foreseeable risks that advanced AI systems present.”
California is home to many of the world’s largest AI companies and leading researchers. Yet other jurisdictions have been investing heavily in AI, hoping to secure leadership in the emerging space. Most recently, New York Governor Kathy Hochul announced more than $90 million in new funding for the state’s AI consortium, which aims to promote responsible AI innovation.
Research clusters are essential to fostering innovation. Developing today’s advanced AI models requires access to expensive computing resources, which puts advanced AI development out of reach for many startups and researchers. SB 53 helps correct that problem by providing low-cost compute and other resources to support responsible AI development.
“California has a history of making important public investments. From stem cell research to our stellar higher education system, we have led the way with public dollars to foster American entrepreneurship. We are proud to sponsor SB 53, a bill that will continue this important tradition by creating a public option for the cloud computing infrastructure needed for AI development. If we want to see AI used to promote the public interest and make people’s lives better, we must broaden access to the computing power that drives innovation.”
“With CalCompute, we are democratizing access to the computational resources that drive AI innovation,” said Sunny Gandhi, Vice President of Political Affairs at Encode. “California can cement its leadership by making transformative technologies both more accessible and more transparent.”
The Trump administration has rescinded President Biden’s executive order, which established guardrails for the most sophisticated AI systems. Employees at the National Institute of Standards and Technology (NIST), the federal body that sets safety standards for artificial intelligence, also face the prospect of large-scale layoffs.
AI researchers continue to speak out about the need for guardrails to manage the risks AI presents. Last month, AI godfather Professor Yoshua Bengio published the first international AI safety report, a 298-page document produced by 100 independent AI experts from around the world.
Whistleblowers, employees at the largest labs who speak out about the safety and security risks posed by the advanced AI systems under development, have emerged as an important window into the safety practices of large AI labs and the emergent capabilities of large AI models.
SB 53 recognizes the need for a multifaceted approach to fostering responsible AI innovation. To that end, the bill guarantees that frontier model lab employees will not face retaliation for reporting their good faith belief that a developer’s activities pose a critical risk, as defined in the bill, or that the developer has made false or misleading statements about its management of that risk. It also establishes a process for creating CalCompute, a public cloud computing cluster to support research into the safe and secure deployment of large-scale artificial intelligence (AI) models.
SB 53 is sponsored by Encode, Economic Security Action CA, and the Secure AI Project.
###