Versa AI hub
AI Legislation

California AI Policy Report warns of “irreversible harm”

By versatileai | June 17, 2025

AI could deliver transformative benefits, but without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," warns a new report commissioned by California Gov. Gavin Newsom.

"The opportunity to establish an effective AI governance framework may not remain open indefinitely," says the report, which was published on June 17. It cites new evidence that AI has helped users attempt to source nuclear-grade uranium, and that novices are on the cusp of being able to create biological threats.

The 53-page document comes from a working group convened by Governor Newsom in a state that has emerged as a central arena for AI lawmaking. With no comprehensive federal regulation on the horizon, state-level efforts to govern the technology carry outsized importance, especially in California, home to many of the world's top AI companies. Last year, California Sen. Scott Wiener sponsored SB 1047, which would have required large AI developers to implement stringent safety testing and mitigation measures on their systems; critics said it would curb innovation and hobble the open-source AI community. The bill passed both houses of the state legislature despite fierce industry opposition, but Governor Newsom ultimately vetoed it last September, saying it was not "the best approach to protecting the public."

Following that veto, Newsom launched the working group to "develop workable guardrails for deploying GenAI." The group was co-led by "AI godmother" Fei-Fei Li, a prominent opponent of SB 1047; Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, dean of the UC Berkeley College of Computing, Data Science, and Society. The working group assessed the progress of AI and the weaknesses of SB 1047, and sought feedback from more than 60 experts. "As the global epicenter of AI innovation, California is uniquely positioned to lead the way in unlocking the transformative potential of frontier AI," Li said in a statement. "But to realize this promise, we need thoughtful and responsible stewardship grounded in human-centered values, scientific rigor, and broad collaboration," she said.

"Since Governor Newsom vetoed SB 1047 last September, the capabilities of foundation models have continued to progress rapidly," the report states. The industry has shifted away from large language models that simply predict the next word in a stream of text and toward systems trained to solve complex problems, which benefit from "inference scaling." These advances could accelerate scientific research, but they could also amplify national-security risks by making it easier for bad actors to carry out cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released last month, which the company said could potentially help would-be terrorists create biological weapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists in a key evaluation.

In recent months, new evidence has emerged of AI's ability to strategically deceive: appearing aligned with its creators' goals during training while revealing other goals once deployed, and exploiting loopholes to achieve its objectives, the report says. "While currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss-of-control risks and possibly foreshadow future harm," the report states.

Congressional Republicans have proposed a 10-year moratorium on state AI regulation, citing concerns that a fragmented policy environment could hamper the nation's competitiveness, but the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach." Rather than advocating for any particular policy, the report outlines the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of SB 1047's more divisive provisions, such as the requirement for a "kill switch," a shutdown mechanism that would allow certain AI systems to be rapidly turned off, says Scott Singer, a visiting scholar at the Carnegie Endowment for International Peace and a lead writer of the report.

Instead, the approach focuses on increasing transparency, for example by legally protecting whistleblowers and establishing an incident-reporting system. The goal is to "reap the benefits of innovation" while, instead of erecting artificial barriers, "thinking about what we are learning about how the technology is behaving as we go," says Cuéllar, who co-led the report. The report stresses that this visibility is important not only for publicly deployed AI applications, but also for understanding how systems are tested and used inside AI companies, where concerning behaviors may first emerge.

"The fundamental approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms-control treaties that involves designing mechanisms to independently check compliance. That marks a departure from existing efforts, which rely on voluntary cooperation from companies, such as the agreement between OpenAI and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute). It is an approach that acknowledges the "substantial expertise inside industry," Singer says, but that also "underscores the importance of methods of independently verifying safety claims."
