Versa AI hub
AI Legislation

Fei-Fei Li co-authored report says AI regulations need to consider future risks

March 20, 2025

A new report co-authored by artificial intelligence pioneer Fei-Fei Li encourages lawmakers to anticipate future risks that have not yet emerged when creating regulations governing how the technology is used.

The 41-page report by the Joint California Policy Working Group on Frontier AI Models comes after California Governor Gavin Newsom vetoed the state's original AI safety bill, SB 1047. He said last year that lawmakers need a broader assessment of AI risks before they try to craft better legislation.

Li (pictured) co-authored the report with Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace, and Jennifer Tour Chayes, Dean of the College of Computing at the University of California, Berkeley. In it, they highlight the need for regulations that ensure transparency into the so-called "frontier models" built by companies such as OpenAI, Google LLC, and Anthropic PBC.

They also urge lawmakers to consider requiring AI developers to publish information such as data collection methods, security measures, and safety test results. Additionally, the report highlights the need for stricter standards around third-party assessments of AI safety and corporate policies. It also recommends that whistleblowers inside AI companies be protected.

The report was reviewed by numerous AI industry stakeholders before it was published, including AI safety advocate Yoshua Bengio and Databricks Inc. co-founder Ion Stoica.

One section of the report points out that there is currently a "conclusive level of evidence" regarding the possibility of AI being used in cyberattacks and in the creation of biological weapons. AI policy, the authors write, therefore needs to address not only existing risks but also future risks that may arise if sufficient protective measures are not in place.

They use an analogy to underscore this point, noting that one does not need to see a nuclear weapon explode to predict the widespread harm it would cause. "If those who speculate about the most extreme risks are right, and we are uncertain whether they will be, then the stakes and costs of inaction on frontier AI at this moment are extremely high," the report states.

Given these unknowns, the co-authors say the government should pursue a two-pronged strategy on AI transparency built around the principle of "trust but verify." As part of this, AI developers and their employees should have legal avenues to report new developments that could pose a safety risk, without the threat of legal action.

It is important to note that the current report is an interim version; the final report will not be published until June. The report does not endorse any particular legislation, but the safety concerns it highlights have been well received by experts.

For example, Dean Ball, an AI researcher at George Mason University who criticized the SB 1047 bill and welcomed its veto, described the report as a "promising step" for the industry. Meanwhile, California Sen. Scott Wiener, who originally introduced SB 1047, noted that the report continues the "urgent AI governance conversation" first raised by his vetoed legislation.

Photo: Steve Jurvetson/Flickr

