AI Legislation

Fei-Fei Li co-authored report says AI regulations need to consider future risks

March 20, 2025

A new report co-authored by artificial intelligence pioneer Fei-Fei Li encourages lawmakers to anticipate future risks that have not yet materialized when crafting regulations to govern how the technology is used.

The 41-page report by the Joint California Policy Working Group on Frontier AI Models comes after California Governor Gavin Newsom vetoed the state's original AI safety bill, SB 1047. In vetoing the bill last year, he said lawmakers needed a broader assessment of AI risks before attempting to craft better laws.

Li (pictured) co-authored the report with Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, and Jennifer Tour Chayes, dean of the University of California, Berkeley's College of Computing, Data Science, and Society. In it, they highlight the need for regulations that ensure transparency into the so-called "frontier models" built by companies such as OpenAI, Google LLC and Anthropic PBC.

They also urge lawmakers to consider requiring AI developers to publish information such as their data collection methods, security measures and safety test results. In addition, the report calls for stricter standards around third-party assessments of AI safety and corporate policies, and recommends that whistleblowers at AI companies be given legal protection.

The report was reviewed by numerous AI industry stakeholders before publication, including AI safety advocate Yoshua Bengio and Databricks Inc. co-founder Ion Stoica.

One section of the report points out that there is currently an "inconclusive level of evidence" on the potential for AI to be used in cyberattacks or the creation of biological weapons. The authors therefore write that AI policy needs to address not only existing risks, but also future risks that may arise if sufficient protective measures are not put in place.

They use an analogy to underline this point, noting that no one needs to see a nuclear weapon explode to predict the extensive harm it would cause. "If those who speculate about the most extreme risks are right, and we are uncertain whether they will be, then the stakes and costs of inaction on frontier AI at this moment are extremely high," the report states.

Given these unknowns, the co-authors say the government should pursue a two-pronged strategy to expand AI transparency, built around the concept of "trust but verify." As part of this, AI developers and their employees should have legal avenues for reporting new developments that could pose a safety risk, without the threat of legal action.

It is important to note that the current report is an interim version; the final report will not be published until June. The report does not endorse any particular legislation, but the safety concerns it highlights have been well received by experts.

For example, Dean Ball, an AI researcher at George Mason University who criticized SB 1047 and welcomed its veto, called the report a "promising step" for the industry. Meanwhile, California state Sen. Scott Wiener, who originally introduced SB 1047, noted that the report continues the "urgent AI governance conversation" first raised in his vetoed legislation.

Photo: Steve Jurvetson/Flickr


© 2025 Versa AI Hub. All Rights Reserved.