A new report co-authored by artificial intelligence pioneer Fei-Fei Li encourages lawmakers to anticipate future AI risks that have not yet been observed when crafting regulations to govern how the technology is used.
The 41-page report by the Joint California Policy Working Group on Frontier AI Models comes after California Governor Gavin Newsom vetoed the state’s original AI safety bill, SB 1047. He said last year that lawmakers needed a broader assessment of AI risks before attempting to craft better laws.
Li (pictured) co-authored the report with Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace, and Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at the University of California, Berkeley. In it, they highlight the need for regulations that ensure transparency into the so-called “frontier models” built by companies such as OpenAI, Google LLC and Anthropic PBC.
They also urge lawmakers to consider requiring AI developers to publicly disclose information such as their data collection methods, security measures and safety test results. Additionally, the report highlights the need for stricter standards for third-party assessments of AI safety and corporate policies, and it recommends that whistleblowers at AI companies be given protections.
The report was reviewed by numerous AI industry stakeholders before it was published, including AI safety advocate Yoshua Bengio and Databricks Inc. co-founder Ion Stoica.
One section of the report points out that there is currently an “inconclusive level of evidence” regarding AI’s potential to aid in cyberattacks or the creation of biological weapons. The co-authors therefore write that AI policy should address not only existing risks, but also future risks that may arise if sufficient safeguards are not put in place.
They use an analogy to underscore this point, noting that one does not need to see a nuclear weapon detonate to predict the widespread harm it could cause. “If those who speculate about the most extreme risks are right, and we are uncertain if they will be, then the stakes and costs for inaction on frontier AI at this current moment are extremely high,” the report states.
Given this uncertainty, the co-authors say the government should implement a two-pronged strategy to increase transparency into AI development, centered on the principle of “trust but verify.” As part of this, AI developers and their employees should be given legal avenues through which to report potential safety risks, without the threat of legal repercussions.
It’s important to note that the current report is an interim version; the final report is due to be published in June. The report does not endorse any specific legislation, but the safety concerns it highlights have been well received by experts.
For example, Dean Ball, an AI researcher at George Mason University who criticized the SB 1047 bill and welcomed its veto, described the report in a post as a “promising step” for the industry. Meanwhile, California Sen. Scott Wiener, who originally introduced SB 1047, noted that the report continues the “urgent AI governance conversation” first raised by his vetoed legislation.
Photo: Steve Jurvetson/Flickr