For immediate release
December 11, 2024
Media Contact: Chase Hardin, chase@futureoflife.org
+1 (623)986-0161
Leading AI experts in external safety review say major AI companies have 'significant gaps' in safety measures
CAMPBELL, Calif. – Today, the Future of Life Institute (FLI) released its 2024 AI Safety Index, in which a panel of leading AI and governance experts evaluated the safety practices of six prominent AI companies: Anthropic, Google DeepMind, Meta, OpenAI, x.AI, and Zhipu AI. The independent panel graded each company in six categories: risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication.
The review panel found that while some companies demonstrated commendable practices in certain areas, there were significant disparities in risk management between companies. All flagship models were found to be vulnerable to adversarial attacks, and despite the companies' explicit ambitions to develop systems that match or exceed human intelligence, none has a credible strategy for ensuring that such systems remain safe and under human control.
“The findings of the AI Safety Index project suggest that although there is a lot of activity at AI companies that goes under the heading of ‘safety,’ it is not yet very effective,” said panelist Stuart Russell, professor of computer science at the University of California, Berkeley. “In particular, none of the current activity provides any kind of quantitative guarantee of safety, and it seems impossible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data. And it is only going to get harder as these AI systems get bigger. In other words, it is possible that the current direction of the technology can never support the necessary safety guarantees, in which case it is really a dead end.”
The final report can be viewed here.
“Evaluation efforts like this index are critical because they provide valuable insight into the safety practices of leading AI companies. They are an essential step in holding companies accountable for their safety commitments, highlighting emerging best practices and encouraging competitors to adopt more responsible approaches,” said Yoshua Bengio, full professor at the Université de Montréal, founder and scientific director of Mila – Quebec AI Institute, and co-recipient of the 2018 A.M. Turing Award.
Grades were assigned on the basis of publicly available information and the companies’ responses to a survey conducted by FLI. The review found that ongoing competitive pressures have led companies to dismiss or sidestep questions about the risks posed by this technology, resulting in large gaps in safety measures and raising serious concerns about the need for improved accountability.
“We launched the Safety Index to make the state of AI safety at these companies clear to the public,” said FLI President Max Tegmark, a professor who conducts AI research at the Massachusetts Institute of Technology. “The reviewers have decades of combined experience in AI and risk assessment, so when they speak about the safety of AI, we should pay close attention to what they say.”
The review panel:
Yoshua Bengio, professor at the Université de Montréal, founder of Mila – Quebec AI Institute, and recipient of the 2018 A.M. Turing Award.
Atoosa Kasirzadeh, assistant professor at Carnegie Mellon University and 2024 Schmidt Sciences AI2050 Fellow.
David Krueger, assistant professor at the Université de Montréal and a core member of Mila and the Center for Human-Compatible AI.
Tegan Maharaj, assistant professor at HEC Montréal and core academic member at Mila, where she leads the ERRATA Lab on responsible AI.
Jessica Newman, director of the AI Security Initiative at the University of California, Berkeley, and co-director of the UC Berkeley AI Policy Hub.
Sneha Revanur, founder of Encode Justice, a youth AI advocacy group, and a Forbes 30 Under 30 honoree.
Stuart Russell, professor of computer science at the University of California, Berkeley, where he directs the Center for Human-Compatible AI. He is co-author of the standard AI textbook used by over 1,500 universities in 135 countries.
The Future of Life Institute is a global nonprofit organization working to steer transformative technologies toward benefiting life and away from extreme large-scale risks. To learn more about our mission and our work, visit www.futureoflife.org.