Business

Safety report card ranks AI companies’ efforts to protect humanity

By versatileai | December 5, 2025 | 5 Mins Read

Are artificial intelligence companies protecting humanity from the potential harms of AI? A new report card says don’t bet on it.

As AI plays an ever-larger role in how humans interact with technology, its potential harms are becoming more apparent. Some people who turned to AI-powered chatbots for counseling have died by suicide, and others have used AI for cyberattacks. There are also future risks, such as AI being used to manufacture weapons or overthrow governments.

But there isn’t enough incentive for AI companies to prioritize keeping humanity safe, as reflected in the AI Safety Index released Wednesday by the Silicon Valley-based nonprofit Future of Life Institute, which aims to steer AI in a safer direction and limit existential risks to humanity.

“Because these industries are the only industries in the U.S. that are developing powerful technologies that are completely unregulated, they have no incentive to prioritize safety and are in a race to the bottom with each other,” Max Tegmark, director of the institute and a professor at the Massachusetts Institute of Technology (MIT), said in an interview.

The highest overall rating was a C+, which went to two San Francisco AI companies: OpenAI, which develops ChatGPT, and Anthropic, known for its AI chatbot Claude. Google’s AI division, Google DeepMind, received a C.

Further down the rankings, Menlo Park-based Meta, Facebook’s parent company, and Elon Musk’s Palo Alto-based xAI each received a D, as did the Chinese companies Z.ai and DeepSeek. Alibaba Cloud received the lowest rating, a D-.

The overall assessment of companies was based on 35 indicators in six categories, including existential security, risk assessment, and information sharing. The index gathered evidence based on publicly available materials and responses from companies through surveys. The scoring was done by eight artificial intelligence experts, a group that included academics and heads of AI organizations.

All companies included in the index performed below average in the existential safety category, which takes into account internal monitoring, control interventions, and existential safety strategies.

“While companies are accelerating their AGI and superintelligence ambitions, no one has demonstrated a reliable plan to prevent catastrophic misuse or loss of control,” according to the institute’s AI Safety Index report. (AGI is an acronym for artificial general intelligence.)

Both Google DeepMind and OpenAI said they are investing in safety efforts.

“Safety is at the core of building and deploying AI,” OpenAI said in a statement. “We invest heavily in cutting-edge safety research, build strong safeguards into our systems, and rigorously test our models both internally and with independent experts. We share our safety frameworks, assessments, and research to help advance industry standards, and continually strengthen our protections for future capabilities.”

Google DeepMind said in a statement that it takes a “rigorous, science-driven approach to AI safety.”

“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating serious risks with powerful Frontier AI models before they materialize,” Google DeepMind said. “As our models become more sophisticated, we continue to innovate safety and governance to align with our capabilities.”

The Future of Life Institute’s report said xAI and Meta, “despite having risk management frameworks, lack a commitment to monitoring and control and have not presented evidence of investing more than minimally in safety research.” Other companies, such as DeepSeek, Z.ai and Alibaba Cloud, lack public documentation of their existential safety strategies, the institute said.

Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not respond to requests for comment.

xAI responded, “Legacy media lies.” A lawyer representing Musk did not immediately respond to a request for additional comment.

Tegmark said Musk is also an advisor to the Future of Life Institute and has provided funding to the nonprofit in the past, but was not involved in the AI Safety Index.

Tegmark said he is concerned that without sufficient regulation of the AI industry, the technology could help terrorists produce biological weapons, enable more effective manipulation of people than is possible today, and even destabilize governments.

“Yes, we have a big problem and things are going in a bad direction, but I want to emphasize how easily this can be fixed,” Tegmark said. “All we need is binding safety standards for AI companies.”

Lawmakers have sought to tighten oversight of AI companies, but some bills have faced pushback from technology lobbying groups, which say tighter regulations could slow innovation and push companies to move elsewhere.

However, several laws aimed at better monitoring the safety standards of AI companies have been enacted, including California’s SB 53, signed by Gov. Gavin Newsom in September. It requires companies to share safety and security protocols and report incidents such as cyberattacks to the state. Tegmark said the new law is a step in the right direction, but that more needs to be done.

Rob Engdahl, principal analyst at the advisory services firm Engdahl Group, said the AI Safety Index is an interesting way to approach the fundamental problem that AI is not well regulated in the U.S., but that challenges remain.

“It’s not clear to me that the United States and the current administration are capable of developing elaborate regulations at this point, which means that these regulations could ultimately do more harm than good,” Engdahl said. “It’s also not clear that anyone has figured out how to put teeth into the regulations to ensure compliance.”
