Former staff claims greed to betray the safety of AI

By versatileai | June 20, 2025 | 4 Mins Read

The “OpenAI Files” report gathers the voices of concerned former staff, who claim that the world’s most prominent AI lab is betraying safety for profit. What began as a noble quest to ensure AI benefits all of humanity is now teetering on the edge of becoming just another corporate giant, chasing enormous profits while leaving safety and ethics in the dust.

At the heart of it all is a plan to tear up the original rulebook. When OpenAI began, it made a crucial promise: it put a cap on how much money investors could make. It was a legal guarantee that if the company succeeded in creating world-changing AI, the vast profits would flow to humanity, not just a handful of billionaires. Now, that promise is on the verge of being erased to satisfy investors who want unlimited returns.

For the people who built OpenAI, this pivot away from AI safety feels like a profound betrayal. “The non-profit mission was a promise to do the right thing when the stakes got high,” says former staff member Carroll Wainwright. “Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.”

A deepening crisis of trust

Many of these deep concerns centre on CEO Sam Altman, and the worries are nothing new. Reports suggest that even at his previous company, senior colleagues tried to have him removed for what they called “deceptive and chaotic” behaviour.

That same sense of distrust has followed him. Ilya Sutskever, the company’s co-founder who worked alongside Altman for years and has since launched his own startup, came to a chilling conclusion: “I don’t think Sam is the guy who should have the finger on the button for AGI.” He felt Altman was dishonest and created chaos, a frightening combination for someone who could end up in charge of our collective future.

Former CTO Mira Murati felt just as uneasy. “I don’t feel comfortable about Sam leading us to AGI,” she said. She described a toxic pattern in which Altman would tell people what they wanted to hear and then undermine them if they got in his way. It suggests manipulation that former OpenAI board member Tasha McCauley says “should be unacceptable” when the stakes of AI safety are this high.

This crisis of trust has had real-world consequences. Insiders say the culture at OpenAI has shifted, with critical work on AI safety taking a back seat to releasing “shiny products”. Jan Leike, who led the team responsible for long-term safety, said they were “sailing against the wind”, struggling to get the resources they needed to do their vital research.

Another former employee, William Saunders, even gave chilling testimony to the US Senate, revealing that for long periods security was so weak that hundreds of engineers could have stolen the company’s most advanced AI, including GPT-4.

A desperate plea to prioritise AI safety at OpenAI

But those who have left aren’t just walking away. They have laid out a roadmap to pull OpenAI back from the brink, a last-ditch effort to save the original mission.

They are calling for the company’s non-profit arm to be given real power again, including an iron-clad veto over safety decisions. They are demanding clear, honest leadership, which includes a new and thorough investigation into Sam Altman’s conduct.

They want real, independent oversight, so OpenAI can’t just mark its own homework on AI safety. And they are pleading for a culture where people can speak up about their concerns without fearing for their jobs or their savings.

Finally, they insist that OpenAI stick to its original financial promise: the profit caps must stay. The goal must be public benefit, not unlimited private wealth.

This isn’t just internal drama at a Silicon Valley company. OpenAI is building technology that could reshape our world in ways we can barely imagine. The question its former employees are forcing us all to ask is simple but profound: who do we trust to build our future?

As former board member Helen Toner warned from her own experience, “internal guardrails are fragile when money is on the line”.

And right now, the people who know OpenAI best are telling us those safety guardrails have broken.

See also: AI adoption matures, but deployment hurdles remain

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. This comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Check out other upcoming Enterprise Technology events and webinars with TechForge here.
