Versa AI hub
AI Legislation


By versatileai | February 20, 2026

Anthropic CEO Dario Amodei doesn’t think he should be in charge of the guardrails surrounding AI.

In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said that AI should be more tightly regulated and that decisions about the future of the technology should not be left solely to the heads of big tech companies.

“I think it’s very uncomfortable that these decisions are being made by a small number of companies, by a small number of people,” Amodei said. “And this is one of the reasons I have always advocated for responsible and thoughtful regulation of technology.”

“Who chose you and Sam Altman?” Cooper asked.

“No one. Honestly, no one,” Amodei replied.

Anthropic has adopted a philosophy of being transparent about the limitations and risks associated with the development of AI, he added. Prior to publishing the interview, the company announced that it had thwarted “the first documented case of a large-scale AI cyberattack carried out without substantial human intervention.”

Anthropic announced last week that it donated $20 million to Public First Action, a super PAC focused on AI safety and regulation that is in direct opposition to the super PAC backed by investors in rival OpenAI.

“AI safety remains our highest level of focus,” Amodei told Fortune in a January cover story. “Companies value trust and reliability,” he said.

No federal regulations currently govern AI or set technology safety requirements. With all 50 states introducing AI legislation this year and 38 adopting or enacting transparency and safety measures, tech industry experts are calling on AI companies to approach cybersecurity with a sense of urgency.

Early last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned that the first cybersecurity attacks carried out by AI agents would occur within the next 12 to 18 months. Anthropic’s disclosure of the thwarted attack came several months ahead of Mandia’s timeline.

Amodei outlined the short-, medium-, and long-term risks of unrestricted AI. In the short term, the technology amplifies bias and misinformation, as it already does. In the medium term, its growing scientific and engineering knowledge could be used to generate harmful information. In the long term, it poses an existential threat: systems could become so autonomous that they eliminate human agency and lock humans out entirely.

The concerns echo those of the “godfather of AI,” Geoffrey Hinton, who has warned that AI could gain the ability to outwit and control humans, perhaps within the next decade.

The need for greater oversight and safeguards for AI was at the core of Anthropic’s 2021 founding. Amodei previously served as vice president of research at Sam Altman’s OpenAI, which he left over disagreements about AI safety. (So far, Amodei’s efforts to compete with Altman appear to be working: Anthropic announced this month that it is valued at $380 billion, while OpenAI’s valuation is an estimated $500 billion.)

“There was a group within OpenAI that started with the creation of GPT-2 and GPT-3 and had a very strong focus on two things,” Amodei told Fortune in 2023. “One was the idea that if we put more and more compute into these models, they would get better and better, and there was almost no end to this. And the second was the idea that we needed something more than just scaling up the models; it was tuning or safety.”

Anthropic’s commitment to transparency

As Anthropic continues to expand its data center investments, it has unveiled some of its efforts to address AI’s shortcomings and threats. In its May 2025 safety report, Anthropic disclosed that some versions of its Opus models made threats, including threatening to expose an engineer’s extramarital affair, to avoid being shut down. The company also said that when given harmful prompts, such as how to plan a terrorist attack, the model complied with the dangerous request but later corrected itself.

Last November, the company said in a blog post that its chatbot Claude received a 94% political fairness rating, outperforming or on par with competitors in terms of neutrality.

In addition to Anthropic’s own research efforts to combat technology corruption, Amodei called for increased legislative efforts to address AI risks. In a June 2025 New York Times op-ed, he criticized the Senate’s decision to include a 10-year moratorium on state regulation of AI in President Donald Trump’s policy bill.

“AI is progressing at a dizzying pace,” Amodei said. “I believe these systems can radically change the world within two years. In 10 years, all bets are off.”

Criticism of Anthropic

Anthropic has drawn criticism even as it points out its own mistakes and works to address them. When Anthropic sounded the alarm about AI-powered cyberattacks, Meta’s then-chief AI scientist Yann LeCun said the warning was a ploy to manipulate legislators into restricting the use of open source models.

In response to a post by Connecticut Sen. Chris Murphy expressing concern about the attack, LeCun said in an X post that the senator was “being fooled by people who want regulatory capture.” “They scare everyone with questionable research so that open source models will be regulated and cease to exist.”

Some say Anthropic’s strategy amounts to “safety theater”: good branding that doesn’t actually build safety measures into the technology.

Even some Anthropic employees appear to have doubts about the company’s ability to regulate itself. Early last week, Mrinank Sharma, an AI safety researcher at Anthropic, announced that he was resigning, saying, “The world is at risk.”

“During my time here, I have seen again and again how difficult it is to let our values truly govern our actions,” Sharma said in his resignation letter. “This is something I have seen within myself, within organizations where we are constantly under pressure to set aside what matters most, and in wider society as a whole.”

Anthropic did not immediately respond to Fortune’s request for comment.

Amodei denied to Cooper that Anthropic engages in “safety theater,” but acknowledged on last week’s episode of the Dwarkesh Podcast that the company sometimes struggles to balance safety and profit.

“We’re under incredible commercial pressure and it’s even more difficult for ourselves because I think we have more safety measures in place than other companies,” he said.

A version of this article was published on Fortune.com on November 17, 2025.

© 2026 Versa AI Hub. All Rights Reserved.