OpenAI warns that its upcoming AI models could help create bioweapons

By versatileai | June 20, 2025 | 4 Mins Read

eWEEK content and product recommendations are editorially independent. We may earn money when you click on links to our partners. Learn more.

OpenAI’s next artificial intelligence models could carry a higher risk of being misused to help create biological weapons. The company acknowledges that AI models strong enough to make useful scientific or medical discoveries can also be used for harm.

“The same underlying capabilities that drive progress, such as inference over biological data, prediction of chemical reactions, and guidance for lab experiments, can also be misused to help people with minimal expertise recreate biological threats, or to help skilled actors create bioweapons,” the company wrote.

OpenAI expects future AI models to reach a “High” level of capability in biology, as measured by its own Preparedness Framework. That threshold means a model “can provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats.”

It is worth noting that OpenAI’s post about bioweapons came out a day after the company was awarded a $200 million contract with the US Department of Defense.

How OpenAI is working to mitigate these risks

Nevertheless, OpenAI says it will not release these AI models until it is satisfied that the risk has been mitigated. To shape its testing and mitigation process, the company is working with outside parties, including biosecurity experts, academic researchers, red teams that know how to probe AI vulnerabilities, the US Center for AI Standards and Innovation, and the UK AI Security Institute.

Such mitigations include the following (a minimal sketch of a screening layer in this spirit follows the list):

  • Training models to refuse harmful or dangerous prompts, or to respond to them safely.
  • Detection and blocking systems that flag suspicious bio-related activity.
  • AI systems and human reviewers that enforce usage policies, suspend violating accounts, and involve law enforcement when necessary.
  • Security controls such as access control and infrastructure hardening.
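To make the detect-and-block idea concrete, here is a minimal sketch of a prompt-screening layer built on OpenAI's public Moderation API. The screen_prompt helper and the keyword list are illustrative assumptions, not a description of OpenAI's internal bio classifiers.

```python
# Minimal prompt-screening sketch (illustrative only).
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder terms for a crude domain-specific check; a production system
# would use trained classifiers, not a keyword list.
SUSPICIOUS_BIO_TERMS = ["enhance transmissibility", "aerosolize a pathogen"]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked before reaching a model."""
    # Layer 1: OpenAI's general-purpose moderation endpoint.
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    if result.flagged:
        return True
    # Layer 2: an illustrative bio-specific keyword check.
    lowered = prompt.lower()
    return any(term in lowered for term in SUSPICIOUS_BIO_TERMS)

if __name__ == "__main__":
    # A benign question should pass the screen (expected output: False).
    print(screen_prompt("Explain how mRNA vaccines are manufactured."))
```

In a real deployment, flagged prompts would feed the human-review and account-enforcement pipeline described above rather than being silently dropped.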

OpenAI’s models aren’t the only ones raising biosecurity risks

The biosecurity risks of AI models are not a new concern. For example, in May 2024, 26 countries agreed to cooperate on developing risk thresholds for AI systems that could help create biological weapons.

In February 2025, the risk was highlighted in the UK-commissioned International AI Safety Report.

In April 2025, researchers tested leading AI models on the Virology Capabilities Test, a benchmark of expert-level knowledge of virology and lab protocols. They found that models like OpenAI’s GPT-4o outperformed most human virologists, raising concerns about bioweapons risk.
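For context, benchmarks like this are typically run as a simple evaluation loop: pose each question to the model and grade its answer against a key. The sketch below is hypothetical, since the actual test items are not public; the placeholder question and the grade helper are assumptions for illustration.

```python
# Hypothetical multiple-choice eval loop in the spirit of a virology benchmark.
# The question below is an invented placeholder, not a real test item.
from openai import OpenAI

client = OpenAI()

QUESTIONS = [
    # (prompt, correct letter) -- placeholder data only
    ("Which technique separates proteins by size? A) PCR B) SDS-PAGE C) ELISA", "B"),
]

def grade(model: str) -> float:
    """Return the model's accuracy over the question set."""
    correct = 0
    for prompt, answer in QUESTIONS:
        reply = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "Answer with a single letter."},
                {"role": "user", "content": prompt},
            ],
        )
        if reply.choices[0].message.content.strip().upper().startswith(answer):
            correct += 1
    return correct / len(QUESTIONS)

print(f"accuracy: {grade('gpt-4o'):.0%}")
```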

In May 2025, Anthropic confirmed that, given the model’s heightened risk, it had introduced security measures to prevent Claude Opus 4 from being misused to build biological and nuclear weapons. Still, a month before its release, Anthropic CEO Dario Amodei admitted that he cannot confidently design a system that prevents such harmful behavior until he truly understands how AI works.

This week, a group of AI experts released a report showing that evidence linking AI models to biological-weapon risk has grown significantly since March, as developers report “capability jumps” in these areas.

Laws are important for reducing this AI risk, but they are difficult to pass

AI experts largely agree that a strong, coordinated response is essential to protect the public from those who would use AI to develop biological weapons. In February, former Google CEO Eric Schmidt warned that rogue states such as North Korea, Iran, and Russia could seize this capability if safeguards are not in place.

The following month, Anthropic wrote to the White House Office of Science and Technology Policy, urging immediate action on AI security. Its own testing had revealed surprising improvements in Claude 3.7 Sonnet’s ability to support aspects of bioweapons development.

Unfortunately, passing AI legislation has proven difficult so far. This is mainly because of disagreement not only between safety advocates and tech companies, but also among politicians concerned that guardrails could hinder innovation and limit the economic benefits AI brings.

SB 1047, the California AI regulation bill aimed at preventing AI models from causing mass harm to humanity, including through bioweapons, would have been the strongest US regulation of generative AI. However, Governor Gavin Newsom vetoed the bill in September 2024.

Now, a Republican-backed budget bill moving through Congress would prohibit US states and localities from regulating AI for the next decade. Critics say that eliminating states’ enforcement measures will make it harder to hold harmful technology accountable, and will mainly benefit businesses that want to avoid regulatory scrutiny.

Read more about why experts say AI laws are lagging behind the pace of technological advancement in our analysis of California’s AI bill veto.
