OpenAI has warned that its next generation of artificial intelligence models could carry a higher risk of being misused to help create biological weapons. The company acknowledges that AI models powerful enough to make useful scientific or medical discoveries can also be turned to harmful ends.
“The same underlying capabilities driving this progress, such as reasoning over biological data, predicting chemical reactions, and guiding lab experiments, could also be misused to help people with minimal expertise recreate biological threats or assist highly skilled actors in creating bioweapons,” the company wrote.
OpenAI expects that future AI models will reach a “High” level of capability in biology, as measured by the company’s own Preparedness Framework. That threshold describes a model that could provide meaningful assistance to novice actors with only basic relevant training, enabling them to create biological or chemical threats.
Notably, OpenAI’s post about bioweapons risk was published a day after the company accepted a $200 million contract with the U.S. Department of Defense.
How OpenAI is working to mitigate these risks
OpenAI says it will not release these AI models until it is satisfied that the risk has been mitigated. To shape its testing and mitigation process, the company is working with biosecurity experts, academic researchers, red teamers who know how to probe AI vulnerabilities, and government bodies including the US Center for AI Standards and Innovation and the UK AI Security Institute.
These mitigations include training models to refuse harmful or dangerous prompts or to respond to them safely; detection and blocking systems that flag suspicious bio-related activity; AI systems and human reviewers that enforce usage policies, suspend violating accounts, and involve law enforcement when necessary; and security controls such as access restrictions and infrastructure hardening.
OpenAI’s models aren’t the only ones raising biosecurity concerns
The intersection of AI models and biology is not a new concern. In May 2024, for example, 26 countries agreed to cooperate on developing risk thresholds for AI systems that could help create biological weapons.
In February 2025, the risk was highlighted again in the UK-backed International AI Safety Report.
In April 2025, researchers evaluated leading AI models on the Virology Capabilities Test, a benchmark of expert-level knowledge of virology and lab protocols. They found that models like OpenAI’s GPT-4o outperformed most human virologists, raising concerns about bioweapon risks.
In May 2025, Anthropic confirmed that, given the model’s elevated risk, it had implemented safeguards to prevent Claude Opus 4 from being used to help build biological and nuclear weapons. Still, a month before the model’s release, Anthropic CEO Dario Amodei admitted that he could not confidently design a system that prevents such harmful behavior until he truly understands how AI works.
This week, a group of AI experts released a report showing that evidence linking AI models to bioweapon risk has increased significantly since March, as developers report “capability jumps” in these areas.
Legislation is key to reducing these AI risks, but it has been difficult to pass
AI experts largely agree that a strong, coordinated response is essential to protecting the public from actors who might use AI to develop biological weapons. In February, former Google CEO Eric Schmidt warned that rogue states such as North Korea, Iran, and Russia could seize this capability if safeguards are not in place.
The following month, Anthropic wrote to the White House Office of Science and Technology Policy, urging immediate action on AI security after its own testing revealed a startling improvement in Claude 3.7 Sonnet’s ability to assist with aspects of bioweapons development.
Unfortunately, passing AI legislation has proven difficult so far, largely because of disagreement among safety advocates, tech companies, and politicians concerned that guardrails could hinder innovation and limit the economic benefits AI brings.
SB-1047, a California AI regulation bill aimed at preventing AI models from causing mass harm through bioweapons, would have been the strongest generative AI regulation in the United States. However, Governor Gavin Newsom vetoed the bill in September 2024.
Now, a Republican-backed budget bill moving through Congress would prohibit U.S. states and localities from regulating AI for the next decade. Critics say stripping states of enforcement power would make it harder to hold companies accountable for harmful technology and would mainly benefit businesses looking to avoid regulatory delays.
Read more about why experts say AI laws are lagging behind the pace of technological advancement in our analysis of California’s AI bill veto.