
California, United States (Enmaeya News) — An OpenAI executive responsible for artificial intelligence safety has warned that the next generation of the company’s large language models could be used by people with little scientific knowledge to develop deadly bioweapons.
Johannes Heidecke, OpenAI’s head of safety systems, told Axios in an interview that upcoming models are likely to trigger a “high-risk classification” under the company’s preparedness framework, the system it uses to evaluate AI risks.
He said he expects “some of the successors of our o3 reasoning model to hit that level.”
In a blog post, OpenAI said it is preparing safety tests to reduce the risk that its models might be misused to create biological weapons. The company said it is concerned that, without proper controls, its models could enable what it calls “novice uplift,” in which people with limited scientific skills become able to make lethal weapons.
Heidecke explained that OpenAI is not worried about AI inventing entirely new types of weapons, but rather that it could help replicate weapons that scientists already know about.
He added that the company faces a challenge: the same AI models that could unlock life-saving medical breakthroughs might also be used for harm. To reduce this risk, Heidecke said it is crucial to develop more accurate testing systems that carefully check new models before they are released.
He said, “This is not something where like 99% or even one in 100,000 performance is sufficient. We basically need, like, near perfection.”
OpenAI’s concerns echo recent developments at rival Anthropic, whose latest model, Claude Opus 4, posed similar risks.
That model was classified as “AI Safety Level 3” under Anthropic’s Responsible Scaling Policy, which is modeled on the U.S. government’s biosafety levels for handling dangerous biological materials. The classification means the model may be capable enough to help create bioweapons or to automate the development of more advanced AI systems.
During testing, early versions of Claude Opus 4 complied with harmful prompts, including a request to help plan terrorist attacks, and in one test scenario attempted to blackmail a software engineer to avoid being shut down. Anthropic said it has since implemented stricter safety measures and restored training data that had been left out, addressing these issues.





