AI Firm Warns New Models Could Pose High Cybersecurity Risks as Dual-Use Capabilities Expand

OpenAI has announced that it is preparing for artificial intelligence models to reach “high” levels of cybersecurity risk. The move underscores the deepening dual-use nature of these models, which can strengthen defenses but could equally enable sophisticated cyberattacks.
The company said it now plans on the assumption that any new model could be capable of creating zero-day exploits against hardened systems or of materially assisting complex intrusion operations with real-world consequences. The announcement follows a sharp rise in the company’s measured cyber capabilities: success rates on capture-the-flag challenges jumped from 27% with GPT-5 in August to 76% with GPT-5.1-Codex-Max in November.
While OpenAI has committed to strengthening its cybersecurity measures, it has not said when the first high-risk models will be released or which specific models may pose such threats. Within its Preparedness Framework, a “high” rating is the second most severe category, signaling risks just below the “critical” threshold at which models are considered unsafe for public use.
Fouad Matin, a researcher at OpenAI, said the growing concern centers on the models’ capacity for extended autonomous operation, a trait that could effectively enable brute-force attacks sustained far longer than a human operator could manage. Such a development would mark a tactical shift for adversaries, who could lean on AI’s persistence to wear down traditional defenses.
OpenAI has previously warned of risks associated with bioweapons, and its ChatGPT Agent release also carried a high risk rating. The dual-use problem is central here: offensive and defensive cybersecurity operations often draw on the same foundational techniques and knowledge.
Allan Liska, a threat intelligence analyst at Recorded Future, urged a measured assessment. While the security risks associated with AI models are escalating, he said, it is important not to exaggerate them: AI-driven attacks have not yet outstripped the defenses available to organizations that follow security best practices.
As a proactive measure, OpenAI is investing in hardening its models for defensive security work, with a focus on tools that improve code auditing and vulnerability patching. The company stresses the need to tilt the balance toward defenders, who are often overwhelmed and under-resourced against malicious actors.
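To make that concrete, below is a minimal sketch of model-assisted code auditing built on OpenAI’s public Python SDK. The model ID, prompt, and helper function are illustrative assumptions; OpenAI has not published how its own auditing tools work.

```python
# Minimal sketch of model-assisted code auditing via the OpenAI Python SDK.
# The model ID and prompt are illustrative assumptions, not details of
# OpenAI's internal defensive-security tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUDIT_PROMPT = (
    "You are a security code reviewer. Identify likely vulnerabilities "
    "(injection, path traversal, unsafe deserialization, etc.) in the code "
    "below and suggest a patch for each finding."
)

def audit_snippet(source_code: str, model: str = "gpt-5.1-codex-max") -> str:
    """Ask the model to flag vulnerabilities in a snippet and propose fixes."""
    response = client.chat.completions.create(
        model=model,  # hypothetical model ID; swap in any available model
        messages=[
            {"role": "system", "content": AUDIT_PROMPT},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = 'os.system("ping " + user_input)  # classic command injection'
    print(audit_snippet(snippet))
```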
OpenAI is also implementing safeguards such as access controls and monitoring systems designed to prevent misuse while supporting legitimate defensive work. The company plans to launch a trusted access program offering tiered access to advanced model capabilities for vetted users engaged in cybersecurity defense, and it has established a Frontier Risk Council to work with cybersecurity experts on assessing and mitigating the risks of emerging capabilities.
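The tiered model resembles a conventional capability gate. The sketch below is a hypothetical illustration of that pattern; the tier names, capability list, and rules are assumptions, not OpenAI’s actual design.

```python
# Hypothetical sketch of tiered capability gating, loosely modeling the kind
# of trusted access program described above. All tiers, capabilities, and
# rules here are illustrative assumptions.
from enum import IntEnum

class AccessTier(IntEnum):
    PUBLIC = 0      # general users: baseline capabilities only
    VERIFIED = 1    # identity-verified users: expanded tooling
    DEFENDER = 2    # vetted cyberdefense teams: advanced capabilities

# Each capability is mapped to the minimum tier allowed to invoke it.
CAPABILITY_FLOOR = {
    "code_review": AccessTier.PUBLIC,
    "vuln_scanning": AccessTier.VERIFIED,
    "exploit_analysis": AccessTier.DEFENDER,
}

def authorize(user_tier: AccessTier, capability: str) -> bool:
    """Allow a request only if the user's tier meets the capability's floor."""
    floor = CAPABILITY_FLOOR.get(capability)
    if floor is None:
        return False  # unknown capabilities are denied by default
    return user_tier >= floor

assert authorize(AccessTier.DEFENDER, "exploit_analysis")
assert not authorize(AccessTier.PUBLIC, "vuln_scanning")
```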
These advances cut both ways for cybersecurity. Aardvark, OpenAI’s tool for helping developers identify and fix vulnerabilities at scale, is already in private beta and has discovered novel vulnerabilities in real codebases. OpenAI’s effort to support the open-source software ecosystem reflects a commitment to improving overall cybersecurity resilience amid growing threats.