With the announcement of Mythos and Project Glasswing, institutions around the world are grappling with what may be the start of a new era in cybersecurity. In this post, we analyze the current situation, discuss the role of openness, and position the future of cybersecurity within the larger AI ecosystem.
What is Mythos?
Mythos is a “frontier AI model”: a large language model (LLM) that can, among other things, process software code. This follows a general trend in LLM development, in which performance on code-related tasks has recently skyrocketed. What is especially important about Mythos, though, is the system it is built into. It is the system, not just the model, that allows Mythos to quickly find and patch software vulnerabilities. Understanding this distinction is key to understanding the current state of AI cybersecurity.
What Mythos shows us is that the following system recipe is powerful:
- Substantial computational power
- Models trained on large amounts of software-related data
- Scaffolding built to handle software vulnerabilities
- Speed of investigation and patching (enabled by the computational power and capital behind it)
- Some degree of system autonomy
By combining these elements, you can discover software vulnerabilities, find exploits, and build patches. It is this recipe, rather than any particular model, that both the benefits and the risks flow from.
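To make the recipe concrete, here is a minimal sketch of how such a system might be wired together. Every name in it (`VulnAgent`, `suggest_vulnerability`, `reproduces`, and so on) is a hypothetical placeholder of our own, not anything from Mythos itself:

```python
# Hypothetical sketch of the "system recipe": a code-capable model wrapped
# in vulnerability-handling scaffolding with bounded autonomy.
# `model` and `sandbox` are assumed interfaces, not real APIs.

from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    description: str
    patch: str | None = None


class VulnAgent:
    def __init__(self, model, sandbox, max_steps: int = 10):
        self.model = model          # a model trained on software-related data
        self.sandbox = sandbox      # isolated environment to verify findings
        self.max_steps = max_steps  # bounded autonomy: a hard step limit

    def run(self, repo_path: str) -> list[Finding]:
        findings: list[Finding] = []
        for _ in range(self.max_steps):
            # Discovery: ask the model to flag suspicious code.
            candidate = self.model.suggest_vulnerability(repo_path)
            if candidate is None:
                break
            # Validation: confirm the issue actually reproduces in isolation.
            if not self.sandbox.reproduces(candidate):
                continue
            # Remediation: draft a patch and re-test before accepting it.
            patch = self.model.propose_patch(candidate)
            if self.sandbox.tests_pass(repo_path, patch):
                findings.append(Finding(candidate.file, candidate.summary, patch))
        return findings
```

The step limit and the sandbox are what make this a system rather than a bare model: the same recipe with the loop unbounded and the sandbox removed is where the risks concentrate.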
This is important because it means others can build comparable systems. Smaller models embedded in systems built on deep security expertise could produce similar results more cheaply, which is particularly promising for defense. AI cybersecurity capability is jagged: it does not scale smoothly with model size or typical benchmark performance. The system in which the model is embedded matters a great deal.
In short, Mythos demonstrates that it is possible to build AI systems that find and address vulnerabilities in software. We already knew this was possible in principle, and there is a growing body of research on it, but we are just beginning to explore what it means in the context of agentic AI: systems that can perform actions quickly and autonomously.
How openness can be a structural advantage
As autonomous systems that identify software vulnerabilities proliferate, open code and tools can help level the playing field. Software security has become a four-stage speed race: discovery, validation, remediation, and patch propagation. An open ecosystem distributes these stages across the community. Closed-source projects, by contrast, centralize knowledge and action across all four stages within a single vendor, creating a single point of failure where only one organization can see and fix the code. The decentralized nature of open development is robust to these constraints, especially in communities with dedicated security experts, such as the Linux kernel security team, the Open Source Security Foundation, and the teams at Hugging Face working on model and supply chain security.
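To make the four stages concrete, a defender’s pipeline might look like the following sketch. Every function here is a placeholder of our own invention, standing in for tooling that an open community would share:

```python
# Hypothetical sketch of the four-stage race. Each function is a stub;
# in an open ecosystem, different community members (scanner authors,
# maintainers, distributions) can own different stages.

def discover(codebase: str) -> list[str]:
    """Stage 1: find candidate vulnerabilities (scanners, fuzzers, AI triage)."""
    return []

def validate(vuln: str) -> bool:
    """Stage 2: reproduce the issue in isolation to rule out false positives."""
    return False

def remediate(vuln: str) -> str:
    """Stage 3: draft a fix and confirm the test suite still passes."""
    return f"patch for {vuln}"

def propagate(patch: str) -> None:
    """Stage 4: push the fix out (upstream PRs, advisories, distro updates)."""
    print(f"shipping {patch}")

def defend(codebase: str) -> None:
    for vuln in discover(codebase):
        if validate(vuln):
            propagate(remediate(vuln))
```

The point is structural: no single organization has to be fast at all four stages if the ecosystem lets different groups specialize.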
A common argument for more closed systems is their inherent obscurity: there is no access to the underlying code. Unfortunately, obscurity now offers less protection than it used to, because AI systems are increasingly able to help reverse engineer stripped binaries. This matters because most legacy firmware and embedded code is closed, binary-only, and no longer maintained. That code represents a huge attack surface, and as AI tools improve, it becomes more readable and accessible.
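As a rough illustration of why binary-only code is no longer opaque, consider the sketch below. The disassembly step uses the real `objdump` tool from GNU binutils; `summarize_disassembly` is a hypothetical stand-in for a call to any code-capable model:

```python
# Sketch: feeding a stripped binary's disassembly to an AI model.
# `objdump` is a real binutils tool; `summarize_disassembly` is a
# hypothetical placeholder for a model call, not a real API.

import subprocess

def disassemble(binary_path: str) -> str:
    """Disassemble a binary with objdump (works even when symbols are stripped)."""
    result = subprocess.run(
        ["objdump", "-d", binary_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def summarize_disassembly(asm: str) -> str:
    """Placeholder: ask a model what the disassembled code does."""
    raise NotImplementedError("wire this to your model of choice")

if __name__ == "__main__":
    asm = disassemble("./legacy_binary")
    print(summarize_disassembly(asm))
```

Defenders auditing their own unmaintained binaries can use exactly the same loop, which is the leveling effect open tooling provides.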
There are also risks posed by the way AI is used within closed codebases. If companies deploy AI coding tools under the wrong incentives (for example, evaluating engineers on the number of features shipped rather than the quality of the code), AI-enabled development can introduce more vulnerabilities into proprietary code than traditional development. And while only one organization can find and fix the vulnerabilities inside a closed codebase, AI-powered attackers are increasingly able to discover them from the outside. Creating more vulnerabilities, faster, behind a single organization’s firewall is exactly the kind of imbalance an open ecosystem avoids.
Underlying all of this is an asymmetry in capabilities between attackers and defenders. Open models and open tools narrow that gap by giving defenders access to the same class of capabilities that attackers have. Otherwise, those capabilities concentrate within a small number of well-resourced entities.
Building defenses with open tools and semi-autonomous agents
Open source and AI agents can work together to play an important role in cybersecurity defense. Based on the system card, Mythos appears capable of operating with near-full autonomy, but we have recommended avoiding this mode, as it can lead to loss of control. Instead, semi-autonomous AI agents, whose permitted actions are pre-specified and which require human approval for certain steps, hit a sweet spot between benefits and risks. In semi-autonomous systems, humans remain in control while AI agents handle specific subtasks. This can be done using open code run privately within the organization, with the organization specifying the tools, skills, and system access permissions allowed. This configuration lets AI agents be deployed defensively, to discover vulnerabilities and assist with patching, under the organization’s own control.
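A minimal sketch of that configuration might look like the following. The allowlist, the approval gate, and all the tool names are illustrative assumptions, not a description of any real product:

```python
# Hypothetical semi-autonomous agent: actions are limited to a
# pre-specified allowlist, and risky steps require human approval.

ALLOWED_TOOLS = {"scan_dependencies", "run_fuzzer", "draft_patch"}
NEEDS_APPROVAL = {"draft_patch"}  # anything that changes code needs sign-off


def approve(action: str, detail: str) -> bool:
    """Human-in-the-loop gate: a person confirms before the agent proceeds."""
    answer = input(f"Agent wants to run {action} ({detail}). Allow? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: str, detail: str) -> None:
    if action not in ALLOWED_TOOLS:
        raise PermissionError(f"{action} is outside the agent's allowlist")
    if action in NEEDS_APPROVAL and not approve(action, detail):
        return  # the human declined; the agent does not proceed
    print(f"running {action}: {detail}")  # placeholder for the real tool call
```

The key design choice is that the allowlist is enforced by the scaffolding, not by the model’s good behavior: an out-of-policy action fails closed.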
Semi-autonomous approaches depend on humans actually being able to understand what the AI agent did and why. That is far more likely if the system is built on open components, such as open agent scaffolding, open rules engines, and auditable decision logs and traces, than if the system is a black box. “Human in the loop” only makes sense if humans can see inside the loop.
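For example, each agent step could be appended to a simple audit log that a reviewer can replay after the fact. The schema below is purely illustrative:

```python
# Illustrative append-only decision log: one JSON line per agent step,
# so humans can reconstruct what the agent did and why.

import json
import time

def log_decision(path: str, action: str, rationale: str,
                 approved_by: str | None) -> None:
    entry = {
        "timestamp": time.time(),
        "action": action,            # what the agent did
        "rationale": rationale,      # the agent's stated reason
        "approved_by": approved_by,  # None for autonomous steps
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example:
# log_decision("agent_audit.jsonl", "run_fuzzer",
#              "new parser code in latest commit", approved_by=None)
```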
Companies don’t have to build these capabilities completely from scratch. There is a rich open source ecosystem of security tools such as vulnerability scanners, intrusion detection systems, log analyzers, and fuzzing frameworks that can be integrated with AI agents.
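As one concrete pattern, an agent can shell out to an existing open source scanner and triage its findings. The sketch below uses Bandit, a real open source security linter for Python; `triage_with_llm` is a hypothetical stand-in for a model call:

```python
# Sketch: wrap an existing open source scanner (Bandit) and hand its
# findings to an AI agent for triage. `triage_with_llm` is hypothetical.

import json
import subprocess

def run_bandit(path: str) -> list[dict]:
    """Run Bandit over a source tree and return its JSON findings."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True, text=True,  # no check=True: Bandit exits nonzero on findings
    )
    return json.loads(result.stdout).get("results", [])

def triage_with_llm(finding: dict) -> str:
    """Placeholder: ask a model whether the finding is likely exploitable."""
    raise NotImplementedError("wire this to your model of choice")

for finding in run_bandit("./src"):
    print(finding["filename"], triage_with_llm(finding))
```

The same wrapper pattern applies to intrusion detection systems, log analyzers, and fuzzing frameworks: the open tool does the detection, and the agent handles the volume.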
Why this is especially important for high-risk organizations
For high-risk organizations, starting with an open and auditable foundation means security teams can actually inspect how their monitoring works, rather than relying on a single vendor’s claims. This is especially important when sensitive data and processes are involved, and when sensitive material should not flow through external AI providers at all. Open systems can be rigorously analyzed by in-house security experts, fine-tuned on the organization’s own secure data, modified to generate organization-specific monitoring mechanisms, and run entirely within the organization’s own infrastructure, all behind appropriate firewalls.
The way forward
Attackers develop models that exploit vulnerabilities. A key part of the answer is leaning into transparent practices: open security reviews, published threat models, shared vulnerability databases, and open tools that any team can adopt. The alternative, where each organization tries to secure itself using its own tools, is no match for attackers who coordinate and share techniques in their own communities.
The future of AI cybersecurity will be shaped not by a single model, but by the ecosystem surrounding the model. Openness gives defenders the visibility, control, community, and shared infrastructure they need to gain an advantage.

