Every year, security researchers spend their careers hunting software vulnerabilities. According to CrowdStrike’s 2026 Global Threat Report, AI-assisted cyberattacks increased 89% year-over-year, and that was before Claude Mythos Preview existed.
Most people learning about Mythos assume it is just another powerful AI model Anthropic has not gotten around to releasing. That framing misses the real story. Anthropic privately warned top US government officials that Mythos makes large-scale cyberattacks significantly more likely in 2026. The company’s own engineers described it as something that “should feel terrifying.” One of its creators said that on the record.
Claude Mythos Preview is Anthropic’s most powerful model ever, and Anthropic has deliberately chosen not to release it publicly. In just weeks of testing, the model found thousands of critical zero-day vulnerabilities, including a 27-year-old flaw in OpenBSD, and built working exploits autonomously, without any human input.
Here is everything you need to know.
5 things to know right now:
- Mythos successfully built working exploits on its first attempt 1% of the time in testing.
- It found a 27-year-old vulnerability in OpenBSD that millions of machines still run.
- Anthropic’s previous best model, Opus 4.6, found around 500 zero-days; Mythos found tens of thousands.
- Access is limited to 40 invited organizations, including Amazon, Apple, Microsoft, and Google.
- Anthropic has committed $100 million in usage credits for Project Glasswing partners.
Quick Facts: Claude Mythos at a Glance
| Attribute | Detail |
| --- | --- |
| What it is | Anthropic’s most powerful frontier AI model to date |
| Who built it | Anthropic |
| Why it is restricted | Poses unprecedented cybersecurity risks if misused |
| Key capability | Finds critical zero-day vulnerabilities faster than any human |
| Availability status | Invitation-only preview – 40 organizations total, no public access |
| Initiative | Project Glasswing – defensive cybersecurity deployment |
| Partners | Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike, Nvidia, and others |
What is Claude Mythos – and What Makes it Different From Other AI Models?
Claude Mythos Preview is Anthropic’s newest and most capable AI model. It sits above the existing Opus, Sonnet, and Haiku tiers in a new fourth tier called “Capybara.” It was not built specifically for cybersecurity, but its advanced coding and reasoning skills make it exceptionally powerful at finding and exploiting software vulnerabilities. As of April 2026, access is limited to around 40 invited organizations under a controlled program called Project Glasswing.
Claude Mythos is what happens when a general-purpose AI model becomes so good at understanding code that it accidentally becomes one of the most capable security tools ever built.
Anthropic describes Claude Mythos’s capabilities as a genuine step change, not an incremental upgrade over previous models. Compared to Claude Opus 4.6, the current best publicly available model, Mythos scores dramatically higher on tests for software coding, academic reasoning, and cybersecurity. The company calls it the most capable model it has ever developed.
What makes it different from every AI model before it is scale. Over just a few weeks of testing, Claude Mythos Preview found thousands of zero-day vulnerabilities (previously unknown flaws) across every major operating system and every major web browser.
Many of those bugs had survived decades of human review and millions of automated security scans. One discovered flaw in OpenBSD, a highly secure operating system used to protect firewalls and critical infrastructure, had existed undetected for 27 years.
Why is Claude Mythos So Powerful?
Claude Mythos was not trained to be a hacking tool. That is actually part of what makes this story so interesting.
The model was developed as a general-purpose reasoning and coding system. Its power in cybersecurity comes entirely from how well it understands code, and what it can do once it reads it.
According to Anthropic’s own Project Glasswing documentation, frontier AI models have now crossed a threshold where they can compete with the most skilled human security researchers at finding and exploiting vulnerabilities.
Three capabilities stand out:
- Autonomous vulnerability discovery: Claude Mythos’s cybersecurity capabilities allow it to scan codebases and identify flaws that have eluded human experts for years. These are not simple surface-level bugs. The exploits it develops are, in Anthropic’s own words, “increasingly sophisticated.”
- Exploit generation: The model does not just find problems; it can reason through how those problems could be exploited. This is what makes AI cybersecurity risks from Mythos so serious. The same feature that helps a defender patch a vulnerability helps an attacker build a working exploit.
- Agentic autonomous operation: Claude Mythos capabilities extend into agentic territory, meaning it can take multi-step actions independently, not just answer questions. CrowdStrike’s 2026 Global Threat Report found an 89% year-over-year increase in attacks by adversaries using AI. Mythos-level autonomous AI agents operating offensively would represent a serious acceleration of that trend.
To put the scale into perspective: in a matter of weeks, this model found more software bugs than many human programmers encounter across an entire career. Security researchers describe this as a capability threshold that fundamentally changes how urgently critical infrastructure needs to be protected.
Why is Claude Mythos Not Available to the Public?
Responsible AI deployment is Anthropic’s founding philosophy. But even by that standard, the decision to withhold Claude Mythos Preview, Anthropic’s flagship model, from general release is unusually cautious, and the reasoning is unusually direct.
- The cybersecurity risk is asymmetric: Defenders need time, expertise, and resources to patch vulnerabilities. Attackers only need to find one working exploit. A model that can discover thousands of critical flaws in days gives attackers a catastrophic head start if it goes public before those flaws are patched.
- State-sponsored misuse has already happened: This is not theoretical. Anthropic documented the first AI-orchestrated cyber espionage campaign in November 2025, a Chinese state-sponsored group that used Claude Code to infiltrate roughly 30 organizations including tech companies, financial institutions, and government agencies before Anthropic detected the activity. Mythos is significantly more powerful than the model used in that attack.
- The model found critical flaws across systems used by billions of people: Every major operating system. Every major web browser. Bugs that have survived 20 to 30 years of professional security review. If those vulnerabilities became known to attackers before they could be patched, the potential damage would be enormous.
- Cost and infrastructure limits: Claude Mythos benchmarks place it at a level of compute and cost that makes broad deployment impractical for now. Anthropic has committed up to $100 million in usage credits for Project Glasswing partners, which gives a sense of how expensive operating this model at scale actually is.
Boris Cherny, one of Claude’s creators, put it on X: “Mythos is very powerful and should feel terrifying. I am proud of our approach to responsibly preview it with cyber defenders, rather than generally releasing it into the wild.”
What is Project Glasswing and Who Has Access to Claude Mythos Preview?
The name comes from the glasswing butterfly, a creature with wings so transparent you can see straight through them. Anthropic chose it as a metaphor for software vulnerabilities: they are everywhere, they are dangerous, and most of the time they are nearly invisible.
Project Glasswing is Anthropic’s controlled-access initiative for deploying Claude Mythos Preview in the real world. Rather than a public launch, Anthropic has given access to 12 founding partner organizations, with around 40 organizations total included in the preview.
Founding partners include:
- Amazon Web Services
- Apple
- Broadcom
- Cisco
- CrowdStrike
- Google Cloud
- JPMorgan Chase
- Linux Foundation
- Microsoft
- Nvidia
- Palo Alto Networks
Every partner builds or maintains critical software infrastructure. The goal is straightforward: find the vulnerabilities before attackers do, patch them, and share what is learned so the broader tech industry can benefit.
AWS described testing Claude Mythos in its own security operations across critical codebases. Microsoft said the capability “augments our security and development solutions so we can better protect customers.” Cisco called it work that is “too important and too urgent to do alone.”
Anthropic has also briefed senior US government officials, including CISA (the Cybersecurity and Infrastructure Security Agency) and the Center for AI Standards and Innovation, on the model’s full offensive and defensive capabilities. The company has made itself available to support government testing and evaluation as well.
Claude Mythos preview access is currently invitation-only with no self-serve sign-up. According to Anthropic’s API documentation, it is offered separately from its standard model lineup specifically for defensive cybersecurity workflows.
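For partners who do have access, the preview reportedly runs through Anthropic’s standard Messages API, gated behind a separate model identifier. A minimal sketch of what a partner-side code-audit request body might look like, assuming a hypothetical model ID of `claude-mythos-preview` (not a published identifier); the request shape follows Anthropic’s public `/v1/messages` endpoint:

```python
import json

# Hypothetical model identifier -- Anthropic has not published one for Mythos.
MODEL_ID = "claude-mythos-preview"

def build_audit_request(source_snippet: str, max_tokens: int = 1024) -> dict:
    """Build a Messages API request body asking the model to review code
    for vulnerabilities. Only the model ID above is an assumption; the
    field names match Anthropic's documented /v1/messages schema."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Review the following C function for potential "
                    "vulnerabilities and explain any you find:\n\n"
                    + source_snippet
                ),
            }
        ],
    }

# Example: audit a classic buffer-overflow pattern.
body = build_audit_request('char buf[8]; strcpy(buf, user_input);')
print(json.dumps(body, indent=2))
```

In practice a partner would POST this body to the API with their gated credentials; the point of the sketch is simply that, per the documentation, defensive workflows use the same request format as any other Claude model.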
How Does Claude Mythos Change the Future of AI and Cybersecurity?
The AI safety concerns around Mythos are real, but the picture is not entirely dark.
Anthropic’s own Project Glasswing paper makes this point carefully: the same capabilities that make restricted AI models dangerous in the wrong hands make them genuinely valuable for defenders. AI alignment in this context means ensuring the people who access these tools are actively working to protect systems, not exploit them.
The deeper shift is what Mythos represents for the trajectory of AI development. For years, AI progress in cybersecurity was incremental. Models could help with coding, flag obvious patterns, and assist with documentation. Mythos is different: it has crossed the threshold where it is competitive with the best human security researchers on their best days.
That changes the calculus for governments, companies, and individuals. Software that has felt secure for decades is suddenly exposed. The 27-year-old OpenBSD flaw is not an isolated case; it is representative of how many vulnerabilities exist across systems that billions of people rely on every day.
The frontier AI model race is no longer just about which company builds the smartest assistant. It is about which side of the security divide gets access to this capability first.
Is Claude Mythos Dangerous Or is It Being Overhyped?
The genuine answer is both, depending on who has access to it.
- The case for genuine concern: A model that autonomously discovers critical zero-day vulnerabilities, generates working exploits, and operates as an autonomous AI agent is a serious tool. AI cybersecurity risks at this level are not hypothetical. State-sponsored groups were already weaponizing older, weaker Claude models before Mythos existed. The potential misuse case is not speculative; it is documented.
- The case for measured optimism: Anthropic is not hiding Mythos. It is deploying it carefully, with the explicit goal of patching vulnerabilities before they can be exploited. The company is briefing governments, partnering with defenders, and committing real money to responsible AI deployment. Newton Cheng, Anthropic’s Frontier Red Team cyber lead, said the goal is to get defenders familiar with these capabilities before they become widely available, so that when broader access eventually comes, the infrastructure to manage it exists.
What is worth questioning is whether the current model of restricted AI models actually holds. Capability thresholds in AI have a history of spreading faster than organizations expect. Competitors are not standing still. The question of who accesses Mythos-level capability next, and under what conditions, is not settled.
The Claude Mythos system card and full technical documentation have not yet been released publicly, which has frustrated some researchers who argue that transparency about AI alignment risks should not be contingent on maintaining a competitive advantage. That debate will likely intensify as the model’s capabilities become more widely understood.
Frequently Asked Questions
What is Claude Mythos used for? Claude Mythos Preview is being used for defensive cybersecurity work: specifically, finding and fixing critical software vulnerabilities in widely used operating systems, browsers, and critical infrastructure. It is part of Anthropic’s Project Glasswing initiative.
Why is Claude Mythos not public? Anthropic has restricted access because the model’s coding and reasoning capabilities make it exceptionally powerful at finding and exploiting software vulnerabilities. If accessed by bad actors, including state-sponsored hacking groups, it could enable cyberattacks too fast and too sophisticated for defenders to block in time.
Is Claude Mythos more powerful than GPT models? Anthropic describes Claude Mythos as the most capable AI model it has ever built, sitting above its own Opus tier in a new fourth category. Direct public benchmarks against GPT models are not yet available, but Claude Mythos benchmarks from internal testing show dramatic improvements over Anthropic’s previous best models in coding, reasoning, and cybersecurity.
Can Claude Mythos be dangerous? Yes, Anthropic says so explicitly. The model identified thousands of previously unknown critical vulnerabilities in just weeks. Its own creator described it as something that “should feel terrifying.” The danger is not the model itself but what happens if it operates without the safety constraints Anthropic has built around it.
Who can access Claude Mythos? Currently, access is limited to around 40 invited organizations through Project Glasswing. These include Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike, Nvidia, the Linux Foundation, JPMorgan Chase, and others. There is no public access or self-serve sign-up. The Claude Mythos preview access is invitation-only.
What is the Claude Mythos release date for general public? Anthropic has not announced a general release date. The company’s stated goal is to learn from the Project Glasswing preview before deciding how to deploy Mythos-class models at scale. Given the documented risks, a broad public launch does not appear imminent.
What is Claude Mythos’s tier name? The model sits in a fourth tier called “Capybara,” above the existing Haiku, Sonnet, and Opus tiers that make up Anthropic’s current public lineup.
Key Takeaways
- Claude Mythos Preview is Anthropic’s most powerful model to date, not publicly available, and intentionally so.
- It found thousands of critical zero-day vulnerabilities in just weeks, including a 27-year-old flaw in OpenBSD.
- Access is restricted to 40 organizations through Project Glasswing, focused entirely on defensive cybersecurity.
- Partners include Amazon, Apple, Microsoft, Google, Cisco, CrowdStrike, and Nvidia.
- Anthropic has briefed US government officials and is engaging with CISA on the model’s offensive and defensive capabilities.
- The core tension, between making this capability available to defenders and keeping it away from attackers, is not fully resolved. How that question gets answered will shape AI security for years.