On Thursday, a Fortune reporter discovered roughly 3,000 assets sitting in an unsecured, publicly searchable data store belonging to Anthropic. Among them: a draft blog post announcing the company's most powerful model ever built. Within hours, Anthropic confirmed the project is real. They call it a "step change." The cybersecurity industry called its broker.

What Got Leaked

The breach was embarrassingly mundane. Anthropic's content management system had assets set to public by default, and someone forgot to flip the switch on a batch of draft content. No AI system was compromised. No customer data was exposed. Just garden-variety human error in a CMS configuration — the kind of mistake that would earn a junior dev a stern Slack message at most companies.
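The failure mode is simple enough to model: drafts inherit a public default unless someone explicitly flips the switch. A minimal sketch of auditing for exactly that case — the data model here is hypothetical, not Anthropic's actual CMS:

```python
# Hypothetical CMS asset records. The dangerous part is the default:
# an asset with no explicit "visibility" field falls back to "public",
# which is the misconfiguration described above.
def find_exposed_drafts(assets, default_visibility="public"):
    """Return IDs of assets that are drafts yet publicly readable."""
    exposed = []
    for asset in assets:
        visibility = asset.get("visibility", default_visibility)
        if asset.get("status") == "draft" and visibility == "public":
            exposed.append(asset["id"])
    return exposed

batch = [
    {"id": "post-101", "status": "draft"},                        # flag forgotten
    {"id": "post-102", "status": "draft", "visibility": "private"},
    {"id": "post-103", "status": "published", "visibility": "public"},
]
print(find_exposed_drafts(batch))  # → ['post-101']
```

The fix, in any real system, is the mirror image: make the default private and require an explicit action to publish, so a forgotten flag fails closed instead of open.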

But the contents of those drafts were anything but mundane.

Among the exposed materials was a draft blog post introducing Claude Mythos, which internal documents place in a new model tier called Capybara. If Anthropic's existing tier structure runs Opus → Sonnet → Haiku (in decreasing capability), Capybara sits above all of them. The draft describes it as "larger and more intelligent than our Opus models — which were, until now, our most powerful."

An Anthropic spokesperson confirmed the project to Fortune, calling it "a general purpose model with meaningful advances in reasoning, coding, and cybersecurity" and "the most capable we've built to date."

What Anthropic Claims It Can Do

According to the leaked draft — which, to be clear, was never meant for publication and represents internal marketing copy, not peer-reviewed benchmarks — Capybara "gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity" compared to Claude Opus 4.6.

Three areas stand out:

  • Coding: Presumably building on the momentum of Claude Code, which has become the tool enterprise developers can't seem to quit. OpenAI's competing Codex product has been closing the gap — growing from roughly 5% of Claude Code's usage last September to about 40% in January — but Anthropic clearly isn't content to coast.
  • Academic reasoning: The eternal benchmark war. Take with appropriate salt, but noteworthy if the jump over Opus 4.6 is as dramatic as the draft implies.
  • Cybersecurity: This is where things get interesting — and uncomfortable.

The Cybersecurity Paradox

The draft blog post contains a sentence that, given the circumstances of its discovery, reads like satire:

"In preparing to release Claude Capybara, we want to act with extra caution and understand the risks it poses — even beyond what we learn in our own testing. In particular, we want to understand the model's potential near-term risks in the realm of cybersecurity."

The model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders," the draft continues.

This is a company warning about unprecedented cybersecurity risk, in a document discovered because of a cybersecurity lapse, in a publicly accessible data store, found by the first journalist who went looking. The headline writes itself.

Important context: These cybersecurity claims come from a draft blog post — internal marketing material that was never published. Anthropic has not confirmed these specific risk assessments publicly. Draft copy tends to run hotter than what survives legal review.

That said, Anthropic's planned approach sounds measured. The draft describes a gradual rollout, starting with early access for organizations so they can "improve the robustness of their codebases against the impending wave of AI-driven exploits." The model is reportedly training-complete and currently being tested with select customers.

Wall Street Didn't Wait for Nuance

Markets don't parse draft caveats. When the story broke Friday morning, cybersecurity stocks cratered across the board:

Company               Ticker   Friday drop
------------------------------------------
Tenable               TENB     ~11%
SentinelOne           S        ~8%
CrowdStrike           CRWD     ~7%
Palo Alto Networks    PANW     ~6%
Zscaler               ZS       ~6%
Okta                  OKTA     ~5%
Fortinet              FTNT     ~5%
Cloudflare            NET      ~4%

The iShares Cybersecurity ETF (IHAK) dropped roughly 3%. Bitcoin slipped from $70,000 to about $66,000 in the broader sell-off. This isn't an isolated overreaction — a similar dip hit cyber stocks last month when Anthropic announced code-scanning security features in Claude. The market has apparently decided that better AI offense is bad news for the companies selling AI defense.

Whether that logic holds up is another question entirely. Better attack tools generally mean more demand for defense, not less. But markets trade on fear before they trade on logic.

The Bigger Picture: Anthropic's Streak

This leak doesn't exist in a vacuum. Anthropic has been on a tear:

  • Claude Code has become the default coding companion for a growing share of enterprise developers, to the point where OpenAI reportedly restructured internal teams in response.
  • Claude Cowork, launched in January as a "research preview," positions Claude as an agent for non-technical knowledge workers — essentially Claude Code for everyone.
  • Computer Use, announced just three days before this leak, lets Claude interact directly with desktop environments to complete tasks autonomously.
  • The company's $61.5 billion valuation reflects investor confidence that Anthropic is no longer just playing catch-up to OpenAI.

OpenAI clearly feels the pressure. Just days before the Mythos leak, CNBC reported that OpenAI hired Peter Steinberger, creator of OpenClaw, to "drive the next generation of personal agents." When your competitor starts recruiting the creators of open-source tools your users prefer, you're playing defense.

What This Isn't

Before the hype machine runs away with this: a frontier AI company leaking that it's building something more powerful than its current best model is not, in itself, surprising. Every company in this space is always building the next thing that benchmarks higher than the last thing.

OpenAI's GPT-5 was positioned as a transformative leap and landed with more of a thud than a boom. Google's Gemini models keep setting records on paper while users argue about whether the outputs actually feel better. The gap between benchmark performance and real-world utility remains wide enough to drive a truck through.

The interesting questions about Mythos aren't about whether it scores higher on academic tests (of course it will) but about whether:

  1. The cybersecurity capabilities translate to actual offensive tooling that outpaces current defenses, or if this is responsible-disclosure theater
  2. The "Capybara" tier introduces meaningful architectural changes or is primarily a scaling play
  3. Anthropic can maintain its lead in agentic capabilities (Code, Cowork, Computer Use) while also pushing the frontier on raw model intelligence
  4. The "gradual rollout to defenders first" approach is a genuine safety measure or a go-to-market strategy dressed up as caution

The Irony, One More Time

There's something philosophically rich about an AI safety company — Anthropic's entire brand is built on being the responsible one — getting caught with its CMS wide open while drafting a blog post about unprecedented cybersecurity risk. It doesn't invalidate their work or their model's capabilities. But it's a reminder that the most dangerous vulnerabilities are usually the boring ones: misconfigured permissions, forgotten defaults, the assumption that nobody's looking.

No model, no matter how capable, fixes that.

Bottom Line

Claude Mythos is real, confirmed by Anthropic, and sits in a new tier above Opus. The cybersecurity claims are from an unreviewed draft and should be treated accordingly. The market reaction was outsized but reveals genuine anxiety about AI-driven offense outpacing defense. The real test comes at launch — and if the model is as capable as the draft claims, the conversation about AI and cybersecurity is about to get a lot more concrete.

Want to Run AI Agents That Actually Work?

If you're building with Claude, OpenAI, or open-source models, our OpenClaw Field Guide covers the full stack — from local setup to production deployment with real security practices.

Get the Field Guide — $10 →