There's a new term floating around right now: Mythos. But the most important thing about Mythos is not the model itself. It is the shape of the door around it.

Anthropic's Claude Mythos Preview is not a normal launch. It is a gated research preview for defensive cybersecurity work through Project Glasswing, and Anthropic says there is no self-serve signup. The first organizations publicly named as part of that effort are AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. That list matters because it tells you, before you ever get to the technical details, what kind of future this might become. It is not a picture of open access. It is a picture of concentrated access. (Anthropic)

At first, that feels not only understandable but responsible. Anthropic's own writeup says Mythos Preview was able to identify and exploit zero-day vulnerabilities, reverse-engineer exploits for closed-source software, and turn known vulnerabilities into working exploits. The company also says that more than 99% of the vulnerabilities it found have not yet been patched, which is why many details are being withheld. In one example, Anthropic says the model autonomously identified and exploited a remote code execution vulnerability in FreeBSD that allowed complete control of a server. (Anthropic)

So yes, of course access is limited. Of course nobody wants to spray a capability like that across the open internet and call it progress. If a system can surface weaknesses faster than the ecosystem can patch them, then caution is not fearmongering. It is basic survival. That is what makes this moment so uncomfortable. The strongest argument for gating access is also the one most likely to make gating feel legitimate for a very long time.

And that is where the story turns.

Because once the most powerful systems are dangerous enough that they cannot be widely distributed, the people who control them stop merely having better tools. They start occupying a different layer of power. They get to see the hidden weaknesses first. They get to harden systems before others even know those systems are fragile. They get to move from reaction to anticipation. Everyone else is still operating in the visible world, while the insiders are increasingly operating in the invisible one, where the structure of reality is revealed a little earlier and manipulated a little faster.

That is not a normal product advantage. It is not the usual gap between companies that adopted the newest software and companies that were slow to modernize. It is closer to a governing advantage. The organizations on the inside would not just be more efficient. They would be more legible to themselves than the rest of the world is legible to itself. They would know where systems break, where pressure accumulates, where failure can be induced, and where it can be prevented. In a world built on software, that starts to look less like capability and more like sovereignty.

And once that kind of sovereignty exists, human nature does the rest. The institutions that have it will justify it as stewardship. The institutions that want it will call it necessary for competitiveness. The institutions that cannot get it will be told to wait, partner, integrate, align, comply, and trust the process. No one will have to say the quiet part out loud. Nobody will need to announce that a new class structure has formed. It will simply emerge through a series of perfectly reasonable decisions, each one made in the name of safety, resilience, and responsible deployment.

That is how hard power usually arrives in modern systems. Not as spectacle. As policy. As gated access. As "temporary restrictions." As "qualified participants only." As invisible permissions that slowly become permanent. The language will remain calm long after the consequences stop being small.

If this trajectory hardens, the gap between the haves and have-nots does not just widen. It becomes unclosable. The insiders get safer, faster, and more capable precisely because they have access to the systems that help them stay safer, faster, and more capable. They use superior intelligence to defend their lead, and that defended lead gives them more time, more capital, more trust, more integration, and more leverage to secure the next layer of intelligence. The loop closes. The moat starts thinking.

And the people outside that loop are not merely behind. They are trapped in a slower reality. They are still playing by the visible rules while the stronger players are already acting on signals that have not surfaced yet. From the outside, it will look like elite execution. Better teams. Better leadership. Better infrastructure. Better judgment. But underneath that story will be a harsher one: some actors are no longer competing with better effort. They are competing with better access to intelligence itself.

That is the dystopian version of this future, and it is not hard to imagine because it does not require cartoon villains. It only requires asymmetry plus time. Give the most powerful institutions privileged access to the most powerful systems, make that access defensible on security grounds, then let compounding do its work. Eventually the people on the inside do not just win more often. They become the category that defines what winning is. Their continued rule no longer looks like domination. It looks like stability. Their supremacy no longer needs to be imposed. It can be narrated as competence.

That is the point where social mobility breaks. Not because ambition disappears, but because the ladder stops reaching the platform. If the most consequential intelligence remains gated behind the largest firms, the deepest infrastructure, and the tightest trust networks, then smaller players do not just lack resources. They lack entry. They are no longer trying to catch up in the same race. They are trying to qualify for admission to a race whose rules were written by the people already far ahead.

And once a society accepts that arrangement, it starts reorganizing itself around proximity to the gatekeepers. Companies trade independence for access. Governments grow dependent on a shrinking set of firms that can secure the systems everyone depends on. Talent flows toward the institutions with the deepest permissions. Innovation narrows because the frontier itself is no longer broadly reachable. The people with power gain the tools to keep power, and the people without it are told that this is just what responsibility looks like at scale.

That future is dramatic, but not absurd. In fact, what makes it dangerous is how respectable it would sound while it was happening.

To be fair, Anthropic is explicitly framing Project Glasswing as a broader defensive effort. The company says it plans to share what it learns with the wider industry, has extended access beyond the initial launch partners to more than 40 additional organizations that build or maintain critical software infrastructure, and has committed up to $100 million in usage credits plus $4 million in donations to open-source security organizations. Those choices matter because they point in a healthier direction: capability concentrated early, but benefits intentionally diffused outward.

That is the part we should pay attention to now, before the pattern hardens. If we want to avoid a world where the next layer of intelligence simply cements the supremacy of those already closest to power, then we should be asking harder questions early. Not just whether these systems are safe, but safe for whom. Not just who gets access first, but what obligations come with getting it first. Not just whether a narrow rollout is justified, but what concrete mechanisms ensure that justified caution does not calcify into permanent hierarchy.

As humans, we should be thinking about diffusion as seriously as we think about defense. We should want transparency around access criteria, real pathways for broader defensive benefit, meaningful support for open infrastructure, and norms that prevent "responsible deployment" from quietly becoming a euphemism for private control. We should be wary of any future in which the systems capable of protecting the world are controlled so narrowly that they end up protecting advantage more than they protect people.

Because that is how the darkest version of this story unfolds. Not when powerful tools exist, but when powerful tools become the inheritance of the already powerful. Not when the gate appears, but when everyone learns to call the gate inevitable.

And if that happens, we are not just building better AI.

We are building a civilization where some people get to operate with a deeper layer of intelligence, and everyone else is left living downstream of their decisions.