And most of your tools can only see two dimensions.

There's an adage we use at IBM that's aged like fine wine, and not in a good way: malicious actors don't break in — they log in. But it assumes a surface. What we're dealing with now isn't a surface — it's a manifold. And most detection tooling is still doing flatland geometry.

The attack surface has undergone a dimensional explosion. Not a linear expansion — a topological one. The perimeter dissolved years ago, but what's replaced it isn't just "cloud" or "remote work." It's a fundamentally different computational substrate: distributed, heterogeneous, increasingly autonomous, and riddled with trust assumptions getting passed off at supply chain junctions that were never designed to hold under adversarial pressure.

Consider the anatomy of a modern intrusion. A WAN-exposed endpoint; a remote VM accepting user-controllable input that gets interpolated into an environment variable. Looks completely harmless! And therefore passes validation. But somewhere downstream, that string gets folded into a dynamic query. By the time it hits the database layer, the semantic context has shifted entirely. The payload didn't announce itself, it became something else in transit. And it did so while wearing valid credentials, traversing authenticated sessions, and generating telemetry that looks, to a SIEM trained on historical baselines, like normal operational traffic.

That's one hop. Now add lateral movement: a swarm topology where compromised nodes coordinate without a fixed C2. Add a Linux kernel exploited for namespace escape — language-agnostic, container-aware, with a blast radius that reaches the host. None of this is theoretical. The TTPs are documented and the tooling is commoditized.

But the real wildcard — and the one most organizations are still processing emotionally rather than operationally — is the LLM.

The democratization of offensive capability is unlike anything that came before it. Not because LLMs are magic, but because they collapse the knowledge gradient. Someone with a modicum of prompt engineering experience and the right tool scaffolding can now probe web application logic, enumerate dependency trees, surface CVEs in context, and chain exploits with an accuracy and iteration velocity that would have required years of specialized tradecraft even five years ago. The improvement rate is not plateauing. The asymmetry between offense and defense has widened at exactly the moment the attack surface became most complex.

And the most insidious vector of all? The semi-autonomous agent. An AI system with subdomain branching capability, calling client-side code, operating on behalf of a user — possibly injected with adversarial instructions embedded in content it ingested from an external source. Prompt injection at the agentic layer isn't a theoretical concern; it's an architectural reality that most identity and access frameworks weren't designed to handle, because those frameworks were built around human actors. Non-human identities — service accounts, agents, pipelines, LLM orchestrators — are the new shadow IT. They authenticate. They operate. They escalate. And they do all of it within the blast radius of whatever trust was provisioned to them at instantiation.

What ties all of this together is the failure mode at the core of the framework: the fall of authentication as a meaningful trust signal. When the actor is legitimate, the session is valid, the credentials are real, and the intent is adversarial — traditional detection has almost nothing to work with. This is why behavioral approaches, anomaly detection grounded in probabilistic baselines, and zero-trust architectures that enforce least-privilege continuously (not just at login) are no longer aspirational — they're the minimum viable posture.

The 2030 horizon isn't a planning horizon. It's already the present, arriving unevenly.

The organizations that will navigate it aren't the ones with the best perimeter tools. They're the ones that have internalized that trust is a liability to be minimized, not a feature to be extended — and that every identity, human or otherwise, is an attack surface until proven otherwise. The ones that have understood that the fundamental question has shifted to "What should be considered safe, secure — when and why?"