The global fixation on the Claude Mythos is a distraction. In the recent Mexico breach, in which multiple government agencies were compromised, spanning tax, electoral, and civil systems, a single operator used AI to run thousands of commands across hundreds of servers, turning raw government data into structured outputs in parallel. That is the signal. What matters is how AI is already being used to scale cyber operations: the barrier between a single actor and a team-level operation has effectively collapsed.

This extends well beyond cyber. In business environments, single operators using AI are now delivering in days what once took teams months. If your organisation cannot match that shift in execution speed, it is time to confront why.

But back to the Mexico breach. Attention has settled on the behaviour of a specific AI system and how it was prompted, as if the central issue were whether a model can be persuaded to produce certain outputs. That framing obscures what actually happened. The breach is significant because of how AI was used operationally, not because of how it was accessed.

Available reporting shows that a single individual conducted a multi‑agency intrusion using commercially available AI tools as part of the execution layer. Over approximately three weeks, the operator issued more than 12,000 discrete AI‑assisted commands across four government environments. The AI systems generated end-to-end exploitation chains, privilege-escalation scripts, and customised lateral-movement payloads, which the operator executed with minimal manual modification. Those same systems produced commands used directly against live environments and supported analysis of the data extracted from them, including triage of roughly 3 TB of exfiltrated logs and configuration data. In total, the activity spanned more than 400 compromised servers, generated more than 25,000 lines of AI‑authored attack code and tooling, and sustained a volume and tempo of commands and outputs that would ordinarily imply a large team‑based operation working in shifts.

What distinguishes this case is the integration of AI throughout the attack lifecycle. Each stage from discovery through to post-exfiltration analysis was linked through a continuous workflow in which outputs from one step informed the next with minimal delay. The operator did not need to pause to interpret results, write new tooling, or manually process large datasets. Those functions were delegated to AI systems operating in parallel.

This directly affects how operations scale. In traditional models, complexity imposes a coordination cost. Different stages of an operation require different skills, and synchronising those activities creates friction. That friction acts as a natural limiter on both speed and scope. In the Mexico breach, that constraint was materially reduced. AI absorbed a significant portion of the cognitive and technical workload, allowing one person to manage activities that would otherwise have required multiple specialists working in sequence or in parallel.

It is hard to avoid a sense of repetition here. The breach did not hinge on new or advanced vulnerabilities but on long-standing weaknesses that have persisted across systems for years. What has changed is not the nature of those issues, but the efficiency with which they can be exploited. AI compresses the time between discovery and impact.

A similar shift is visible in how data is handled once accessed. Large-scale exfiltration has historically slowed attackers down, not because gaining access is difficult, but because most organisational data is messy, poorly structured, and time-consuming to turn into something usable. In this case, AI systems were used to impose structure on raw datasets during the operation itself, reducing the lag between compromise and exploitation even in environments where the underlying data quality would normally slow progress.

While the breach is situated within a cybersecurity context, the underlying pattern is not limited to that domain. The operational model on display is one in which complex, multi-step processes are orchestrated through AI systems that can generate actions, interpret results, and feed outputs back into the workflow. This pattern is equally applicable in other areas that depend on large-scale data processing and iterative decision-making, including statistics and data analytics, intelligence analysis, financial operations, and logistics. The common factor is not the specific task but the workflow structure.
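To make the workflow structure concrete in a benign setting, the following is a minimal sketch of the generate, interpret, feed-back loop applied to log triage. The function names (`ai_structure`, `next_action`) are illustrative, and both are simple rule-based stand-ins for the model calls an operator would actually delegate to; the point is the shape of the pipeline, in which each stage's output feeds the next with no manual handoff.

```python
# Illustrative sketch only: `ai_structure` and `next_action` are stand-ins
# for AI steps, implemented here as rule-based stubs so the example runs
# without any external model or API.

def ai_structure(raw_line: str) -> dict:
    """Stand-in for an AI step that imposes structure on a messy log line."""
    parts = raw_line.strip().split()
    return {
        "timestamp": parts[0],
        "level": parts[1],
        "message": " ".join(parts[2:]),
    }

def next_action(record: dict) -> str:
    """Stand-in for an AI step that interprets a result and picks the next step."""
    return "escalate" if record["level"] == "ERROR" else "archive"

raw_logs = [
    "2024-01-01T00:00:01 INFO service started",
    "2024-01-01T00:00:05 ERROR disk quota exceeded",
]

# Continuous loop: structure the raw data, interpret it, act on the
# interpretation, and move straight to the next item.
for line in raw_logs:
    record = ai_structure(line)   # impose structure on raw input
    action = next_action(record)  # interpret the structured output
    print(action, record["message"])
```

Swapping the stubs for real model calls changes nothing about the structure, which is why the same pattern transfers to analytics, intelligence, finance, or logistics: the workflow, not the task, is the common factor.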

This points to a broader shift in the economics of operations. Historically, the scale and complexity of an activity were closely tied to the number of skilled people required to execute it. AI weakens that relationship by allowing a single operator to extend their effective capacity across multiple functions simultaneously. The result is a change in who can undertake complex operations and at what cost. Activities that once required coordination and resources become accessible to smaller actors with the right tools and intent.

As this shift unfolds, large organisations are slowed by their own complexity: legacy systems, governance layers, and internal friction make meaningful AI adoption uneven and slow. Smaller organisations, with fewer constraints, can move quickly and scale through AI in ways previously impossible. That erodes one of the oldest advantages in business: size no longer guarantees capability.

For the New Zealand public service, this cuts against a deeply held instinct to look to large institutions for direction. In a landscape where capability scales with speed and integration, the organisations setting the pace are unlikely to be the largest.