Or: The only time you can build an army is when you don't need one. Two thousand years later, enterprises still have not learned the lesson. Time is running out.

Military doctrine thinks in three postures. Peacetime, where you train, provision, and build capability that will not be tested for years. Crisis, where that capability meets its first real load — surge, adversary probing, civilian confusion, supply chain bending under shapes it was not drawn for. And conflict, where capability either exists or does not, and how you could have built it differently is now academic.

Doctrine knows what corporate IT has forgotten: capability exists before the posture that tests it, or it does not exist at all. You do not conjure an army the month before the war. You do not conjure clean data the month before the board wants agents everywhere. You do not conjure a secure platform the month after the first breach. Capability is a compounding debt paid down only in peacetime, and peacetime is the only period in which nobody will thank you for paying. It is a shame we have stopped reading the doctrine we keep writing down.

The commoditisation of AI ended enterprise IT's peacetime. The crisis posture is already active. Most leadership has not noticed, because the pressure has not arrived as an outage. It has arrived as a shift in load shape — and a shift in load shape is how armies lose without ever seeing the enemy.

The crisis posture has already started

To see the change under way, look at the numbers. Cloudflare's Q1 2026 data puts bots at 31.2% of all HTTP requests globally, up from 20% before generative AI. In the same window, the underlying pie also grew: Cloudflare's network went from ~63 million requests per second in 2024 to 81 million in Q1 2026, and global internet traffic itself grew 17.2% in 2024 and 19% in 2025. Both the slice and the pie expanded, which means bot volume in absolute terms roughly doubled. AI crawlers account for 22% of all bot traffic. Combined with AI-driven search, AI activity is 27.5% of all bot traffic. Applebot alone grew 140% in a single month. Matthew Prince, Cloudflare's CEO, said in March that bots will exceed human traffic in 2027. That is next year.

Read that the way a logistics officer reads a supply chain report. The mix has changed. The consumers of your system are no longer the consumers you architected for. They do not behave like humans, do not rate-limit out of courtesy, do not abandon slow pages, do not forget and try tomorrow. They retry in a loop until something returns a 200, and when they get a 200, something else fires a thousand parallel requests at the same endpoint.
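The arithmetic behind that retry behaviour is worth making explicit. A minimal sketch, with illustrative numbers not taken from the article: a naive retry-until-200 client makes an expected 1 / (1 - p) attempts per successful request at failure probability p, so a struggling backend watches its own clients amplify the struggle.

```python
# Illustrative sketch: how retry-until-200 loops multiply load.
# A client that retries immediately on any failure makes, on average,
# 1 / (1 - p) attempts per success when each attempt fails with
# probability p (geometric distribution). All numbers are hypothetical.

def expected_attempts(failure_rate: float) -> float:
    """Expected attempts per successful request for a naive retry loop."""
    return 1.0 / (1.0 - failure_rate)

for p in (0.1, 0.5, 0.9):
    print(f"{int(p * 100)}% failures -> {expected_attempts(p):.1f}x request volume")
```

A backend failing 90% of requests therefore receives ten times its nominal load from retries alone, which is exactly the feedback loop a human user, who gives up and tries tomorrow, never creates.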

The Model Context Protocol, the standard Anthropic released in November 2024 and handed to the Linux Foundation a year later, went from 100,000 downloads in its first month to 97 million monthly SDK downloads by early 2026. That is the connective tissue of the agent economy, already between you and your customers even if you have never deployed it, because your customers are using tools that use it to reach you.

Agents are not the same customers you had last quarter. An agent on behalf of a real person does not browse your pricing page. It hits your pricing API, or it scrapes the page once and never returns. It does not tolerate a login flow designed to convert a browsing human. It gives up on your platform in favour of the one down the road with cleaner documentation, structured endpoints, and rate limits that were written down rather than discovered on impact. The customer arrives through a proxy, the proxy makes a silent decision in milliseconds, and you never find out why the human never converted.

Agents do not care about your churn loops, your retention mechanics, your nurture flows. The entire premise of what it means to be a customer is being rewritten. This is an extinction-level event for every customer-experience model built on the assumption that the buyer is the one making the decision.

This is the load on your wires today, not a forecast for a future board meeting, and it compounds from here. Ten times more next year. A hundred times more the year after. A thousand times more inside the decade. The curve shows no sign of bending.

Four ways the unprepared force loses

An IT foundation not built in peacetime fails on four fronts. Each is a distinct species of loss, and each rests on a different piece of the foundation.

The first front is the one everyone anticipates: the system cannot hold the line. It does not scale. Throughput collapses under a load shape it was never benchmarked for, and every remediation is emergency work that digs the debt deeper. The architecture was sized for last year's users, not for the agent-multiplied descendants of those users. The failure hits whatever is holding the line. Machines collapse under load they were never sized for. Humans collapse faster — and the organisations most exposed are the ones whose 'system' was already mostly human to begin with.

The second front is quieter and more expensive: the system scales, and it bleeds out through the supply chain. This is not a problem reserved for companies running AI. It is a problem for every company running cloud. A poorly designed cloud system meeting 1000x legitimate agent traffic produces a provider invoice that can do damage faster than a finance team can react. Think of it as a distributed financial attack, except the traffic is not malicious: it is real customers, with real agents, behaving exactly as designed. Smaller, less mature products will be hit even harder.

Inference eats 60–80% of operating expenditure at AI-first companies. 79% of enterprises deploying agents are hitting infrastructure cost walls. Unoptimised inference exceeds $100,000 per month once queries reach ten million. The platform stays up, the CFO starts asking what changed, and the answer is that the unit economics quietly inverted. A cost-per-user stable for a decade is now a cost-per-user-plus-their-three-agents, and the three agents retry harder than the human ever did.

The third front looks like a strategic failure and reads like a customer-experience failure. Your platform does not serve customers arriving through a proxy. An agent booking a hotel, reconciling an invoice, triaging a support ticket — each one judges your interface in a way no human focus group ever would. Undocumented APIs, unscriptable auth, session-bound state that assumes a browser — the agent routes around you. The human never knows it was your platform that failed. They only know that the agent recommended a different supplier.

The fourth front is the one military planners think about most and private-sector leadership thinks about least: you were already too brittle to manoeuvre, and the new load makes manoeuvring impossible. Agility is a structural property of systems whose foundations allow them to change shape without shattering, never a cultural virtue declared into existence at an offsite. Brittle systems cannot be secured, instrumented, or adapted quickly, and under new load they fail in ways that were not on any threat model because the threat model assumed yesterday's topology.

Each of these four is a distinct class of foundation debt. Scale is a data and architecture debt. Cost is a unit-economics and observability debt. Customer-by-proxy is an API and identity debt. Fragility is a security and operational-discipline debt. No single investment closes all four.

Why "just move faster" is the instinct that loses the war

Most leadership reads the agent headlines and arrives at the same conclusion: we need to go faster. The conclusion is correct. The instinct behind it is what kills them.

Professionals are fast because they were slow first. This is not a motivational poster. It is the observed operational truth of every high-performance community that has survived contact with the real world.

Roman strategists named it in two words: Festina Lente. Make haste slowly. Augustus took it as a personal motto. Lao Tzu had already phrased it centuries earlier: "nature does not hurry, yet everything is accomplished". Navy SEAL instructors rebuilt the same idea from first principles and compressed it into "slow is smooth, smooth is fast". Operators walk through the mission at quarter speed until the movement becomes a shape in the body. The speed emerges from the shape. Smoothness is what is fast; velocity without smoothness is panic.

A Formula 1 pit crew works the same way. The sub-two-second tyre change is the outcome. The foundation is months of choreography rehearsed at walking pace until execution is muscle memory. No pit crew has ever hit two seconds by deciding to move faster on race day. The two seconds is the residue of having been slow, deliberately, for a long time.

Construction is the civilian cousin. A twelve-storey building goes up in three months because the foundation was poured, cured, and inspected over the preceding eighteen. Skip the inspection, and the building still goes up in three months. It just does not stay up.

What a foundation actually is

"Foundation" has become a word everybody nods at and nobody can define. If you cannot define it, you cannot fund it. And if you cannot fund it, 2027 is where you find out what you had.

A foundation is whatever has to be true in advance for the capability you want to be possible at all. It is context-specific, not a universal checklist. The diagnostic is older than any of the technology: start from the capability you want and ask, what must be true for this to work? Keep asking until the answers are things that are either true today or not, at the actual scale you will face. The answers that are not true today are your foundation work. If you cannot build them before the capability needs them, the capability cannot be built. Not "harder to build." Cannot be built. That distinction is the line between planning and pretending. When some vibe AI consultant tells you they can build it in two weeks, they may be right about the façade. They are never right about the foundation.

The sharpest measurement for the crisis posture is cost-per-user. Not cost-per-request, not cost-per-inference-call, nothing narrow and detached from the person being served: cost-per-user, computed honestly, including every cost your platform incurs on behalf of a customer, staff costs and other overheads included. You almost certainly have the number today. You may not have it under the shape of load you are twelve months away from. That projection is the diagnostic. Hold your current cost-per-user against the cost-per-user of a customer whose agents make ten, a hundred, a thousand times the requests the human would. The relationship between the two numbers tells you which of the four fronts will break you first.

If the projected cost-per-user stays flat, you poured concrete at some point. If it grows linearly, you have an infrastructure-cost problem you can engineer around. If it grows super-linearly, the architecture cannot survive the load class already on the wire, and no amount of cleverness will save it. Most teams have never computed this number honestly because honest computation is one of the foundation disciplines — and it has not been built yet either.
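That projection can be sketched in a few lines. Every figure below is a hypothetical placeholder; a real computation uses your own billing data, fully loaded with infrastructure, inference, egress, and staff overheads.

```python
# Projected cost-per-user under agent-multiplied load.
# All inputs are hypothetical placeholders for illustration only.

def projected_cost_per_user(fixed_monthly: float, cost_per_request: float,
                            requests_per_human: int, agent_multiplier: int,
                            users: int) -> float:
    """Fully loaded monthly cost per user when each human's agents
    multiply that human's request volume by agent_multiplier."""
    requests = users * requests_per_human * agent_multiplier
    return (fixed_monthly + requests * cost_per_request) / users

today = projected_cost_per_user(50_000, 0.002, 300, 1, 10_000)
print(f"today: ${today:.2f}/user")
for m in (10, 100, 1000):
    c = projected_cost_per_user(50_000, 0.002, 300, m, 10_000)
    print(f"x{m} agents: ${c:.2f}/user")
```

Note that this linear model captures only the best case. The super-linear failure described above comes from the terms a spreadsheet misses: retry storms, cache misses, and per-connection overhead that grow faster than the raw request count.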

Beneath the measurement sits a short practitioner's list of foundation classes. Not a checklist — a map. Every capability a modern organisation wants to build sits on some combination of these seven.

Data quality and access. Can the systems that need data get clean, versioned, contextualised data in the format and shape they need, without a six-month request cycle? Are caching and deduplication handled at the platform layer rather than rebuilt by every team that hits the same wall? The Monday test is to pick one high-value question the business has been asking for a year, and time how long it takes to get an honest answer. If the answer is weeks, the foundation is not poured.

Identity. Can a human, an employee, an agent acting on behalf of a human, and an internal service be told apart by your platform without manual ceremony? The Monday test is to try to authenticate an agent acting on behalf of a real customer, end-to-end, against your own platform. Note how many steps are undocumented, bespoke, or impossible.

API surface. Does every capability your platform exposes to humans also exist as a versioned, stable, documented API that an agent could use without scraping a rendered page? Giving agents a proper machine-to-machine interface is how you control them — rate limits, auth scopes, audit trails, deprecation signals are all cheaper to enforce on an API than on a screen scraper pretending to be a browser. The Monday test is to hand your documentation to a technical reader who has never seen your product and ask them to integrate a single flow in an afternoon. The afternoon does not need to succeed. The failure mode is the diagnostic.
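To make "cheaper to enforce on an API" concrete, here is a minimal token-bucket limiter of the kind an agent can read and respect. The header names follow the draft IETF RateLimit-header conventions; everything else is an illustrative sketch, not a production limiter.

```python
# Sketch of a token-bucket rate limiter with advisory headers.
# An agent hitting a documented API can read these headers and back off;
# a scraper pretending to be a browser gets no such signal.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Return (allowed, headers) for one incoming request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        allowed = self.tokens >= 1
        if allowed:
            self.tokens -= 1
        return allowed, {
            "RateLimit-Limit": str(self.capacity),
            "RateLimit-Remaining": str(int(self.tokens)),
        }

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow()[0] for _ in range(7)]
print(results)  # first five allowed, the rest throttled until tokens refill
```

The same discipline applied to a screen-scraping client requires fingerprinting, CAPTCHAs, and guesswork; on an API it is a dictionary of headers and a counter.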

Observability and unit-cost. Can you, today, answer the question what did the last hour cost, and why? with better than folklore? Are your monthly costs actually allocated to the users who caused them, or does every invoice arrive as a single line item with no one accountable? The Monday test is to pull your egress, your inference, and your infrastructure costs at hourly granularity for the last fourteen days and correlate them with load. If the answer involves spreadsheets reconstructed by hand, the foundation is not poured.

Agent-readiness. Does your platform assume that the thing on the other end of the connection is a browser and a mouse, or does it assume the thing on the other end might be a machine? The Monday test is to pick one customer journey and have a capable LLM attempt to complete it against your platform, with no human in the loop. Where the LLM dies is where the gap is.

Security posture. Can your platform be secured quickly under pressure? The Monday test is to take the most recent credible threat-model update and ask your security lead how long it would take to deploy a mitigation to production. The unit of the answer matters more than the number; if the unit is quarters, the foundation is not poured.

Egress discipline. Is anyone accountable for the cost of data leaving your systems, and does anyone on your team know what that number was yesterday? The Monday test is to name the person who would know by tomorrow morning if egress doubled overnight. If there is no person, or the person is a dashboard nobody reads, the foundation is not poured.

These seven are the classes, not the work. Any one of them can absorb years of peacetime investment. All of them have to exist in some form for the crisis posture to be survivable, and none of them can be conjured during the crisis itself. I cover this at length in my book about digital transformation, The Gap, which names this pattern the Invisible Decade and documents how consistently it decides the outcome of transformation programmes before anyone has signed a single contract.

The Invisible Decade, now compressed

Every digital capability that seems to have appeared overnight at a competitor was built on a decade of invisible work nobody photographed. The data standardisation nobody celebrated in 2016. The process harmonisation nobody put in the annual report in 2018. The API surface someone insisted on in 2020 and was mocked for. The observability discipline that got funded in 2022 because a single engineer refused to let it be cut. I know these because I have been that person, and still am today.

The MIT NANDA initiative's State of AI in Business 2025 put the number on it: 95% of enterprise GenAI pilots fail to scale into production, and the diagnosis is not model quality — it is enterprise integration. RAND's broader analysis puts the failure rate at 80.3% across the field, most of it concentrated in integration, data, and operational discipline. The 5% that work were built on foundations that were already there. They are not ahead because they are cleverer. They are ahead because they were slower earlier.

The crisis posture compresses the Invisible Decade. Competitors who spent 2015–2025 doing the unglamorous work will appear, in 2026 and 2027, to have sprinted to an AI capability that is ten years in the making. Leadership teams that did not do that work will see the sprint and conclude they can match it in eighteen months. They cannot. Nobody can. What they can do, in the next eighteen months, is start the foundation work so the decade after is a catch-up — or defer it, so the decade after is a collapse. There is no third option.

When you read online that AI is causing unemployment, it is rarely because AI is replacing the people doing the work. It is because organisations are collapsing under the weight of missing foundations. The explanations will blame other causes, of course, but now you know the truth.

Peacetime is over

Doctrine does not change when the uniform comes off. Armies are built in peacetime. Platforms are built in peacetime. The military has never been allowed to forget which posture it is in. The civilian side has been comfortable pretending the posture it is in is the posture it would like to be in.

The commoditisation of AI removed that comfort. The crisis posture is already on. The conflict posture — bot traffic exceeding human traffic, every customer arriving through a proxy, unit economics decided at inference time rather than pricing time — is inside the planning horizon of any board not lying to itself.

Most corporate automation is an Excel file, a human rekeying it, and a second human reconciling the first human's mistakes. AI layered on top of that does not replace those humans. It multiplies the work they are already too slow to do.

The next eighteen months decide the posture you will be in when conflict arrives. Not the investment, the posture. Money buys speed. It never buys skipped steps. The foundation gets poured on your timeline, or it gets poured on someone else's, with your customers standing on theirs.

Start slow. It is the only way you get to be fast.