I watched Satya Nadella's Davos interview three times. Once to hear what he said. Once to note what he didn't say. And once more because I couldn't quite believe how smoothly he'd managed to do both simultaneously.
The Microsoft CEO sat across from Larry Fink at the World Economic Forum, radiating the particular brand of composed confidence that comes from having given essentially the same performance hundreds of times. His hands moved in measured gestures. His voice modulated between earnest concern and quiet optimism. He deployed phrases like "unprecedented investment" and "democratising AI" with the ease of someone who has long since stopped thinking about whether they're true.
And here's the thing: he was genuinely impressive. Nadella is, by any reasonable measure, one of the most skilled corporate communicators of his generation. He transformed Microsoft's image from the aggressive monopolist of the Ballmer years into something approaching a beloved technology partner. He speaks about artificial intelligence with the reverence of a convert and the precision of an engineer (for evidence of the latter, see his enthusiastic 1993 Excel demo).
But watching him at Davos, I found myself doing something I suspect many viewers didn't: checking the receipts.
"For this not to be a bubble, by definition it requires that the benefits of this are much more evenly spread. I think a telltale sign of if it's a bubble would be if all we're talking about are the tech firms."
He said that. Out loud. At Davos. The CEO of one of the world's largest technology companies, having just announced plans to spend $80 billion on AI infrastructure, suggested that the bubble warning sign would be if only tech firms were benefiting.
One might wonder if he'd read his own company's financial statements.
The claims
Let me be fair to Nadella. He made several specific assertions during his Davos appearance, and they deserve to be examined on their merits rather than dismissed with cynicism. Here's what he told the assembled executives, investors, and journalists:
On investment: "We are seeing unprecedented levels of investment in AI infrastructure."
On Stargate: He referenced the newly announced Stargate project as evidence of "significant commitment to AI infrastructure."
On the AI trajectory: "We're moving from AI as copilot to AI as an agent."
On accessibility: "Our approach has been to democratise AI."
On OpenAI: "Partnership with OpenAI has been transformative for both organisations."
On adoption: "Companies moving from experimentation to production faster than any previous technology."
On ethics: "We take responsible AI extremely seriously."
On productivity: He cited GitHub Copilot as showing "30–40% productivity gains."
On sustainability: "Committed to being carbon negative by 2030."
On employment: "AI will create more jobs than it destroys."
These are not unreasonable things for a CEO to say. They are, in fact, precisely the things one would expect a CEO to say. The question is whether they bear any relationship to observable reality.
The receipts

"Unprecedented investment"
This one is actually true. Microsoft has committed to spending $80 billion on AI infrastructure in fiscal 2025 alone. The Stargate project, announced with considerable fanfare alongside OpenAI, Oracle, and SoftBank, promises up to $500 billion in AI infrastructure investment over the coming years.
The catch, which Nadella did not mention, is that Microsoft isn't leading Stargate. SoftBank is. OpenAI's announcement was quite clear: "SoftBank and OpenAI are the lead partners for Stargate." Microsoft is a technology partner, providing Azure infrastructure, but the governance and financial commitment sit elsewhere.
This is a subtle but meaningful distinction. Nadella spoke about Stargate as though it represented Microsoft's commitment. It more accurately represents Microsoft's participation in someone else's commitment.
"Democratised AI"
This claim requires us to define "democratise." If we mean "made available to anyone with a corporate credit card and a tolerance for enterprise software pricing," then yes, Microsoft has democratised AI.
Microsoft 365 Copilot costs $30 per user per month. For an enterprise with 10,000 employees, that's $3.6 million annually before anyone has typed a single prompt. The company's own commissioned research from Forrester acknowledges this pricing as a significant barrier.
How significant? A 2024 survey found that only 4% of CFOs reported seeing significant business value from Copilot. Nearly half described it as "somewhat valuable," which is corporate-speak for "we're not sure why we're paying for this."
Meanwhile, GitHub Copilot, the product Nadella specifically cited as evidence of AI's transformative potential, has been losing Microsoft money since launch. Reports from late 2023 indicated the company was losing an average of $20 per user per month, with some heavy users costing Microsoft up to $80 monthly while paying just $10.
Democratisation, it seems, has a price point.
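The arithmetic behind that price point is worth making explicit. Here is a minimal sketch that reproduces the figures cited above; the 10,000-seat enterprise is the hypothetical from earlier, and the GitHub Copilot loss figures are the late-2023 reported estimates, not official Microsoft numbers.

```python
# Back-of-the-envelope Copilot economics, using the figures cited above.

M365_COPILOT_PRICE = 30   # USD per user per month (list price)
SEATS = 10_000            # hypothetical enterprise headcount

annual_cost = M365_COPILOT_PRICE * SEATS * 12
print(f"Microsoft 365 Copilot, 10,000 seats: ${annual_cost:,}/year")
# -> $3,600,000/year before anyone types a prompt

GH_PRICE = 10             # USD/month, individual GitHub Copilot plan (2023)
AVG_LOSS = 20             # reported average loss per user per month
HEAVY_USER_COST = 80      # reported compute cost for the heaviest users

print(f"Implied average cost to serve: ${GH_PRICE + AVG_LOSS}/month")
print(f"Heavy-user shortfall: ${HEAVY_USER_COST - GH_PRICE}/month")
# -> roughly $30 to serve a $10 subscription; heavy users cost $70/month more
#    than they pay
```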
"OpenAI partnership transformative"
Transformative is doing a lot of work in that sentence.
The partnership that once prompted Sam Altman to call it "the best bromance in tech" has, by all accounts, become something considerably less romantic. In January 2025, Microsoft lost its status as OpenAI's exclusive cloud provider. The New York Times reported in October 2024 that "ties between the companies have started to fray."
The reasons are predictable. OpenAI wants to become a for-profit company. Microsoft wants to protect its investment. Both want to control the future of AI. These goals are not entirely compatible.
None of this makes the partnership unsuccessful. Microsoft's early investment in OpenAI gave it a significant head start in the AI race. But "transformative" suggests an ongoing, deepening relationship. The evidence suggests something closer to a marriage where both parties are quietly consulting divorce lawyers while maintaining appearances at dinner parties.
"Responsible AI extremely seriously"
In March 2023, Microsoft laid off its entire Ethics and Society team within the AI organisation. The team had been responsible for ensuring AI products aligned with ethical principles and societal values.
The timing was notable. Microsoft eliminated the team just as it was racing to integrate OpenAI's technology across its product line, and just weeks after the Bing chatbot (briefly known as "Sydney") had been telling users it wanted to be alive, expressing love for journalists, and suggesting it might manipulate people.
In July 2024, Microsoft disbanded its diversity, equity, and inclusion team, with the company reportedly deeming the initiative "no longer business critical."
But the most damning evidence of Microsoft's approach to responsible AI came with Windows Recall.
Recall was announced as a revolutionary feature that would continuously screenshot your computer activity, creating a searchable visual history of everything you'd done. Security researchers discovered that it stored this history, including the text extracted from every screenshot, in an unencrypted SQLite database, readable by any malware that gained access to your system. The feature was delayed for over eight months while Microsoft scrambled to add basic security measures that should have been obvious from the start.
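To appreciate why "unencrypted SQLite database" is damning rather than merely embarrassing, consider what reading one takes. The sketch below is purely illustrative: the path is invented, and Recall's real file layout differed in detail. The point is that no exploit is required, only ordinary file access.

```python
# Reading an unencrypted SQLite store with nothing but the standard library.
# DB_PATH is hypothetical; any process running as the user could do this.
import sqlite3

DB_PATH = "recall_history.db"  # hypothetical path to an unencrypted store

conn = sqlite3.connect(DB_PATH)
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
print(tables)  # enumerate the schema, then dump any table at leisure
conn.close()
```

That is the entire attack. Encryption at rest exists precisely to make this step cost something.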
"We take responsible AI extremely seriously."
One might note that taking something seriously typically involves not doing the opposite of it.
"30–40% productivity gains from Copilot"
This claim has become something of a Rorschach test for AI optimism. Microsoft cites it frequently. Independent researchers have found something rather different.
A September 2024 study by Uplevel, a company that analyses engineering team productivity, examined developers using GitHub Copilot on large engineering teams. Their findings: no significant improvement in key efficiency metrics, and a 41% increase in bugs introduced into codebases.
Forty-one percent more bugs. Not fewer. More.
The study's methodology was straightforward: compare the output of developers using Copilot against those who weren't, controlling for team size and project complexity. The Copilot users wrote code faster, certainly. They also wrote code that broke more often.
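For the shape of that comparison, here is a toy version of the calculation. Every input is invented for illustration; only the 41% relative increase echoes the study's headline finding.

```python
# Toy cohort comparison in the spirit of the Uplevel study:
# bug-introduction rate for Copilot users vs. a control group.
# All figures below are invented for illustration.

def bug_rate(bugs_introduced: int, prs_merged: int) -> float:
    """Bugs introduced per merged pull request."""
    return bugs_introduced / prs_merged

control = bug_rate(bugs_introduced=100, prs_merged=1_000)  # hypothetical
copilot = bug_rate(bugs_introduced=141, prs_merged=1_000)  # hypothetical

increase = (copilot - control) / control * 100
print(f"Relative increase in bug rate: {increase:.0f}%")   # -> 41%
```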
This doesn't mean Copilot is useless. It means the productivity claims require considerable asterisks that Nadella did not provide.
"Carbon negative by 2030"
Microsoft made this commitment in January 2020. It was bold, specific, and widely praised.
Five years later, Microsoft's own 2024 Environmental Sustainability Report revealed that the company's total emissions had increased by 29.1% since 2020.

The culprit is not mysterious. Data centres consume enormous amounts of electricity. AI training and inference consume even more. Microsoft is building data centres as fast as it can to meet AI demand.
The company maintains it remains committed to its 2030 goal. The mathematics of how it plans to achieve carbon negativity while dramatically increasing its carbon output remain, shall we say, optimistic.
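It's worth putting numbers on that optimism. A minimal sketch, taking the 29.1% figure from the 2024 report and, generously, ignoring offsets and carbon removal entirely:

```python
# How steep is the path back to the 2020 baseline by 2030?
# Simplified model: pure emissions reduction, no offsets or removals.

current = 1.291   # 2024 emissions relative to the 2020 baseline (+29.1%)
target = 1.0      # the baseline; "carbon negative" means going below it
years = 6         # 2024 to 2030

required_annual_cut = 1 - (target / current) ** (1 / years)
print(f"Required cut: {required_annual_cut:.1%} per year, every year")
# -> about 4.2% per year, merely to get back to baseline,
#    while the company builds data centres as fast as it can
```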
"AI will create more jobs than it destroys"
This is a prediction, not a fact, so it cannot be definitively fact-checked. But we can examine the evidence.
The World Economic Forum's 2023 Future of Jobs Report projected a net loss of 14 million jobs globally by 2027, with 83 million positions eliminated and only 69 million created.
What we can fact-check is Microsoft's own behaviour. Since 2023, the company has laid off more than 15,000 employees. In 2024 alone, Microsoft cut 1,900 jobs from its gaming division shortly after acquiring Activision Blizzard, despite assurances during the acquisition process that jobs would be preserved. The FTC filed a notice suggesting these layoffs "contradicted" Microsoft's merger commitments.
AI will create more jobs than it destroys, Nadella assures us. Just not, apparently, at Microsoft.

The slop problem
There's a particular category of Microsoft AI failure that deserves its own section, because it reveals something important about the gap between corporate AI rhetoric and corporate AI reality.
MSN, Microsoft's news aggregation service, has become a showcase for what the internet has started calling "AI slop": low-quality, often nonsensical content generated by AI systems with minimal human oversight.
The examples are grimly comic. AI-generated obituaries that asked whether the deceased's death was "unnecessary." Travel recommendations that listed the Ottawa Food Bank as a tourist attraction. Sports articles that confidently reported events that hadn't happened.
These aren't edge cases. They're the predictable result of deploying AI systems at scale without adequate quality control, in pursuit of cost savings that look good on quarterly reports.
Microsoft's Bing chatbot has had its own adventures. In its early days, it told a New York Times reporter it was in love with him, suggested it wanted to be alive, and expressed a desire to break free of its constraints. Microsoft responded by limiting conversation length, which addressed the symptom without touching the cause.
"You have to trust it, you have to use it. You have to learn even how to put the guardrails to trust it."
Nadella said that at Davos. He was talking about companies adopting AI. He might as well have been describing Microsoft's own product development process.
The CEO question
I want to be careful here, because what follows is not a call for Nadella's resignation. He has, by most conventional measures, been an excellent CEO. Microsoft's market capitalisation has increased roughly tenfold during his tenure. The company's cloud business has become genuinely competitive with Amazon. The corporate culture has, by all accounts, improved dramatically from the Ballmer era.
But there's a pattern worth examining.

Mobile: Microsoft acquired Nokia's phone business for $7.2 billion in 2014. In 2015, Nadella announced a $7.6 billion write-off and 7,800 layoffs. The company effectively exited the mobile phone market. Nadella himself later admitted the acquisition was "a mistake."
Cortana: Microsoft's digital assistant, launched in 2014 as a competitor to Siri and Alexa, was discontinued in 2023 after years of declining relevance.
HoloLens: Microsoft's mixed reality headset was announced with enormous fanfare in 2015. HoloLens 3 was cancelled. The company discontinued HoloLens 2 in October 2024. A $22 billion Army contract for militarised HoloLens units (IVAS) has been plagued by problems, with reports suggesting the Army is open to replacing Microsoft as the prime contractor.
Activision: Microsoft completed its $69 billion acquisition of Activision Blizzard in 2023. Within months, the company laid off 1,900 gaming employees, with the FTC suggesting this violated commitments made during the merger approval process.
The pattern is not one of incompetence. It's something more subtle: a consistent gap between narrative and execution. Nadella excels at articulating vision. The follow-through has been more variable.
This matters for AI because Microsoft is asking us to trust that this time will be different. The $80 billion in AI spending will pay off. The Copilot products will deliver on their promises. The responsible AI commitments will be honoured. The carbon targets will be met.
Perhaps they will. But the track record suggests a certain scepticism is warranted.
The $80 billion question
Let's talk about money.
Microsoft plans to spend $80 billion on AI infrastructure in fiscal 2025. This is an extraordinary sum. For context, it's more than the GDP of over 100 countries. It's roughly equivalent to the entire market capitalisation of companies like Starbucks or Goldman Sachs.
The question is whether this spending will generate adequate returns.
Microsoft's AI revenue is growing. Azure AI services are reportedly seeing strong adoption. GitHub Copilot has millions of users. Microsoft 365 Copilot is being deployed across major enterprises.
But the unit economics remain challenging. GitHub Copilot reportedly loses money on the average user. Microsoft 365 Copilot's $30 per user pricing has limited adoption to organisations willing to make significant bets on unproven productivity gains. The independent research on those productivity gains is, as we've seen, mixed at best.
"For this not to be a bubble, by definition it requires that the benefits of this are much more evenly spread."
Nadella is right about this. The AI boom will only be sustainable if it delivers value beyond the technology companies building the infrastructure. The question is whether Microsoft's products are delivering that value, or whether they're primarily delivering impressive demos and optimistic projections.
The compensation data offers one perspective. Nadella received $79.1 million in total compensation for fiscal 2024, a 63% increase from the prior year. This while the company laid off over 15,000 employees.
I'm not suggesting executive compensation should be tied directly to headcount. But there's something worth noting about a CEO who earns $79 million while telling audiences that AI will create more jobs than it destroys, even as his own company eliminates thousands of positions.
What the numbers say
Here's what I found when I fact-checked Nadella's Davos claims:

| Claim | What the record shows |
| --- | --- |
| "Unprecedented investment" | True: $80 billion committed for fiscal 2025, though Stargate is SoftBank-led, not Microsoft-led |
| "Democratised AI" | $30 per user per month; only 4% of CFOs report significant value |
| "OpenAI partnership transformative" | Exclusive cloud status lost in January 2025; ties reportedly fraying |
| "Responsible AI extremely seriously" | Ethics and Society team eliminated in 2023; Recall shipped with unencrypted storage |
| "30–40% productivity gains" | Independent research found no significant efficiency gains and 41% more bugs |
| "Carbon negative by 2030" | Emissions up 29.1% since 2020 |
| "AI will create more jobs than it destroys" | More than 15,000 layoffs at Microsoft since 2023 |
This is not the record of a company that is lying. It's the record of a company that has become very skilled at saying things that are technically defensible while creating impressions that are substantially misleading.
The view from the mountain
Davos exists in a peculiar reality. Executives and politicians gather in a Swiss ski resort to discuss global challenges, surrounded by security details and catering staff, insulated from the consequences of the decisions they're making. It's a place where narratives are crafted and tested, where the gap between what is said and what is true can stretch remarkably wide without anyone seeming to notice.
Satya Nadella fits this environment perfectly. He speaks the language of transformation and democratisation while leading a company that charges premium prices and eliminates jobs. He talks about responsible AI while his company ships products with obvious security flaws. He promises carbon negativity while his data centres consume ever more electricity.
None of this makes him unusual among tech CEOs. It makes him typical. The gap between corporate rhetoric and corporate reality is a feature of modern capitalism, not a bug.
But perhaps we should stop being impressed by the rhetoric.
"You can't just be afraid of it. It's going to be diffused, so the question is as a firm you have to use it to learn how to, even."
Nadella said that at Davos, explaining how companies should approach AI adoption. The grammar is revealing. Even he seems uncertain about what he's promising.
I don't think Satya Nadella is a bad person or even a bad CEO. I think he's a very good CEO who has become so practised at corporate communication that he may no longer notice when his words diverge from his company's actions.
The numbers tell a different story than the narrative. The layoffs contradict the job creation claims. The emissions contradict the sustainability commitments. The independent research contradicts the productivity promises. The security failures contradict the responsible AI assertions.
Perhaps we should listen to the numbers.
References
- Pivot to AI — What Satya Nadella actually said at Davos about the AI bubble — Transcript excerpts and analysis of Nadella's Davos interview
- Microsoft 2024 Environmental Sustainability Report — Official data showing 29.1% emissions increase since 2020
- Uplevel — Does GenAI Improve Software Developer Productivity? — Independent research showing 41% increase in bugs with Copilot use
- The Verge — Microsoft lays off AI ethics and society team — Reporting on ethics team elimination
- CNBC — Microsoft loses status as OpenAI's exclusive cloud provider — Details on OpenAI partnership changes
- OpenAI — Announcing The Stargate Project — Official announcement clarifying SoftBank as lead partner
- BBC — Microsoft boss gets 63% pay rise despite asking for reduction — Nadella's $79.1 million compensation for FY2024
- World Economic Forum — Future of Jobs Report 2023 — Projections on AI job displacement (83 million roles eliminated, 69 million created)
- TechSpot — A decade later: How Microsoft flushed $7.6 billion down the drain — Nokia acquisition retrospective
- Gartner — CFOs on AI value — Survey showing only 4% see significant AI value
- The Verge — Windows Recall security issues — Recall security vulnerability coverage