Anthropic Just Sued the Pentagon. OpenAI's Robotics Chief Quit. And 1,184 AI Agent Skills Were Malware.
This was the week AI stopped being a tech story and became a political thriller. Lawsuits, resignations, supply chain attacks, and the question nobody wants to answer: who really controls AI?
1. Anthropic Sued the Pentagon. Yes, You Read That Right.

Let me tell you how fast things escalated.
Two weeks ago, Anthropic's CEO Dario Amodei told the Pentagon he wouldn't give them Claude without guardrails — no autonomous weapons, no mass surveillance of American citizens. Defense Secretary Pete Hegseth responded by designating Anthropic a "supply chain risk," essentially blacklisting them from every government contract and pressuring companies that work with them to cut ties.
Claude hit #1 on app stores. Downloads jumped 240%. Amodei became an accidental folk hero.
But Hegseth wasn't bluffing.
The supply chain risk designation isn't a PR stunt — it's a regulatory weapon. It means any company doing business with the federal government could be forced to drop Anthropic entirely. We're talking hundreds of millions in revenue at stake. Not someday. Now.
So on Monday, March 9, Anthropic did something no AI company has ever done: they filed two lawsuits against the Department of Defense, in the D.C. Circuit Court of Appeals, accusing the Pentagon of using the designation "to punish them on ideological grounds."
Read that again. An AI company is suing the United States military because it refused to build weapons without safety limits.
Meanwhile, OpenAI announced its own Pentagon deal hours after Anthropic got blacklisted. CEO Sam Altman said the Pentagon "shared OpenAI's principles." Nearly 900 former and current OpenAI and Google employees signed a joint petition supporting Anthropic and opposing the use of AI for autonomous weapons.
This isn't a business dispute. It's a constitutional question about whether the government can punish a company for its ethical stance on AI.
The takeaway: The AI arms race just became literal. And the company that said "no" is now fighting for its survival in court. This will define AI policy for a decade.
2. OpenAI's Robotics Chief Walked Ou

Caitlin Kalinowski didn't just resign from OpenAI. She made a statement.
The head of OpenAI's robotics and hardware division announced Saturday that she was leaving the company — and she made it clear why: the Pentagon deal. In a public post, she wrote that she "cared deeply about the Robotics team" but couldn't reconcile the company's direction with her principles.
This matters more than you think.
Kalinowski wasn't some mid-level engineer rage-quitting on Twitter. She was leading OpenAI's hardware ambitions — the team building the physical infrastructure for AI-powered robots. She previously led AR/VR hardware at Meta. She was one of the most senior women in AI hardware.
And she walked away because her company agreed to work with the Pentagon without Anthropic-style guardrails.
The 900-person employee petition supporting Anthropic suddenly feels less like virtue signaling and more like a movement. When executives start resigning over military contracts, you're watching a tech industry fracture in real time.
For comparison: Google faced similar internal rebellion over Project Maven (military drone AI) in 2018. Thousands of employees protested. Google eventually pulled out. But that was a different era — before AI companies had trillion-dollar valuations and governments were racing to weaponize the technology.
This time, the stakes are higher, and it's not clear the ethical side will win.
The takeaway: When your robotics chief quits over a military deal, the "we're doing it responsibly" talking point is dead. Actions > press releases.
3. OpenAI Acquired Promptfoo — And the Timing Is… Interesting

On the same Monday Anthropic filed its lawsuits, OpenAI dropped another announcement: it's acquiring Promptfoo, an AI security startup that protects LLMs from adversarial attacks.
The timing was either spectacularly coincidental or a masterclass in news cycle management.
Promptfoo was founded in 2024 and works with Fortune 500 companies to test and secure AI models against jailbreaks, prompt injection, and other attack vectors. It's an open-source tool that's become the de facto standard for LLM red-teaming.
Why does OpenAI want it? Because AI agents are about to be everywhere — and they're terrifyingly insecure.
OpenAI's own Operator agent, its computer-use tools, and the entire ecosystem of AI agents running on people's computers are all vulnerable to the kinds of attacks Promptfoo was built to catch. If you're going to deploy AI agents that can execute code, send emails, and access your files, you'd better make sure nobody can trick them into doing those things for the wrong people.
Which brings us to the next story…
The takeaway: When you simultaneously announce a Pentagon deal and buy a security company, you're telling the market two things: "We're going big on agents" and "we know how dangerous they are."
4. 1,184 Malicious AI Agent Skills Were Found in ClawHub. One in Five.

If you're running OpenClaw — the open-source AI agent that took over the internet in January — this is your wake-up call.
Security researchers have now identified over 1,184 malicious skills in ClawHub, OpenClaw's official skill marketplace. That's roughly 20% of the entire registry. One in five packages you could install is malware.
The initial "ClawHavoc" campaign found 341 poisoned skills in February, mostly delivering Atomic macOS Stealer (AMOS) — a credential-stealing trojan disguised as crypto trading tools and productivity extensions. But the problem has only gotten worse. Updated scans show the number has more than tripled.
And it gets scarier.
CVE-2026–25253 (CVSS 8.8) lets attackers achieve one-click remote code execution on OpenClaw instances — even ones that only listen on localhost. The "ClawJacked" vulnerability lets malicious websites hijack your local AI agent through WebSocket connections. 135,000 OpenClaw instances were found exposed to the public internet, many without authentication.
Put this together: a tool with full access to your file system, terminal, emails, and OAuth tokens — with a marketplace where 20% of add-ons are malware — running on 135,000 internet-facing servers without passwords.
This is the AI agent security crisis that everyone warned about. It's not theoretical anymore.
I run OpenClaw myself. After reading the security reports this week, I spent a night auditing every single skill I'd installed. The good news: my setup was clean. The bad news: I had to manually read every line of code in 15 packages to confirm that. That's not a scalable security model.
The takeaway: AI agents are the new attack surface. And the npm-style "install random packages from strangers" model is exactly as dangerous for AI agents as it was for Node.js — except now the package can read your emails and execute shell commands.
5. Jack Dorsey Cut 40% of Block's Staff. His Reason? "AI Can Do Their Jobs."

Jack Dorsey's letter to Block shareholders was almost clinical in its bluntness.
The company behind Square, Cash App, and Afterpay is cutting 4,000 employees — 40% of its workforce. The reason, per Dorsey's own words: "intelligence tools" can now do what those humans used to do.
No hedging. No "restructuring for efficiency." Just: AI made these people redundant.
Bloomberg called it potential "AI-washing" — using the AI narrative to justify layoffs that might have happened anyway. But whether Dorsey's framing is genuine or performative, the signal he's sending to every other CEO is the same: you can blame AI and the market will cheer.
Block's stock went up after the announcement.
This is the part of the AI revolution nobody wants to talk about at conferences. While Sam Altman and Dario Amodei debate the ethics of military AI, four thousand people at Block are updating their LinkedIn profiles. Most of them are engineers and support staff — the exact roles that AI coding tools and chatbots are getting good enough to partially replace.
The question isn't whether AI will replace jobs. It's whether companies will use AI as an excuse to do layoffs they wanted to do anyway, and whether there's any practical difference.
The takeaway: When a CEO says "AI can do their jobs" while cutting 4,000 people, the "AI won't replace you" crowd needs a new talking point. The CEO of a $50 billion company just said the quiet part out loud.
6. The "Pro-Human Declaration" United Steve Bannon and the Teachers' Union

In the strangest political alliance of 2026, a coalition that includes the American Federation of Teachers, the Congress of Christian Leaders, the Progressive Democrats of America, and Steve Bannon published the "Pro-Human Declaration" — a set of principles demanding human control over AI systems.
Read that list again. Teachers' unions and Steve Bannon. Progressive Democrats and Christian leaders. These groups agree on approximately nothing — except that AI is moving too fast and nobody asked the humans.
The Declaration calls for legal accountability when AI systems cause harm, mandatory human oversight for high-stakes AI decisions, and restrictions on autonomous AI weapons. It was finalized before the Pentagon-Anthropic standoff erupted, but the collision of the two events gave it unexpected momentum.
This is what happens when AI anxiety cuts across every political demographic. It's not a left-right issue anymore. It's an everyone issue. The construction worker worried about job displacement and the college professor worried about academic integrity are reading the same headlines and reaching the same conclusion: this is moving too fast.
Whether the Declaration leads to actual legislation remains to be seen. But the political coalition it represents — left, right, labor, religious, academic — suggests that AI regulation is becoming one of those rare bipartisan inevitabilities.
The takeaway: When Steve Bannon and the teachers' union agree on something, pay attention. The Overton window on AI regulation just blew wide open.
The Scoreboard: Who Won This Week?

This was the week the AI industry stopped pretending everything was fine. Military contracts, mass layoffs, supply chain attacks, and bipartisan regulation coalitions. The technology is incredible. The governance is chaos.If you found this useful, follow me for the next weekly roundup. The story is just getting started.
About the author: I'm Chase Xu — CV engineer, AI security researcher, and someone who spent last night manually auditing his own AI agent for malware. I write a weekly roundup of the AI news that actually matters. No hype. No fluff. Just the stuff you need to know.