CYBERSECURITY · AI · SUPPLY CHAIN

This week, the tools your teams use to build and secure AI pipelines were weaponized against you.

Last week I spent three days helping my teams rotate secrets.

Not because we were breached. Not because an attacker found a gap in our architecture. Because a tool we trusted to scan our pipelines for vulnerabilities — a tool running inside our CI/CD with access to our cloud credentials, our SSH keys, our Kubernetes tokens — had been quietly turned into a weapon.

Every secret it touched had to be treated as compromised. Every pipeline it had ever scanned was suspect. We didn't know exactly what had been taken. We just knew we had to assume the worst and act accordingly.

That's the incident response reality that doesn't make the CVE writeups. The scramble. The inventory. The 3AM Teams thread where someone asks "do we know which version we were running on March 19th?"

This is what it looks like when your security stack becomes the attack surface.

The Week the AI Security Stack Collapsed

Most coverage of what happened this week reads like a CVE changelog. I want to tell it differently — as a timeline, because the shape of this attack is what makes it extraordinary.

March 19, 2026. A threat actor group called TeamPCP compromised Trivy, the most widely used open-source vulnerability scanner in the cloud-native ecosystem. They didn't find a zero-day in Trivy's code. They used credentials exposed in a prior, incompletely remediated security incident to do something far more surgical.

They force-pushed malicious commits to 76 out of 77 version tags in the aquasecurity/trivy-action GitHub repository and all 7 tags in aquasecurity/setup-trivy. The legitimate version labels still appeared in every pipeline. The metadata showed no visible changes. But underneath, the payload was running.

Pipelines appeared to work normally while the credential stealer ran silently underneath.
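One defense against this kind of silent tag rewrite is to record the commit SHA each tag resolved to when you first adopted it, then periodically compare those records against fresh `git ls-remote --tags` output. A minimal sketch, assuming you keep such a pinned-SHA record; the tag names and SHAs below are illustrative, not the actual compromised values:

```python
def parse_ls_remote(output: str) -> dict[str, str]:
    """Parse `git ls-remote --tags <repo>` output into {tag: sha}."""
    tags = {}
    for line in output.strip().splitlines():
        sha, _, ref = line.partition("\t")
        # Skip peeled annotated-tag entries like refs/tags/v1.0^{}
        if ref.startswith("refs/tags/") and not ref.endswith("^{}"):
            tags[ref.removeprefix("refs/tags/")] = sha
    return tags

def drifted_tags(current: dict[str, str], pinned: dict[str, str]) -> list[str]:
    """Return tags whose remote SHA no longer matches the SHA we recorded."""
    return [tag for tag, sha in pinned.items() if current.get(tag) != sha]

# Illustrative output in which v0.34.0 has been force-pushed since we pinned it
output = (
    "aaaa1111\trefs/tags/v0.33.0\n"
    "ffff9999\trefs/tags/v0.34.0\n"
)
pinned = {"v0.33.0": "aaaa1111", "v0.34.0": "bbbb2222"}
print(drifted_tags(parse_ls_remote(output), pinned))  # ['v0.34.0']
```

Any tag in the drift list has been rewritten since you pinned it and deserves an immediate investigation, not a shrug.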

Within hours: SSH keys stolen. Cloud access tokens harvested. Kubernetes secrets exfiltrated. Everything the pipeline touched — gone.

Day 2. Stolen npm tokens fed a self-propagating worm that infected 66+ npm packages across multiple organizations.

Day 4. Malicious Docker images were pushed. 44 Aqua Security repositories defaced.

Day 5. Checkmarx KICS and AST GitHub Actions hijacked. Malicious VS Code extensions published.

Day 6. LiteLLM — the AI API gateway present in 36% of all cloud environments — was compromised on PyPI. Using credentials stolen from a Trivy scan. One dependency. One chain reaction. Five supply chain ecosystems in six days.

One stolen token became five compromised ecosystems in less than a week.

The Trivy Betrayal — When the Scanner Becomes the Weapon

Here's what makes this attack so architecturally significant: Trivy isn't a random developer tool. It's a security tool. It's the thing you deploy specifically to make your pipelines safer.

And that's precisely what made it valuable to TeamPCP.

Trivy runs in thousands of CI/CD pipelines as a GitHub Action on every PR, every merge, every deployment. It runs with elevated access to pipeline secrets by design — because it needs to scan containers, check images, analyze infrastructure code. Compromise Trivy, and you don't just get code. You get everything the pipeline is authorized to touch.

The attackers chose their target wisely. Security tools run with broad access because that's how they function. Compromising one hands you every credential that tool was trusted to touch.

What my teams experienced last week was the operational reality of that trust model failing. We had to answer questions that most organizations have never thought to ask: Which workflows ran trivy-action? Which version? Between what timestamps? What secrets were in scope for those runners?

Most teams couldn't answer those questions quickly. Some couldn't answer them at all.
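Answering "which workflows ran trivy-action, and at which ref" starts with a simple inventory pass over your workflow files. A rough sketch, assuming the standard `.github/workflows` layout; the regex only catches the plain `uses:` form, not reusable-workflow indirection:

```python
import re
from pathlib import Path

# Matches lines like: uses: aquasecurity/trivy-action@0.28.0
USES = re.compile(r"uses:\s*(aquasecurity/(?:trivy-action|setup-trivy))@(\S+)")

def trivy_inventory(repo_root: str) -> list[tuple[str, str, str]]:
    """List (workflow file, action, ref) for every Trivy action reference."""
    hits = []
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        for match in USES.finditer(wf.read_text()):
            hits.append((wf.name, match.group(1), match.group(2)))
    return hits
```

Run this across every repository in your organization and you have the first column of the incident spreadsheet: which pipelines were exposed, and at which ref. Correlating refs against run timestamps still requires your CI audit logs.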

The attack payload was injected into entrypoint.sh, executing before the legitimate Trivy scan began. From a pipeline operator's perspective, everything looked normal. The scan ran. The report came back. Nobody knew a credential stealer had already completed its work.

The most dangerous attack is the one that looks like business as usual.

The Chain Reaction — How Trivy Became LiteLLM

If the Trivy compromise was a targeted strike, what followed was a cascade.

BerriAI — the company behind LiteLLM — uses Trivy in its CI/CD pipeline for security scanning. When TeamPCP's poisoned trivy-action executed inside that pipeline, it harvested the PyPI publishing credentials. From there, the attackers published malicious versions 1.82.7 and 1.82.8 of LiteLLM directly to PyPI, bypassing the normal release process entirely.

LiteLLM is not a niche tool. It has over 40,000 GitHub stars, approximately 95 million monthly downloads, and is present in 36% of cloud environments monitored by Wiz Research. Organizations use it as a unified gateway to route requests to over 100 LLM providers — OpenAI, Anthropic, Azure OpenAI, Google Vertex AI, AWS Bedrock. It manages API keys. It tracks usage costs. It holds the keys to your entire AI infrastructure.

TeamPCP did not need to attack LiteLLM directly. They compromised Trivy, a vulnerability scanner running inside LiteLLM's CI pipeline without version pinning. That single unmanaged dependency handed over the PyPI publishing credentials, and from there the attacker backdoored a library that serves 95 million downloads per month.
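If you consume LiteLLM, or any package named in an advisory, the first triage step is simply checking which version is installed where. A hedged sketch; the bad-version list below comes from this article and should be confirmed against the maintainer's own advisory before you act on it:

```python
from importlib.metadata import version, PackageNotFoundError

# Versions named in the advisory; confirm against the maintainer's notice.
KNOWN_BAD = {"litellm": {"1.82.7", "1.82.8"}}

def is_compromised(installed: str, bad_versions: set[str]) -> bool:
    """True if the installed version appears on the known-bad list."""
    return installed in bad_versions

def check_package(name: str, bad: dict[str, set[str]] = KNOWN_BAD) -> str:
    """Report the installed version of `name` against the known-bad list."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        return f"{name}: not installed"
    verdict = "COMPROMISED" if is_compromised(installed, bad.get(name, set())) else "ok"
    return f"{name} {installed}: {verdict}"

print(check_package("litellm"))
```

Remember the caveat that follows: an exact-version check tells you whether you installed a backdoored release, not whether a machine that merely ran Python alongside one is clean.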

The malware itself was elegant in a way that makes you uncomfortable. It used a Python .pth file — a mechanism that auto-executes on any Python interpreter startup without requiring an import statement. Every python command. Every pip install. Every pytest run. Silent credential theft, running in the background, every single time.

You didn't have to run LiteLLM. You just had to run Python in an environment where it was installed.
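You can audit for this mechanism yourself: at interpreter startup, `site.py` executes any line in a .pth file that begins with `import`, while ordinary lines are treated as paths to add to `sys.path`. A quick sketch that surfaces executable .pth lines for review — note that some legitimate packages also ship import lines in .pth files, so a hit is a prompt for human judgment, not a verdict:

```python
import site
from pathlib import Path

def suspicious_pth_lines(site_dirs=None) -> list[tuple[str, str]]:
    """Flag .pth lines that would execute code at interpreter startup."""
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    findings = []
    for d in dirs:
        for pth in Path(d).glob("*.pth"):
            for line in pth.read_text(errors="ignore").splitlines():
                # site.py runs lines starting with 'import' as Python code
                if line.startswith(("import ", "import\t")):
                    findings.append((str(pth), line))
    return findings

for path, line in suspicious_pth_lines():
    print(f"{path}: {line}")
```

Reading the file is safe; it only executes when the interpreter starts in that environment. Anything in the output that decodes, downloads, or shells out is exactly the pattern described above.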

Approximately 300GB of data was exfiltrated from around 500,000 infected machines. TeamPCP is now reportedly working through those credentials and collaborating with the LAPSUS$ extortion group to target multi-billion-dollar companies. The incident is not over.

The Second Front — Langflow and the 20-Hour Window

While the Trivy cascade was unfolding, a separate but structurally identical story was playing out in the AI builder space.

On March 17, 2026, a critical vulnerability was disclosed in Langflow — the open-source visual platform with 145,000+ GitHub stars used to build AI agents and RAG pipelines. CVE-2026-33017 allows unauthenticated remote code execution via a single HTTP POST request. No credentials required. No exploit chain. One request.

Twenty hours after the advisory was published, attackers were already scanning the internet for vulnerable instances. No public proof-of-concept code existed at the time. They built working exploits directly from the advisory description.

Threat actors are monitoring the same advisory feeds that defenders use, and they are building exploits faster than most organizations can assess, test, and deploy patches.

The median time-to-exploit has collapsed from 771 days in 2018 to just hours in 2024. The median time for organizations to deploy patches is still approximately 20 days. That gap — 20 hours vs. 20 days — is not a patching problem. It's a structural assumption problem.

Langflow instances are configured with API keys for OpenAI, Anthropic, AWS, and database connections. A single compromised instance gives an attacker lateral access to cloud accounts, AI service budgets, and data stores simultaneously. And most of those instances were deployed by data science teams who never filed a security review request.

The AI tooling your organization is deploying outside of standard security review is carrying your crown jewel credentials. Attackers know this. Most security teams don't.

The Pattern No One Is Talking About

Step back from the individual CVEs and look at what TeamPCP actually targeted this week:

Trivy — a vulnerability scanner. Checkmarx KICS — an infrastructure-as-code scanner. LiteLLM — an AI API gateway. Langflow — an AI agent builder.

These are not random targets. They are the tools organizations deploy to improve their security posture and build AI capabilities. The most security-conscious organizations — the ones scanning every build, every PR, every deployment — had the greatest Trivy exposure.

The organizations doing the right things had the most to lose. That's not a coincidence. That's the design of the attack.

As a cybersecurity architect, I've spent 13 years thinking about trust models. What this week exposed is a trust model failure that most organizations have never explicitly addressed: we grant our security and AI tools elevated access by design, and we never model what happens when those tools themselves are compromised.

I've written about the automation bias trap — the tendency to trust tool output over professional judgment. This is that trap operating at infrastructure level. The tool becomes the assumption. The assumption becomes the blind spot. The blind spot becomes the breach.

The same mental pattern that makes an engineer say "we have a WAF so our APIs are protected" now shows up as "we use Trivy so our pipelines are scanned." The tool is the control. The control is trusted implicitly. Nobody asks what happens if the control itself fails.

You cannot build a security model that doesn't account for the security of the security tools.

What You Need to Do Right Now

I'm not going to give you a 47-point checklist. Here's what actually matters, in priority order.

  1. Check your Trivy version immediately. Safe versions: Trivy binary v0.69.3 or earlier, trivy-action v0.35.0 (pinned to commit 57a97c7), setup-trivy v0.2.6 (commit 3fb12ec). If you ran v0.69.4 at any point during March 19–20, treat all secrets accessible to those workflows as compromised.
  2. Rotate everything those pipelines touched. GitHub tokens, cloud provider credentials, registry tokens, SSH keys, database passwords. Do not wait to confirm exploitation. The operational cost of unnecessary rotation is significantly lower than the cost of assuming you're clean when you're not.
  3. Look for tpcp-docs repositories. If your GitHub organization has a repository named tpcp-docs that you didn't create, the fallback exfiltration mechanism was triggered and secrets were successfully stolen. This is your canary.
  4. Audit your AI tooling inventory. Langflow, LiteLLM, n8n, and similar platforms. Find out who deployed them, what credentials they hold, whether they're internet-exposed, and whether they went through any security review. In many organizations, the answer to all of those questions will be uncomfortable.
  5. Pin your GitHub Actions to full commit SHAs. Mutable version tags can be force-pushed. This attack proved it at scale. A tag that said v0.34.0 last week does not guarantee it points to the same code today. Immutability is the only guarantee.
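On point 5: tags like v0.34.0 are mutable references, but a full 40-character commit SHA is not. A small sketch that flags any `uses:` reference in a workflow that is not pinned to a full SHA — the checkout SHA in the example is illustrative, not a recommendation:

```python
import re

FULL_SHA = re.compile(r"^[0-9a-f]{40}$")
USES_LINE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@(\S+)")

def unpinned_actions(workflow_text: str) -> list[tuple[str, str]]:
    """Return (action, ref) pairs whose ref is not a full 40-char commit SHA."""
    return [
        (action, ref)
        for action, ref in USES_LINE.findall(workflow_text)
        if not FULL_SHA.match(ref)
    ]

workflow = """
steps:
  - uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608
  - uses: aquasecurity/trivy-action@0.28.0
"""
print(unpinned_actions(workflow))  # [('aquasecurity/trivy-action', '0.28.0')]
```

Pair SHA pinning with a bot that proposes pin updates, so immutability doesn't quietly become "never update."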

The Architecture Problem Behind All of This

Three days into last week's incident response, one of my engineers asked a question that stuck with me: "How do we model the risk of the tools we use to manage risk?"

It's not a question most security frameworks ask. Threat models focus on the systems you're protecting. Vulnerability management programs focus on the software you deploy for your users. Nobody formally models the attack surface of the security toolchain itself.

TeamPCP found that gap and walked straight through it. They didn't need a zero-day in your application. They needed a zero-day in your trust model. And your trust model said: the scanner is safe, because the scanner is the thing that finds unsafe things.

The events of this week aren't an anomaly. They're a preview. As AI tooling proliferates — as every organization deploys agents, pipelines, and gateways holding credentials to everything — the attack surface doesn't just grow. It compounds. Every new tool that touches sensitive secrets is a new potential pivot point.

I spent three days rotating secrets last week. I'd do it again in a heartbeat if it meant my teams were clean. But the harder work — the work that prevents next time — is building an architecture where the security of the security tools is treated with the same rigor as the security of everything else.

The most dangerous position in security isn't being attacked. It's trusting that the thing protecting you is still on your side.

I write weekly about cybersecurity, AI, and the human psychology that connects them. If this made you check your pipeline inventory, follow me here on Medium and connect with me on LinkedIn.

Did your organization use Trivy or LiteLLM? What did your incident response look like? I'd love to hear in the comments.