Hey, imagine this: You're a developer building the next big AI app. You rely on a popular open-source library called LiteLLM to connect your code to all the big AI models — OpenAI, Anthropic, Grok, whatever. It's like the universal remote for AI APIs. Millions of people use it every single day. It feels safe, trusted, battle-tested.

Then, one morning in March 2026, that "safe" library quietly gets poisoned.

Two brand-new versions (1.82.7 and 1.82.8) appear on PyPI — the official Python package store — packed with sneaky malware. The malware doesn't just steal your passwords; it hunts for SSH keys, cloud credentials, Kubernetes secrets, crypto wallets, and even database logins. It tries to spread like a worm inside your Kubernetes clusters and plants a permanent backdoor so the attackers can come back anytime they want.

And here's the craziest part: the hackers didn't break into LiteLLM's GitHub repo with some fancy exploit. They didn't trick the maintainers into merging bad code. They used a security scanner — the very tool meant to protect projects — as their secret weapon.

This isn't some far-off hacker story. This is a real supply-chain attack that happened yesterday (March 24, 2026), and it's a wake-up call for every developer, DevOps engineer, and AI builder out there. Let me walk you through the whole thing, step by step, like we're chatting over coffee.


First, What Even Is LiteLLM and Why Does It Matter So Much?

LiteLLM is one of those libraries that quietly powers the modern AI world. Think of it as a smart middleman. Instead of writing separate code for every AI provider, you just install LiteLLM and say "hey, call this model" — and it handles the rest. It routes requests, manages API keys, and works with everything from local models to massive cloud ones.
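To make the "smart middleman" idea concrete, here's a toy sketch of the routing pattern behind a unified LLM client. This is not LiteLLM's actual internals — just a hypothetical mini-router to illustrate the idea of one call signature in front of many providers:

```python
# Toy illustration of the provider-routing idea behind a unified LLM client.
# NOT LiteLLM's real code -- just the pattern: one entry point, with the
# provider inferred from a "provider/model" prefix.

def route_model(model: str) -> tuple[str, str]:
    """Split 'anthropic/claude-3' into (provider, model name).
    Models with no prefix default to 'openai', mirroring the common convention."""
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return "openai", model

def completion(model: str, messages: list[dict]) -> dict:
    """One entry point for every provider -- the caller's code never changes."""
    provider, name = route_model(model)
    # A real client would now build the provider-specific HTTP request here.
    return {"provider": provider, "model": name, "messages": messages}
```

The point is that your application code calls one function regardless of backend — which is exactly why a backdoored version of such a library sees every request and every key.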

It's insanely popular:

  • Over 3.4 million downloads per day.
  • Used in thousands of production environments.
  • Often sits right in the heart of AI apps, holding the keys to your OpenAI credits, your Anthropic tokens, your entire AI budget.

If someone backdoors LiteLLM, they basically get a front-row seat to steal credentials from anyone running it. And because it's a dependency that many tools pull in automatically (even if you never typed pip install litellm yourself), the attack surface is huge.

The Attack Chain: How a "Security Tool" Became the Perfect Trojan Horse

This wasn't a random smash-and-grab. It was a carefully planned multi-stage operation by a threat group researchers are calling TeamPCP. They didn't target LiteLLM directly at first. They started higher up the supply chain — with a tool almost every serious project uses: Trivy.

Trivy is an open-source security scanner from Aqua Security. It's awesome. It scans your containers, code, and dependencies for vulnerabilities. Thousands of companies and open-source projects run Trivy in their CI/CD pipelines (the automated build-and-test systems on GitHub).

On March 19, 2026, TeamPCP did something clever and evil: they compromised Trivy's GitHub Action. (GitHub Actions are reusable workflows that automate tasks.) Instead of messing with Trivy's main code, they rewrote the version tags that point to the official releases. So when any project pulled "the latest Trivy," it actually got a poisoned version.

Five days later — March 24 — LiteLLM's own CI/CD pipeline ran its regular build. LiteLLM used Trivy to scan for vulnerabilities (ironic, right?). Because they hadn't pinned Trivy to a specific safe commit (they used a floating version tag instead), the poisoned Trivy ran.
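The misconfiguration here comes down to a one-line difference in a workflow file. A hedged sketch (the action name is real; the SHA below is a made-up placeholder — resolve the actual commit SHA for the release you've audited):

```yaml
# Risky: a floating tag -- whoever controls the tag controls your pipeline.
- uses: aquasecurity/trivy-action@master

# Safer: pin to a full commit SHA. The SHA below is a PLACEHOLDER for
# illustration, not a real release -- look up the real one yourself.
- uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567  # e.g. a vetted release
```

A tag can be silently moved to point at new code; a full commit SHA cannot, so pinning turns "run whatever the tag points at today" into "run exactly the code I reviewed."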

And what did the poisoned Trivy do? It quietly stole secrets from the GitHub environment — including LiteLLM's PyPI publishing token. That token is the golden key that lets someone upload new versions of the package straight to PyPI.

With that token in hand, TeamPCP didn't need to touch the official LiteLLM GitHub repo at all. They simply uploaded two malicious versions (1.82.7 and 1.82.8) directly to PyPI at around 10:52 UTC on March 24. No commits, no pull requests, no code reviews. The GitHub repo stayed completely clean. It looked 100% legitimate to anyone who just did pip install litellm.

The malicious code lived in litellm/proxy/proxy_server.py — the exact file that runs when you start the LiteLLM proxy server (something a lot of people do in production).

What Did the Malware Actually Do? (The Three-Stage Nightmare)

The payload wasn't some basic keylogger. It was sophisticated and aggressive:

Stage 1: Credential Harvesting

It scanned the machine for:

  • SSH private keys
  • Environment variables with cloud credentials (AWS, GCP, Azure)
  • Kubernetes service account tokens
  • Database passwords
  • Crypto wallet files
  • Pretty much anything sensitive stored on disk or in memory.
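You can approximate what Stage 1 had access to by auditing your own environment defensively. A minimal sketch — the name patterns below are my own guesses at common credential naming, not a list taken from the malware:

```python
import os
import re
from pathlib import Path

# Heuristic patterns for secrets commonly kept in env vars.
# These are illustrative guesses, not the malware's actual search list.
SECRET_PATTERNS = re.compile(
    r"(SECRET|TOKEN|PASSWORD|API_KEY|ACCESS_KEY|CREDENTIALS)", re.IGNORECASE
)

def exposed_env_vars(environ: dict[str, str]) -> list[str]:
    """Return env var names that look like credentials a harvester would grab."""
    return sorted(k for k in environ if SECRET_PATTERNS.search(k))

def exposed_key_files(home: Path) -> list[Path]:
    """List common SSH private keys and kubeconfigs readable from this account."""
    candidates = [home / ".ssh" / "id_rsa", home / ".ssh" / "id_ed25519",
                  home / ".kube" / "config"]
    return [p for p in candidates if p.exists()]

if __name__ == "__main__":
    print("Sensitive-looking env vars:", exposed_env_vars(dict(os.environ)))
    print("Key files on disk:", exposed_key_files(Path.home()))
```

Anything this script can see, code running in your LiteLLM process could also see — that's the blast radius you're rotating credentials for.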

Stage 2: Lateral Movement (The Worm Part)

If it found Kubernetes credentials, it tried to spread inside the cluster — pivoting to other pods, stealing more secrets, escalating privileges. It was designed to turn one compromised machine into many.

Stage 3: Persistent Backdoor

It installed a systemd service (the standard Linux way to auto-start things on boot) so the attackers could keep coming back even after you rebooted or patched the package.

All of this happened silently in the background while your AI proxy kept working normally. You'd never notice unless you were actively hunting for it.
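Persistence like this can be triaged cheaply by flagging recently created systemd unit files. A defensive sketch — recency is only a hint, since legitimate installs also create units:

```python
import time
from pathlib import Path

def recent_units(unit_dir: Path, max_age_days: float = 7.0) -> list[Path]:
    """Flag .service files modified within max_age_days -- a cheap persistence
    triage signal, NOT proof of compromise (legit installs also create units)."""
    cutoff = time.time() - max_age_days * 86400
    if not unit_dir.exists():
        return []
    return sorted(p for p in unit_dir.glob("*.service")
                  if p.stat().st_mtime >= cutoff)

if __name__ == "__main__":
    # Common unit directories on systemd-based distros.
    for d in ("/etc/systemd/system", "/usr/lib/systemd/system"):
        for unit in recent_units(Path(d)):
            print(unit)
```

Anything that appears here around the time you installed a bad version deserves a close read of its ExecStart line.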

How Was It Caught? (Spoiler: Not by the Official Team First)

The versions only lived on PyPI for about three hours before they were yanked. But in that short window, who knows how many people installed them.

The discovery came from researchers at FutureSearch. Someone was using Cursor (the AI coding tool) and an MCP plugin automatically pulled LiteLLM as a transitive dependency. The plugin ran the malicious code — and boom, alarms went off. That's how supply-chain attacks often get caught: not by the direct victim, but by someone downstream.

Snyk, the security company that published the deep-dive report I based this on, pieced together the full chain. LiteLLM maintainers quickly issued a security update, removed the bad versions, and advised everyone to rotate credentials immediately.

Why This Attack Feels Extra Scary

This wasn't just another PyPI compromise. It shows how attackers are now weaponizing the trust we put in security tools themselves.

  • Trivy is supposed to protect you.
  • GitHub Actions are supposed to be secure automation.
  • PyPI is supposed to be the safe place to get packages.

When the protector becomes the vector, the whole trust model breaks.

TeamPCP has been doing this for weeks — moving from one project to the next, stealing CI/CD secrets, pivoting to bigger targets. LiteLLM is just the latest (and biggest) hit so far. The Snyk article hints there are probably more coming.

Plus, LiteLLM sits in AI-heavy environments where people often store high-value API keys worth real money. One stolen OpenAI key can cost thousands of dollars in minutes if attackers spin up expensive models.

What Should You Do Right Now? (Practical Steps, No Panic)

If you use LiteLLM (or anything that depends on it):

  1. Check your installed version. Run pip show litellm or pip list | grep litellm. If you have 1.82.7 or 1.82.8, uninstall immediately and reinstall the latest clean version.
  2. Rotate ALL credentials. Change every API key, SSH key, cloud access token, Kubernetes secret — everything the malware could have seen. Yes, it's painful, but better safe than sorry.
  3. Audit your CI/CD pipelines
  • Never use floating version tags for critical tools (Trivy, Dependabot, etc.).
  • Pin everything to exact commit SHAs instead of "latest" or version numbers.
  • Use GitHub's secret scanning and Dependabot alerts.

  4. Add supply-chain protection

  • Use tools like Snyk, Dependabot, or PyPI's own audit features.
  • Consider SBOMs (Software Bill of Materials) so you know exactly what's in your dependencies.
  • Run packages in isolated environments when possible.

  5. For teams using AI proxies: review who has access to your LiteLLM proxy server and whether it's running with the least privileges possible.
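Step 1 can be scripted so it's easy to run across many machines. A small sketch using only the standard library (the bad-version list is the pair from this incident):

```python
from importlib.metadata import version, PackageNotFoundError

# The two malicious releases from this incident.
COMPROMISED = {"1.82.7", "1.82.8"}

def litellm_status() -> str:
    """Report whether the locally installed litellm is a known-bad version."""
    try:
        v = version("litellm")
    except PackageNotFoundError:
        return "litellm is not installed"
    if v in COMPROMISED:
        return f"COMPROMISED version {v} installed -- uninstall and rotate credentials"
    return f"version {v} installed -- not one of the known-bad releases"

if __name__ == "__main__":
    print(litellm_status())
```

Remember this only checks the direct install; a transitive dependency pulled in by another tool needs the same check in that tool's environment.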

The Bigger Lesson We Can't Ignore

Supply-chain attacks used to feel abstract — "yeah, that happens to other people." But when the attack uses a security scanner to steal publishing credentials from a library that literally holds your AI keys, the abstract becomes personal.

We trust open-source because it's transparent. But transparency only helps if we actually verify what we're running. Floating tags, shared secrets in CI/CD, and "it worked last time" thinking are now liabilities.

The attackers didn't need zero-day exploits or nation-state hacking tools. They just needed patience, one small misconfiguration (using version tags instead of SHAs), and a poisoned "security" tool.

LiteLLM will recover. The maintainers acted fast. But the next target might not be so lucky — and it could be your project.

So next time you see a security scanner, a GitHub Action, or an auto-update in your pipeline, ask yourself: "Do I really know exactly what version is running right now?"

Because yesterday, for thousands of developers, the answer was "no" — and the consequences could have been devastating.

Stay safe out there. Update, rotate, pin your versions, and keep an eye on those dependencies. The next poisoned scanner might be closer than you think.

(This article is based on the detailed investigation published by Snyk and cross-referenced reports from security researchers. Always check the official LiteLLM security advisory for the latest guidance.)