Daniel Stenberg has maintained cURL for over 25 years. If you've never heard of it, that's fine — you've used it today anyway. It's the quiet plumbing behind virtually every internet data transfer on the planet. Billions of devices. One very tired maintainer. And as of January 2026, no more bug bounty program.

The program paid out $86,000 over its lifetime across 78 confirmed, real security vulnerabilities that got fixed. Genuinely impressive for a project running on volunteer effort and goodwill. Then the economics broke in a way nobody quite planned for.

When cURL offered up to $10,000 for a critical vulnerability, the assumption was that finding one required skill, time, and deep technical knowledge. That assumption held for years. Then people figured out you could just ask ChatGPT to find one instead. The barrier to submitting a "critical security report" dropped from weeks of work to about four minutes of prompting. The floodgates opened.

By late 2025, only about one in every twenty to thirty cURL security reports was real. In 2024 it was one in six. That's not a trend; that's a collapse.

What makes this genuinely maddening is what the fake reports look like. They're polished. Perfect English, neat bullet points, confident technical framing. One came in describing an HTTP/3 stream dependency exploit with GDB session logs and register dumps attached, the kind of detail that signals serious research. The function it cited as the vulnerability's root cause does not exist anywhere in cURL's codebase. The AI invented it. The person submitting never checked.

Another report was submitted with the original ChatGPT prompt still visible. It ended with: "and make it sound alarming."

cURL's seven-person security team cannot just ignore these. That's the trap. Every report that looks like a security issue has to be investigated, because the one time you don't, you've left a real vulnerability sitting unpatched. So the team spends hours chasing fabricated functions through a real codebase, tracing logic that leads nowhere, validating claims that have no basis in the actual code. Then they do it again. And again.

Stenberg calls this "terror reporting." That's not drama, it's an accurate description of what it feels like to process an endless queue of convincing nonsense, knowing a real bug might be somewhere inside it. The cURL team isn't exhausted from hard work. They're exhausted from pointless work that looks exactly like hard work until you're three hours deep.

The second-order risk is the one that should genuinely worry people. When humans process enough noise that mimics signal, they adapt by filtering faster, and faster filtering is how real vulnerabilities get missed. AI slop doesn't just waste time. It trains the people guarding critical infrastructure to be less careful. That's a software supply chain problem, not just a maintainer morale problem.

Stenberg's response was to shut the bounty down entirely at the end of January, removing the financial incentive that made zero-effort submissions worth trying. He's also been clear that he isn't anti-AI: AI-powered analysis tools have helped fix over 100 real bugs in cURL that years of fuzzers and professional audits never caught. The issue was never the technology. It was the combination of financial incentive, zero accountability, and tools powerful enough to fake expertise convincingly.

cURL's AI contribution guidelines are public and available for other projects to adopt. Stenberg has said explicitly: take them, use them. You don't have to be drowning alone.

Twenty-five years of keeping critical infrastructure running, and it took four-minute ChatGPT prompts to finally break the system. That's the story. Not AI being dangerous in some abstract future sense, but AI being used carelessly right now, by real people, burning out the humans quietly holding the internet together.