A collision between "disclose everything" and "say nothing" cultures is rewriting the rules of cybersecurity, and AI is the accelerant no one asked for.

Last week, a Linux networking vulnerability dropped quietly. A developer patched it within hours. Within nine hours of the fix landing, a second researcher had independently found the same hole. Within a day, the entire security community was arguing about whether the patch should have been public at all.

This isn't an unusual story anymore. It's a preview.

The way we handle security vulnerabilities is built on two competing philosophies that have coexisted — uneasily — for decades. Coordinated disclosure, the dominant approach in most of the security world, says: tell the vendor privately, give them 90 days to fix it, then publish. "Bugs are bugs" culture, rooted in the Linux kernel community, says: just fix it as fast as you can and don't draw attention to it.

For years, both approaches worked well enough. The old internet was slow enough that either philosophy had room to breathe. But AI has changed the math entirely. What used to take a skilled human weeks now takes a model minutes. And the consequences are tearing apart the fragile truce between these two schools of thought.

The Patch That Started Everything

The story begins with something called the "Copy Fail" vulnerability — a flaw in how the Linux kernel's io_uring subsystem handles memory. To most people, the technical details are uninteresting. But to the security community, what happened next was riveting.

Hyunwoo Kim found the bug. Following standard kernel security practice, Kim reported the issue through private channels to a small, trusted list of kernel security engineers. The fix was merged quietly. No public announcement. No advisory. No panic. This is how the "bugs are bugs" culture works: fix the problem, move on, and trust that the sheer volume of code changes will keep anyone from noticing.

It worked for years. It doesn't anymore.

Someone noticed the obscure commit. They recognized the security implications immediately and shared the details publicly. The embargo, informal and never explicitly agreed upon, collapsed in hours. Within nine hours, Kuan-Ting Chen had independently discovered and reported the same vulnerability through a completely separate channel.

The signal-to-noise ratio problem that once protected this approach has vanished.

Two Philosophies, One Internet

To understand why this matters, you need to understand the two cultures at war.

Coordinated disclosure is the textbook approach. When a researcher finds a vulnerability, they report it to the vendor privately. The vendor gets 60 to 90 days — sometimes more — to develop and ship a fix before anyone else learns the flaw exists. On paper, this is elegant. Users get a patch before they get a threat. In practice, it depends on a fragile assumption: that the vulnerability is hard enough to find that no one else will discover it during the embargo window.

The "bugs are bugs" philosophy, championed particularly within the Linux kernel community, takes the opposite position. When a developer fixes a security issue, they do it through the same open process as any other bug fix. There is no separate security track. No embargo. No fanfare. The reasoning is pragmatic: if the kernel is doing something wrong, someone will eventually turn it into an attack regardless of whether you announce it. Better to fix it fast and move on.

For a long time, this approach worked because of an unspoken advantage: nobody was looking. Security researchers were rare, and manually auditing millions of lines of kernel code was tedious, expensive work. The sheer volume of patches flowing through the system created natural camouflage. Unless you knew exactly what you were looking for, a security fix looked like any other bug fix.

AI has eliminated that camouflage.

How AI Broke the Signal-to-Noise Ratio

Here's what's changed. Modern AI models — I tested Gemini 3.1 Pro, ChatGPT-5.5 Thinking, and Claude Opus 4.7 — can now scan a code diff and identify security-relevant changes with remarkable accuracy. Feed one a commit that modifies buffer handling in the networking stack, and it will flag it as a potential vulnerability fix in seconds.

This shifts the calculus in two devastating ways.

First, it makes "bugs are bugs" culture dangerously transparent. When a security fix gets merged into the Linux kernel, dozens of organizations now run automated pipelines that evaluate every commit through LLM prompts asking: "Does this look like it fixes a security vulnerability?" The answer, increasingly, is yes — and the report goes out to their security teams automatically. The quiet fix doesn't stay quiet anymore. Not because someone broke the rules, but because the rules were never designed for a world where AI reads every commit.
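To make that concrete, here is a minimal sketch of what such a triage pipeline does. The pattern list below is a hypothetical stand-in for the LLM prompt described above (real pipelines would send the full diff to a model); the function name and heuristics are my own illustration, not any vendor's actual tooling.

```python
import re

# Phrases that often appear in security-relevant kernel fixes.
# This crude keyword heuristic stands in for the LLM prompt
# "Does this look like it fixes a security vulnerability?"
SECURITY_PATTERNS = [
    r"use[- ]after[- ]free",
    r"out[- ]of[- ]bounds",
    r"\boverflow\b",
    r"double[- ]free",
    r"\bCVE-\d{4}-\d+\b",
    r"bounds check",
]

def triage_commit(message: str, diff: str) -> bool:
    """Return True if a commit looks like a quiet security fix."""
    text = f"{message}\n{diff}".lower()
    return any(re.search(pattern, text) for pattern in SECURITY_PATTERNS)
```

Point the function at every commit as it lands, and anything it flags gets routed to a security team for a closer look. The point is not the accuracy of this toy heuristic but the economics: the scan costs effectively nothing per commit, so everyone runs it on everything.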

Second, it makes coordinated disclosure embargoes nearly impossible to maintain. In the old world, if you found a vulnerability and reported it privately, there was a strong chance no one else would notice during a 90-day window. That's gone. In the Copy Fail case, just nine hours separated the quiet fix from Chen's independent discovery. As AI-assisted vulnerability scanning becomes standard practice at security firms, bug bounty programs, and intelligence agencies, the probability that multiple groups will find the same vulnerability during an embargo period approaches certainty.
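A back-of-the-envelope model shows why. If N groups are scanning independently and each has a small daily probability p of rediscovering a given flaw, the chance that at least one of them finds it during a D-day embargo is 1 - (1 - p)^(N*D). The numbers below are illustrative assumptions, not measurements:

```python
def rediscovery_probability(n_groups: int, p_daily: float, embargo_days: int) -> float:
    """Probability that at least one of n_groups independently
    rediscovers the flaw during the embargo, assuming each group
    has an independent per-day discovery probability p_daily."""
    return 1 - (1 - p_daily) ** (n_groups * embargo_days)

# With 20 scanning groups and a modest 1% daily chance each,
# a 90-day embargo makes parallel discovery a near-certainty.
print(round(rediscovery_probability(20, 0.01, 90), 3))  # prints 1.0
```

The independence assumption is generous to embargoes: in reality, everyone's scanners are looking at the same commits with similar models, so discoveries cluster even faster than this suggests.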

The uncomfortable truth is that embargoes create a false sense of security. They tell defenders "don't worry yet" while attackers — who don't respect embargoes — are already working with the information. And the longer an embargo lasts, the more organizations sit exposed without patches while the vulnerability becomes common knowledge in increasingly adversarial circles.

The Case for Very Short Embargoes

So where does that leave us? Both traditional approaches are failing. "Bugs are bugs" no longer provides cover. Coordinated disclosure embargoes are collapsing under the weight of parallel discovery.

The most practical path forward — imperfect as it is — points toward very short disclosure windows. Not 90 days. Not even 7 days. Think hours.

This is where AI offers a genuine gift to defenders. The same capabilities that allow attackers to scan code for vulnerabilities at scale also allow defenders to respond faster. Automated patch analysis can confirm whether a reported vulnerability is real within minutes. Mechanisms like live patching can apply fixes to running production systems without waiting for the next reboot cycle. The bottleneck is no longer analysis speed — it's organizational process and update deployment.

The math is straightforward: if multiple independent parties will discover a vulnerability within hours anyway, the optimal strategy is to disclose immediately and get patches deployed before adversaries weaponize the finding. Every hour of delay after public discovery is an hour of unpatched exposure that provides no defensive advantage.
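Under toy assumptions, you can put rough numbers on that trade-off. Suppose a hostile party rediscovers the flaw some hours after the fix exists, and fleet-wide patch deployment takes some hours once disclosure happens. An embargo delays the deployment clock without delaying the adversary. All three parameters here are illustrative, not measured:

```python
def exposure_hours(embargo_h: float, rediscovery_h: float, deploy_h: float) -> float:
    """Hours during which a hostile rediscoverer knows the bug
    while fleets remain unpatched. Deployment starts only when
    the embargo lifts; rediscovery ignores the embargo."""
    patched_at = embargo_h + deploy_h
    return max(0.0, patched_at - rediscovery_h)

# Toy numbers: hostile rediscovery at 9 hours (the Copy Fail
# timeline), fleet-wide deployment in 24 hours.
print(exposure_hours(0.0, 9.0, 24.0))      # immediate disclosure: 15.0 hours
print(exposure_hours(90 * 24, 9.0, 24.0))  # 90-day embargo: 2175.0 hours
```

In this sketch the embargo buys defenders nothing and costs them roughly three months of exposure. The only regime where an embargo wins is when rediscovery takes longer than the embargo itself, which is exactly the assumption AI-assisted scanning has broken.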

What Comes Next

Nobody has a clean answer. But the contours are becoming visible.

The Linux kernel community will likely need to formalize what was always informal. Security-relevant fixes will need some form of accelerated handling, even if it's just a flag that says "this should go through the security review queue" before the patch lands. The current approach of hoping nobody notices is already obsolete.

Vulnerability disclosure norms will continue to compress. Bug bounty programs will need to adjust expectations. The 90-day standard, already under pressure, will give way to shorter frameworks — or collapse entirely as researchers opt for immediate full disclosure rather than waiting for a deadline that may be meaningless.

And the open source ecosystem will need to invest in real-time patching infrastructure. If the era of "fix it quietly and let it blend in" is over, then the replacement has to be infrastructure that can push fixes faster than adversaries can develop exploits.

The Copy Fail vulnerability was, in isolation, a minor networking bug. But the story of its discovery and the chaos that followed is a harbinger of everything to come. The two vulnerability cultures are not converging. They are colliding. And the debris from that collision will reshape how every piece of critical software on the internet gets secured.

What do you think — should the infosec community adopt near-zero-hour disclosure, or is there a better path? I'd love to hear your take.

Tags: cybersecurity · artificial intelligence · Linux kernel · vulnerability disclosure · information security