A week ago, I had zero CVEs to my name. Today I have two. The list is growing, and some bigger names are next.

I want to say this up front: I do not consider myself a security researcher. I am an AI engineer at Apiiro who is obsessed with AI tools and follows curiosity wherever it leads. Before this, I had never even heard the term SSRF.

So how did I get here?

I recently started paying attention to CVEs, partly through my work at Apiiro and partly from the constant stream of vulnerabilities showing up in common packages. It felt like there was an entire layer of software I had been using without really seeing. And to be honest? I really wanted one with my name on it.

So I opened Claude Code and started exploring.

My role was to decide what to look at and what counted as interesting. Claude's role was to read the code, run analysis, and filter out the noise. That division of labor made the process far more accessible than I expected.

This post covers my first two findings: CVE-2026-42260 and CVE-2026-42261. Both are SSRF vulnerabilities in small AI tools, and both come from the same underlying mistake.

I intentionally started with smaller projects. The stakes are lower, the feedback loop is faster, and maintainers are easier to reach. Bigger targets are already in progress and will come later.

In this post:

  • What is a CVE, and why should you care
  • What is SSRF, in plain English
  • CVE #1: open-webSearch, an MCP server with no auth and a broken URL filter
  • CVE #2: PromptHub, the same kind of bug in a different project
  • The part nobody writes about: talking to the maintainers
  • Why this is only the beginning

What is a CVE?

CVE stands for Common Vulnerabilities and Exposures. It is a global ID system for security bugs, run by MITRE in the US.

Every time someone finds a real, exploitable security flaw in public software, it gets a unique identifier like CVE-YYYY-NNNNN. The year is when the ID was reserved. The number is just a counter. CVE-2014-0160 was Heartbleed. CVE-2017-5638 was the Apache Struts bug behind the Equifax breach. They are all on the same list.

A CVE is not a fix. It is a stable, citable handle so everyone can refer to the same bug by the same name.

Each CVE also gets a CVSS score from 0 to 10. 0.1 to 3.9 is Low, 4.0 to 6.9 is Medium, 7.0 to 8.9 is High, and 9.0 to 10.0 is Critical.

For researchers, CVEs are also medals. Proof that the bug was real and you found it. Every researcher I respect has a list. I now have a list of two.

My two are 8.2 and 7.1. Both High. Both SSRF.

Most CVEs follow a private process first. The researcher reports the bug directly to the maintainer, shares a PoC, and helps with a fix. Only after a patch is ready does the CVE become public.

That is what makes the system work, and also what makes it tricky.

Once a CVE is public, everyone can see it. Defenders use it to patch and prioritize. Attackers use it to find targets that are still unpatched.

That is the tradeoff. Visibility makes systems safer over time, but it also creates a short window of risk.

A CVE is how a bug becomes something the whole ecosystem can act on.


What is SSRF?

SSRF stands for Server-Side Request Forgery. The name is dry. The idea is simple: you trick a server into making a network request for you, and you get to choose where it goes.

Here is the analogy that finally made it click for me. Imagine you want to break into an office building. The front door has a guard, a metal detector, and a sign-in sheet. You will never get past it. But there is a delivery driver who walks in and out all day, and the guard waves him through without checking. So you stop attacking the door. You call the driver and convince him to deliver a package on your behalf, inside the building.

The server is the delivery driver. He is already inside the network. He can reach the database, the cloud metadata service, the internal admin panel, your private S3 bucket. From the outside, you can reach none of those. Hand the server a URL and convince him to fetch it for you, and suddenly you can reach all of them, through him.

Most SSRF defenses look the same on paper: check the URL before fetching it. If the hostname looks private (127.0.0.1, localhost, 169.254.169.254 for AWS metadata, anything in 10.x.x.x), reject. Sounds simple.

It is not. Both of my CVEs are about the same failure mode: the URL filter looked fine, but the attacker had a way to write a private address that did not look private to the filter.


CVE #1: open-webSearch (CVE-2026-42260)

What is open-webSearch?

open-webSearch is a small open-source project (around 1k stars on GitHub) that gives AI agents the ability to search the web and fetch web pages. It worked over MCP, the Model Context Protocol that Anthropic introduced in late 2024.

So open-webSearch sits between an LLM and the open internet. The LLM says "fetch me this URL", open-webSearch fetches it, returns the text. The Docker image runs by default with no authentication and listens on every network interface.

The bug

open-webSearch had a function called isPrivateOrLocalHostname whose job was to look at a URL and decide whether it pointed somewhere private. Here is the actual code:

export function isPrivateOrLocalHostname(hostname: string): boolean {
  const host = hostname.trim().toLowerCase();
  if (!host) return true;
  if (host === 'localhost' || host.endsWith('.localhost')) return true;
  if (host === 'metadata.google.internal' || host === 'metadata.azure.internal') return true;
  const integerIp = parseIntegerIpv4Literal(host);
  if (integerIp && isPrivateIpv4(integerIp)) return true;
  if (isPrivateOrLocalIp(host)) return true;
  return false;
}
function isPrivateOrLocalIp(ip: string): boolean {
  const version = isIP(ip);
  if (version === 4) return isPrivateIpv4(ip);
  if (version === 6) return isPrivateIpv6(ip);
  return false;
}

It looks reasonable. It is not.

Two things were missing.

First, IPv6 addresses in URLs are written inside square brackets: http://[::1]/. Node's URL parser keeps the brackets in the hostname. The filter received the string [::1] and asked "is this an IP?". Node answered "no" (because of the brackets). The filter let it through. [::1] is loopback. The most basic local address there is.
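
You can see the mismatch in two lines of Node (a minimal sketch; the bracket-stripping is mine, not the project's):

```typescript
import { isIP } from 'node:net';

// Node's WHATWG URL parser keeps the square brackets on IPv6 hosts:
const { hostname } = new URL('http://[::1]/');
console.log(hostname);       // '[::1]'

// net.isIP() does not understand brackets, so "is this an IP?" says no:
console.log(isIP(hostname)); // 0 — not recognized as an IP address

// Strip the brackets and the same string is plainly loopback:
const bare = hostname.replace(/^\[|\]$/g, '');
console.log(isIP(bare));     // 6 — valid IPv6, and ::1 is loopback
```

That gap between what the URL parser hands you and what the IP validator expects is the entire first bypass.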

Second, the filter never resolved DNS. Point a domain you control at 127.0.0.1, give the tool the domain name, the filter sees a public-looking hostname, the fetcher resolves it, gets loopback, connects.

So http://[::1]/ (or any of a dozen IPv6 variations) was enough to make the server fetch from itself, or anywhere else on the internal network. Including the AWS metadata endpoint, which on a compromised cloud deployment leaks credentials.

Because the Docker image accepts requests from anywhere with no authentication, you did not even need an account. Anyone on the network could trigger the fetch. That is why this one is rated 8.2 (High).

The full advisory, with all the details, is at: https://github.com/Aas-ee/open-webSearch/security/advisories/GHSA-v228-72c7-fx8j

CVE #2: PromptHub (CVE-2026-42261)

How I found this one

Once I found the first bug (and learned what SSRF is in the process), the obvious question was: how many other projects have written almost-the-same filter, with almost-the-same hole? I described the pattern to Claude (a hand-rolled function that pattern-matches IPv6 strings to decide if a host is private) and asked it to search GitHub for matches. PromptHub was one of the hits.

What is PromptHub?

PromptHub is a self-hosted, local-first prompt and skill manager for LLM users. You save prompts, version them, share them with your team, and import "skills" (prompt bundles) from a remote URL. Young project, around 1k stars, the kind of tool people deploy on a small VM at work and let their team use.

That import-from-URL feature is where the bug lived.

The bug

PromptHub has an endpoint that lets an authenticated user paste a URL, the server fetches it, the response body comes back. The maintainer knew this was dangerous and wrote a private-IP filter, including a custom isPrivateIPv6:

function isPrivateIPv6(address: string): boolean {
  const normalized = address.toLowerCase().split('%')[0];
  if (normalized === '::' || normalized === '::1') {
    return true;
  }
  if (normalized.startsWith('::ffff:')) {
    const mapped = normalized.slice('::ffff:'.length);
    return net.isIP(mapped) === 4 && isPrivateIPv4(mapped);
  }
  // ...generic parser that only inspects the first two groups...
}

He tried. Same outcome. The filter caught the obvious cases (the literal string ::1) but missed the equivalent ways to write the same address. 0:0:0:0:0:0:0:1 is ::1 in long form. ::ffff:7f00:1 is 127.0.0.1 written in IPv6. The OS treats them all as the same destination at connect time. The filter, doing string comparisons, did not.

A representative request:

curl -X POST https://prompthub.example/api/skills/fetch-remote \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"url":"https://[::ffff:7f00:1]:9999/"}'
# response: contents of whatever was running on loopback port 9999

The CVE is rated 7.1 (High). It is a notch lower than the first one only because it requires an authenticated account. On deployments where registration is open (a documented, supported setting), "authenticated" means "filled out a signup form".

The full advisory, with all the details, is at: https://github.com/legeling/PromptHub/security/advisories/GHSA-9fhh-fjfg-5mr6

What both bugs have in common

Both maintainers tried. Both wrote private-IP filters. Both got caught by the same thing: IPv6 has too many ways to write the same address, and a hand-rolled list of "private patterns" will always miss one. The fix in both cases was to stop pattern-matching strings and instead parse the address with a real IP library (ip-address in Node, ipaddress in Python), let it canonicalize, then ask "is this in a private range?".
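
Node also ships a built-in that does the parsing for you: net.BlockList compares parsed binary addresses, not strings, so the spelling tricks above stop working. This is not the fix either maintainer shipped, just one vetted option, and the range list here is illustrative, not exhaustive:

```typescript
import { BlockList } from 'node:net';

// BlockList parses each address into binary form before comparing, so every
// spelling of the same address hits the same rule. Ranges are illustrative.
const privateRanges = new BlockList();
privateRanges.addAddress('::1', 'ipv6');             // IPv6 loopback
privateRanges.addSubnet('127.0.0.0', 8, 'ipv4');     // IPv4 loopback
privateRanges.addSubnet('10.0.0.0', 8, 'ipv4');      // RFC 1918
privateRanges.addAddress('169.254.169.254', 'ipv4'); // cloud metadata endpoint

console.log(privateRanges.check('0:0:0:0:0:0:0:1', 'ipv6')); // true — long-form ::1 still matches
console.log(privateRanges.check('127.0.0.1'));               // true
```

Whatever library you pick, the shape of the fix is the same: parse, canonicalize, then range-check — never pattern-match the raw string.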

That is the technical lesson. The story I actually want to tell is what happened next.


The part nobody writes about: talking to the maintainers

This is the part that surprised me. When you find a bug in a massive enterprise project, there is a sterile form to fill out. In small open-source projects, it is just one or two people in a different timezone.

When I found these bugs, neither project had the GitHub Security Advisory feature enabled. There was no "Report a Vulnerability" button. I couldn't just post a public issue — that would be like handing a loaded gun to every attacker on the internet before a fix existed.

So I wrote a short, blunt email. I led with the impact, attached a PoC, and suggested a patch.

One detail I keep finding funny: I sent both initial reports in English and Chinese, side-by-side. Both maintainers' GitHub profiles suggested they spoke Chinese. Whether the translation helped or just signaled "I am taking this seriously," the replies came back in English, but the ice was broken.


I didn't just dump the bug over email (email is not a secure channel either). I asked the maintainers to enable GitHub's Security Advisory feature so we could handle it properly. Once they opened it, the process shifted into a real collaboration:

  1. The Report: I submitted the full details privately via the new tab.
  2. The Collaboration: We worked together in a private fork. I sent the PoC and a candidate fix, they came back with questions.
  3. The Iteration: I added test cases for variants I had missed, and they adapted the patch into their codebase.

The final responses from both maintainers were genuinely heartwarming. legeling, the maintainer of PromptHub, wrote:

Thanks again for the excellent report and patch. I pulled the temporary advisory fork and integrated the fix into main, so your work directly helped resolve this issue. I really appreciate the quality of the report, the PoC, and the suggested remediation. … And thank you again for contributing — for future non-security improvements or bugs, please feel free to open public PRs or issues directly. I'd be very happy to have your continued contributions to the project.

And Aas-ee, the maintainer of open-webSearch wrote:

Thanks for the thorough work on this and for iterating so quickly on the follow-up issues. I really appreciate the care and responsiveness throughout the process.

A few things I learned:

  • Lead with the impact, not the class. "Your IPv6 filter has a bug" is a snooze. "Any internet user can read your internal credentials" gets a reply.
  • Send a working PoC. A maintainer wants to run something and watch the bug fire. A short reproduction beats three paragraphs of explanation.
  • Suggest a fix, but don't be precious about it. Send a snippet. If they take it, great. If they don't, it's their call.
  • Be patient with timezones. Most maintainers are doing this for free on a weekend.
  • Say thank you. They fixed a security bug because a stranger emailed them. That deserves respect.

What you should take from that

The AI tooling space right now is full of small projects shipped fast by people who are not "security people." These bugs aren't exotic, they are the same web bugs we've known about for a decade, sitting in code that is two months old.

  • If you're a dev: Don't hand-roll your URL filter. Use a real IP library.
  • If you're a researcher: You don't need a PhD. You need a pipeline, an interesting repo, and an AI agent to help you bridge the gap.

Coming up next

I started with small fish. The list of CVEs in disclosure right now is growing, and some of the names are ones you probably use every day. I'll write those up as the embargoes lift.

What was your first CVE, or which project would you go looking in first?