I work in enterprise software deployment. Part of my job is making sure applications and customer systems get patched when new vulnerabilities are published. For a long time, like most people in this space, I used CVSS scores to decide what to prioritize.

It did not take long to realize this was wrong.

CVSS measures technical severity. It tells you how bad a vulnerability is in theory. It does not tell you whether anyone is actually exploiting it right now, whether working exploit code is publicly available, or whether the probability of exploitation in the next 30 days is 2% or 94%. A CVSS 9.8 with no known exploitation and no public exploit code is very different from a CVSS 7.2 that is actively being used in ransomware campaigns. But if you are sorting by CVSS alone, you are patching the wrong thing first.

I started looking for a better way to think about this.

The signals that actually matter

After a lot of research I landed on four data signals that together give a much clearer picture of real world risk.

CVSS — the baseline

CVSS is still useful. It tells you the technical severity of the flaw itself: how bad the vulnerability is in terms of attack complexity, privileges required, and impact on confidentiality, integrity, and availability. I use it as the foundation of my composite score, NullScore, but weight it at 35% rather than treating it as the whole picture. It is necessary but not sufficient.

EPSS — the signal most people are not using

The Exploit Prediction Scoring System from First.org is a machine learning model that predicts the probability a CVE will be exploited in the wild within the next 30 days. It is updated daily, available via a free API with no authentication required, and it changes everything about how you think about urgency.

A CVE with a 0.94 EPSS score has a 94% probability of being exploited in the next month. A CVE with a 0.003 EPSS score has a 0.3% probability. That difference is enormous and CVSS alone tells you nothing about it.

I weight EPSS at 25% in NullScore. In practice it is the signal that most frequently changes the ranking order compared to a pure CVSS sort.
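Fetching the scores is simple enough to sketch. This assumes the documented First.org endpoint and response shape (a `data` array with `cve` and `epss` fields) and omits error handling and retries:

```python
import json
import urllib.request

EPSS_URL = "https://api.first.org/data/v1/epss"
BATCH = 100  # the API accepts up to 100 comma-separated CVE IDs per request


def batches(cve_ids, size=BATCH):
    """Split a list of CVE IDs into API-sized chunks."""
    for i in range(0, len(cve_ids), size):
        yield cve_ids[i:i + size]


def fetch_epss(cve_ids):
    """Return {cve_id: exploitation_probability} for the given IDs."""
    scores = {}
    for chunk in batches(cve_ids):
        url = f"{EPSS_URL}?cve={','.join(chunk)}"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        for row in data.get("data", []):
            # the API returns probabilities as strings, e.g. "0.94321"
            scores[row["cve"]] = float(row["epss"])
    return scores
```

No API key, no authentication: you can go from a list of CVE IDs to daily-fresh exploitation probabilities in a dozen lines.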

CISA KEV — confirmed active exploitation

CISA publishes a Known Exploited Vulnerabilities catalog that lists CVEs confirmed to be actively exploited by real attackers in the wild. This is not predicted exploitation like EPSS. This is confirmed. Real attackers are using these vulnerabilities right now.

U.S. federal civilian agencies are required under CISA's Binding Operational Directive 22-01 to patch KEV entries within defined deadlines. Any CVE on the KEV list should immediately jump to the top of your priority queue regardless of its CVSS score.

I weight KEV status at 20% in NullScore. The KEV catalog is published as a free JSON feed and is updated regularly, typically within a few hours of a new entry being confirmed.
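Consuming the feed takes a few lines. This sketch assumes the feed's published layout, a top-level `vulnerabilities` array whose entries carry a `cveID` field:

```python
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")


def kev_cve_set(feed: dict) -> set:
    """Extract the set of CVE IDs from a parsed KEV feed."""
    return {v["cveID"] for v in feed.get("vulnerabilities", [])}


def load_kev(url: str = KEV_URL) -> set:
    """Download the KEV feed and return the listed CVE IDs."""
    with urllib.request.urlopen(url) as resp:
        return kev_cve_set(json.load(resp))
```

Once the feed is parsed, checking whether any CVE is under confirmed active exploitation is a constant-time set lookup.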

PoC availability — the patch window signal

A Proof of Concept (PoC) is working exploit code that is publicly available for anyone to download and use. When a PoC drops for a CVE, the barrier to exploitation drops to near zero: you no longer need a sophisticated threat actor, because script kiddies can use it too.

The moment a public PoC is published your effective patch window shrinks from weeks to days. Trickest maintains a GitHub repository that tracks public PoC code across thousands of CVEs and is updated daily.

I weight PoC availability at 15% in NullScore. It is a binary signal, either exploit code is publicly available or it is not, but the impact on urgency is significant.

Putting it together: the NullScore formula

NullScore is a composite 0 to 100 score calculated as follows:

CVSS contributes 35%. The raw CVSS v3 score normalized to the 0 to 100 scale.

EPSS contributes 25%. The exploitation probability score multiplied by 100.

KEV status contributes 20%. Binary, 20 points if the CVE is on the CISA KEV list, zero if not.

PoC availability contributes 15%. Binary, 15 points if public exploit code exists, zero if not.

Stack match bonus adds up to 5%. If the CVE affects software in your specific environment it gets a small boost because a vulnerability that is irrelevant to your stack is less urgent regardless of its global risk score.

The result is a single number. Higher means patch it sooner.
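In code, this is a minimal translation of the weights above, without the missing-data handling a real pipeline needs; exact outputs may differ slightly from the rounded example figures later in the post:

```python
def nullscore(cvss: float, epss: float, on_kev: bool,
              has_poc: bool, stack_bonus: float = 0.0) -> float:
    """Composite 0-100 priority score.

    cvss: CVSS v3 base score, 0-10
    epss: EPSS exploitation probability, 0.0-1.0
    stack_bonus: 0-5 points for relevance to your environment
    """
    score = (cvss / 10) * 35           # CVSS, 35% weight
    score += epss * 25                 # EPSS, 25% weight
    score += 20 if on_kev else 0       # confirmed active exploitation
    score += 15 if has_poc else 0      # public exploit code available
    score += min(stack_bonus, 5.0)     # stack match bonus, capped at 5
    return round(score, 1)


# A critical-CVSS CVE with no exploitation signals vs. a lower-CVSS
# CVE that is on KEV, has a public PoC, and a 94% EPSS probability:
quiet_critical = nullscore(cvss=10.0, epss=0.003, on_kev=False, has_poc=False)
active_threat = nullscore(cvss=7.5, epss=0.94, on_kev=True, has_poc=True)
```

The lower-CVSS vulnerability with live exploitation signals ends up roughly 50 points ahead of the "perfect ten" that nobody is attacking.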

What this looks like in practice

Here is a concrete example of why this matters. Take two CVEs:

CVE A has CVSS 10.0, EPSS 0.3%, not on KEV, no public PoC. NullScore around 42.

CVE B has CVSS 7.5, EPSS 94%, on the KEV list, public PoC available. NullScore around 88.

A pure CVSS sort tells you to patch CVE A first. NullScore correctly identifies CVE B as the more urgent threat because it is actively being exploited, has a very high probability of continued exploitation, and anyone can download working exploit code right now.

This happens constantly in real vulnerability data. The gap between theoretical severity and actual exploitation probability is larger than most people realize.

Building the pipeline

To make NullScore work in practice I built a data pipeline in Python that pulls from seven sources daily. NVD and NIST provide the authoritative CVE registry and CVSS scores. CISA publishes the KEV catalog as a public JSON feed. First.org provides EPSS scores via a free API that accepts batches of up to 100 CVE IDs per request. GitHub Advisory Database covers open source package vulnerabilities. Trickest tracks public PoC code. OWASP provides category mapping.

Each source is fetched, normalized into a unified schema, and merged with NVD as the base record. EPSS enrichment runs last to catch any CVEs that did not have scores from earlier in the pipeline.

The whole pipeline runs daily and produces a ranked JSON output sorted by NullScore descending.
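The merge step itself reduces to a dict-of-dicts keyed by CVE ID, with NVD as the base record and each enrichment layered on top in order. A simplified sketch with illustrative field names, not the real schema:

```python
def merge_sources(nvd: dict, *enrichments: dict) -> dict:
    """Merge per-CVE records: NVD first, later sources filling in fields.

    nvd: {cve_id: {base record fields}}
    enrichments: further {cve_id: {fields}} mappings, applied in order.
    Only CVEs present in the NVD base record are kept.
    """
    merged = {cve: dict(rec) for cve, rec in nvd.items()}
    for source in enrichments:
        for cve, fields in source.items():
            if cve in merged:
                merged[cve].update(fields)
    return merged


def rank(merged: dict) -> list:
    """Sort CVE IDs by NullScore, highest (most urgent) first."""
    return sorted(merged,
                  key=lambda c: merged[c].get("nullscore", 0.0),
                  reverse=True)
```

Running EPSS last in the enrichment order means its daily probabilities overwrite any stale values picked up earlier in the pipeline.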

What I built with it

I turned this methodology into NullCVE, a free vulnerability intelligence platform at nullcve.io. It shows the ranked CVE feed, lets you filter by tech stack so you only see CVEs affecting your environment, flags KEV confirmed vulnerabilities, and shows the NullScore for every entry so you can see at a glance what needs attention today versus what can wait.

The goal was to build something that works for IT managers, help desk teams, and developers who deal with patch management every day but are not necessarily trained security analysts. Plain-English severity labels, action-oriented guidance, and a single score that removes the guesswork.

What I am still figuring out

The weighting is the part I am least confident about. The 35/25/20/15/5 split is based on my own reasoning about the relative importance of each signal, but I have not done rigorous empirical testing to validate it. Different threat environments might call for different weights; an organization facing primarily ransomware threats, for example, might want to weight KEV higher.

I am also looking at additional signals, including dark web chatter, vendor patch availability timing, and asset criticality context. The challenge is that the most valuable signals tend to require paid data sources.

The data is all free

Everything that powers NullScore is publicly available and free to access: NVD, CISA KEV, First.org EPSS, GitHub Advisory, Trickest. If you want to build your own version of this, the data is all there. I would love to hear from others who have thought about CVE prioritization differently: what signals have you found most predictive?

nullcve.io