Every security leader I've talked to over the past two decades has struggled with the same question: how do you prove that your security program is working?

It sounds simple. It is not. And the way most organizations answer it today is by actively making their security programs worse.

In my previous articles, I wrote about how the CISO role has evolved from a technical gatekeeper to an architecture owner, and why the problems security leaders face today are fundamentally architectural problems. This article tackles the one problem that has haunted every phase of that evolution: measurement.

The Paradox at the Heart of Security Measurement

Here's the fundamental problem:

The goal of a security program is to prevent bad things from happening. But "nothing happened" is not a measurable outcome.

A CEO once told a security leader I was working with: "I'm only going to judge you based on security incidents, because it's all I have." That CISO's takeaway was blunt: "I've got to give him more than that." But what? If you report a quarter with zero incidents, did your program succeed, or was it just luck? If you report a quarter with three incidents, is that a failure, or does it indicate your detection methods are improving?

This is not a theoretical problem. It's the daily reality for every security leader who has to justify their budget, headcount, and existence to a board that thinks in terms of revenue, margins, and growth percentages.

The result is what I call the metrics trap: security teams measure what they can count rather than what actually matters. And then they optimize for the numbers instead of the outcomes.

Three Metrics That Are Actively Hurting Security Programs

1. Vulnerability Counts

Every security program I've seen has some version of this: a dashboard showing the number of open vulnerabilities, categorized by severity, with a trend line showing whether the number is going up or down.

On the surface, this seems reasonable. More vulnerabilities are bad. Fewer are good. The trend should go down.

In practice, this metric incentivizes exactly the wrong behavior. Teams chase volume, closing the easiest vulnerabilities first to make the numbers look good, while the 15 critical vulnerabilities that actually pose business risk sit in the backlog because they require downtime, cross-team coordination, or architectural changes that nobody wants to approve.

I've walked into organizations where the vulnerability dashboard was green ("We remediated 94% of findings this quarter") while the remaining 6% included unpatched internet-facing systems with known exploits. The dashboard said "healthy." The architecture said "breach waiting to happen."

The problem isn't that vulnerability counts are useless. The problem is that without context (what's exposed, what's exploitable, and what the business impact would be), they create a false sense of security. And false security is worse than no security, because it prevents the conversations that need to happen.

2. Phishing Click Rates

This is the poster child for bad security metrics, and it's one I hear almost every security leader criticize.

The standard approach: run a phishing simulation, measure how many employees clicked, and report the percentage. If the click rate drops, security awareness is improving. If it goes up, you need more training.

The problems with this are well understood by practitioners, even if they haven't filtered up to every board:

First, click rates are trivially gameable. The harder the simulation is, the higher the rate goes. Make it easier, and the rate goes down. Neither change reflects the actual security posture.

Second, click rates measure the wrong behavior. What you actually want to know is: when an employee receives a suspicious email, do they report it? The reporting rate is the metric that matters; it tells you whether your security culture is functioning, whether employees trust the security team enough to flag something, and whether your detection pipeline has human sensors feeding it. But almost nobody measures reporting rates. They measure clicks because clicks are easier to count.

Third, and this is the one that makes me the most uncomfortable, phishing simulations that punish employees for clicking create fear, not security. I've spoken with security leaders who have watched phishing programs erode the trust between security teams and the rest of the organization. Employees stop engaging with the security team because they associate it with gotcha tests and shaming.

I think I heard this one from a CISO on an episode of the CISO Series podcast:

"Phishing click rates as a standalone metric are completely meaningless."

I agree. And yet they remain one of the most commonly reported security metrics to boards, because they produce a clean number and a clear trend line. The board likes it. The metric is useless. And we keep reporting it.

3. Mean Time to Detect / Mean Time to Respond (MTTD/MTTR)

For years, MTTD and MTTR have been treated as the gold standard of security operations metrics. The logic is intuitive: the faster you detect a threat and respond, the better your security program.

But a veteran security leader once said:

"MTTD/MTTR doesn't incentivize the right behavior. It incentivizes reactive security: detecting and responding to threats that have already gotten in, rather than preemptive security that stops the adversary from getting in in the first place."

Think about it: if your primary metric is how fast you respond to incidents, your entire program optimizes for incident response. You staff the SOC, you tune the SIEM, you run tabletop exercises. All necessary. But you underinvest in the architectural changes that would prevent the incidents from occurring, because those don't improve your MTTD score.

I've seen this firsthand. Organizations with excellent MTTD numbers and terrible security architectures. They can detect a breach in 4 hours and respond in 12. But their architecture allows lateral movement across the entire environment because nobody invested in segmentation, least-privilege access, or reducing the blast radius. They're measuring the speed of the ambulance, not the safety of the road.

I'm not saying MTTD/MTTR are worthless. They have value as operational metrics for SOC efficiency. But when they become the primary measure of security program effectiveness, they distort priorities, making the organization less secure.

Why We Measure the Wrong Things

The metrics trap is the result of three structural forces that have shaped security measurement for the past decade.

Force 1: Audit-Driven Measurement

For many organizations, the first time anyone asked "how do we measure security?" was during a compliance audit. The auditor needed evidence. The security team produced numbers. Those numbers became the metrics.

The problem is that auditors and security leaders have different objectives. An auditor needs to verify that controls exist and are functioning. A security leader needs to understand whether those controls are actually reducing risk. These are not the same question, but because the audit conversation came first, audit-friendly metrics became the default.

I worked with an organization that had been reporting the same 12 metrics for seven years because that's what their first PCI auditor asked for. Nobody had ever questioned whether those metrics still reflected the organization's actual risk profile. The threat landscape had changed completely. The metrics hadn't changed at all.

Force 2: Vendor-Defined KPIs

Every security product comes with a dashboard. Every dashboard comes with default metrics. Those metrics were designed by the vendor to make their product look effective. They were not designed to help you understand your security posture.

When your EDR vendor shows you "threats blocked," that number tells you how many alerts the tool generated and classified as blocked. It doesn't tell you whether any of those were real threats, whether anything got through, or whether your architecture is resilient to the threats the tool can't see.

I'm not blaming vendors; they're designing for their use case. But when security teams roll up vendor dashboards into executive reports without translating them into business context, they're presenting the vendor's story, not their own. And the board can't tell the difference.

Force 3: The "Easy to Count" Trap

Humans default to measuring what's easy to measure. Vulnerability counts are easy. Phishing click rates are easy. Number of policies reviewed, patches applied, training sessions completed… all easy to count.

What's hard to measure: whether your security architecture reduces the blast radius of a breach. Whether your data classification is actually preventing sensitive information from reaching unauthorized AI tools. Whether your incident response plan would work under pressure. Whether your employees trust the security team enough to report a problem before it becomes a crisis.

The things that matter most in security are the hardest to quantify. And so we quantify what's easy and pretend it's what matters. I've heard this more than once:

"We haven't yet reached consensus on whether we're focusing on measurement or metrics, or whether we're engaging in storytelling or vision sharing. And, our executive team has not agreed on a unified approach to defining success."

That's an honest assessment, and it reflects where most of the industry actually is, despite what the conference slides suggest.

What I'd Measure Instead

I don't have a perfect answer. Nobody does. But after 18 years of building security architectures and watching organizations struggle with this, here's the framework I've converged on. It's built on three principles:

  • Measure outcomes, not activity.
  • Measure architecture, not tools.
  • Measure what you'd want to know the morning after a breach.

Principle 1: Measure Outcomes, Not Activity

Stop reporting how many vulnerabilities you patched. Start reporting on whether your critical business systems are resilient to the vulnerability classes that are actually exploited in your industry.

Stop reporting phishing click rates. Start reporting the percentage of suspicious emails that employees report, the average time from report to security team triage, and whether reported emails have led to early detection of real attacks.

Stop reporting the number of policies you've written. Start reporting the percentage of those policies that are technically enforced, meaning a violation is physically prevented by a control, not just prohibited by a document.

The distinction is subtle but transformative. Activity metrics tell you what your team did. Outcome metrics tell you whether it mattered.
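
To make that contrast concrete, here's a minimal sketch of how the phishing reporting metrics above might be computed. The event schema and every field name are assumptions for illustration, not any particular product's API:

    from dataclasses import dataclass
    from datetime import datetime
    from statistics import mean
    from typing import Optional

    @dataclass
    class PhishEvent:
        # One suspicious email as seen by one employee (hypothetical schema).
        received_at: datetime
        reported_at: Optional[datetime]  # None if the employee never reported it
        triaged_at: Optional[datetime]   # None if security never triaged the report
        led_to_detection: bool           # True if the report surfaced a real attack

    def outcome_metrics(events: list[PhishEvent]) -> dict:
        reported = [e for e in events if e.reported_at is not None]
        triaged = [e for e in reported if e.triaged_at is not None]
        return {
            # The metric that matters: do employees flag what they see?
            "reporting_rate": len(reported) / len(events),
            # Average time a report waits before a human looks at it (hours).
            "avg_report_to_triage_hours": mean(
                (e.triaged_at - e.reported_at).total_seconds() / 3600
                for e in triaged
            ) if triaged else None,
            # How often human sensors actually fed the detection pipeline.
            "reports_leading_to_detection": sum(e.led_to_detection for e in reported),
        }

Nothing in that sketch is hard to build. What's hard is deciding to report it instead of the click rate.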

Principle 2: Measure Architecture, Not Tools

This connects directly to the architecture-owner concept I wrote about in my previous article. If the CISO's job is now architectural, the metrics should reflect the health of the architecture.

Here's what I mean:

Blast radius. If an attacker compromises a single endpoint, how far can they move laterally? This is a measurable architectural property. You can test it. You can simulate it. You can trend it over time. And it tells you more about your actual security posture than any vulnerability count ever will.
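
Here's a minimal sketch of what that measurement can look like, assuming you've already modeled your environment as a graph of hosts and the connections your architecture allows between them. Building that graph is the real work; the function and the toy data below are purely illustrative:

    from collections import deque

    def blast_radius(allowed: dict[str, set[str]], entry_point: str) -> float:
        # Fraction of all hosts an attacker could reach from one
        # compromised endpoint by following allowed connections (BFS).
        seen = {entry_point}
        queue = deque([entry_point])
        while queue:
            host = queue.popleft()
            for neighbor in allowed.get(host, set()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return len(seen) / len(allowed)

    # Toy data: the same three hosts, flat vs. segmented.
    flat = {"laptop": {"db", "hr"}, "db": {"hr"}, "hr": {"db"}}
    segmented = {"laptop": set(), "db": {"hr"}, "hr": set()}
    print(blast_radius(flat, "laptop"))       # 1.0: everything is reachable
    print(blast_radius(segmented, "laptop"))  # ~0.33: contained to the endpoint

In the toy example, segmentation cuts the blast radius from all of the environment to a third of it. Run that from a handful of realistic entry points and trend it quarterly, and you have a number a board can actually reason about.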

Control point coverage. What percentage of your data flows pass through an enforcement point where policy can be applied? If 80% of your employees' work happens in a browser but you have no browser-layer controls, your control point coverage has an 80% gap. That's an architectural metric that directly translates to risk.
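
Measuring this doesn't need to be sophisticated to be useful. A minimal sketch, assuming a hypothetical inventory of data flows where each flow records the layers it traverses (all names are illustrative):

    def control_point_coverage(flows: list[set[str]], control_points: set[str]) -> float:
        # Fraction of data flows that pass through at least one layer
        # where policy can actually be enforced.
        covered = sum(1 for path in flows if path & control_points)
        return covered / len(flows)

    # Toy data: most work happens in the browser, with no browser-layer control.
    flows = [
        {"browser", "saas_app"},
        {"browser", "ai_tool"},
        {"vpn_gateway", "datacenter"},
    ]
    print(control_point_coverage(flows, {"vpn_gateway", "secure_browser"}))  # ~0.33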

Integration density. How many tools are involved in detecting, triaging, and responding to a security event? If the answer is seven tools that require manual correlation, your architecture is fragile. If the answer is two integrated platforms, your architecture is operable. Fewer integration points mean faster response times, fewer handoff errors, and less alert fatigue.

Policy enforcement rate. What percentage of your security policies are enforced technically versus relying on human compliance? A DLP policy that blocks the upload of sensitive data to unapproved AI tools is enforced. A policy that says "employees should not paste sensitive data into AI tools" is aspirational. The ratio between enforced and aspirational policies tells you how much of your security posture depends on people never making mistakes, which, as we all know, is not a sustainable strategy.
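
The ratio itself is trivial to compute; the honest work is classifying each policy. A minimal sketch over a hypothetical policy inventory:

    # Hypothetical inventory: is a violation technically prevented
    # by a control (True), or merely prohibited by a document (False)?
    policies = {
        "block_uploads_of_sensitive_data_to_unapproved_ai": True,  # DLP rule
        "do_not_paste_sensitive_data_into_ai_tools": False,        # aspirational
        "mfa_required_for_admin_access": True,                     # IdP enforces it
        "personal_usb_storage_prohibited": False,                  # policy PDF only
    }

    enforced = sum(policies.values())
    print(f"Policy enforcement rate: {enforced / len(policies):.0%}")  # 50%

Everything below 100% is posture that depends on people never making a mistake.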

Principle 3: Measure What You'd Want to Know the Morning After

This is the gut-check principle. Imagine you wake up tomorrow and learn your organization has been breached. In the first 60 minutes, what questions will the board ask?

They won't ask how many vulnerabilities you patched last quarter. They'll ask:

  • How did the attacker get in?
  • How far did they get?
  • What data was exposed?
  • When did we find out?
  • What's the business impact?
  • How fast can we recover?

Now work backward. Every metric you report should contribute to your ability to answer those six questions with confidence. If a metric doesn't help you answer any of them, it's noise.

The security leader whose metrics allow them to answer "the attacker got in through vector X, was contained to segment Y, the data exposed was limited to category Z, and we can recover in N hours" has earned the board's trust. The one who reports "we blocked 1.2 million threats this quarter" has told the board nothing useful and will be scrambling when those six questions arrive.

The Storytelling Shift

Here's a trend I've noticed accelerating over the past two years:

The best security leaders are moving away from metric-heavy board presentations entirely. They're replacing dashboards with narratives.

Instead of showing a line graph of vulnerability remediation rates, they're saying:

"Last quarter, we identified a critical architectural weakness in how our payment systems communicate with third-party processors. We redesigned the integration to eliminate the exposure. Here's what the risk looked like before, here's what it looks like now, and here's what it would have cost us if it had been exploited."

That's not a metric. It's a story. And it conveys more about the security program's effectiveness in three sentences than 30 slides of dashboards ever could.

The reason this works is that boards don't think in metrics. They think in terms of risks, decisions, and outcomes. "Are we safe?" is not a metrics question. It's a judgment question. And the security leader's job is to provide enough context for the board to make that judgment confidently, not to overwhelm them with numbers that require a cybersecurity degree to interpret.

One security leader I worked with put it well: the number one metric should be qualitative, not quantitative. You should be able to walk into a board meeting and say, "We are a business that does X. The security risks to that business are Y. We have addressed the most significant ones, and here's our plan for the rest." If you can do that credibly, the board doesn't need a dashboard.

Now, I want to be honest: not every board is ready for this. Some boards still want numbers because they feel objective, while narratives feel subjective. If that's your board, give them numbers, but frame them in business terms. Don't say "MTTR is 4.2 hours." Say "When a security incident occurs, our average time from detection to containment is 4 hours, which limits our exposure to approximately X dollars of business disruption." Same number. Completely different conversation.

What This Means If You're Struggling With Metrics

If you're a security leader who dreads board presentations because you know your metrics don't really tell the story, you're not alone. This is the second hardest unsolved problem in the profession, right behind budget justification (which is really the same problem wearing a different hat).

Here's my practical advice:

Start with the six questions. Write them down. For each one, ask yourself: "Could I answer this confidently right now?" Where the answer is no, that's where your measurement investment should go.

Kill one vanity metric per quarter. You don't have to overhaul your measurement program overnight. But every quarter, identify one metric you're reporting that doesn't actually inform decisions. Replace it with one that does. In a year, your dashboard will be fundamentally different.

Align your metrics to your architecture. If your architecture is built around network segmentation, measure blast radius. If it's built around identity, measure unauthorized access attempts and privilege escalation. If it's built around browser-layer enforcement, measure policy enforcement rates at the browser. The metrics should reflect what your architecture is designed to do, not what your vendor's dashboard happened to ship with.

Learn to tell stories. If you're more comfortable with spreadsheets than narratives, that's normal; most security leaders come from technical backgrounds. But the ability to translate a metric into a business story is the skill that separates the CISO who keeps the job from the one who gets replaced. Practice it. Get feedback from non-technical colleagues. It's a learnable skill, not an innate talent.

And the hard truth: the measurement problem won't be fully solved in your tenure. The industry has been debating this for over a decade and hasn't converged on a standard. What you can do is be honest about what you know and what you don't, measure what matters rather than what's easy, and build an architecture that makes good measurement possible. That's enough. It's more than most organizations have today.

I hope you found this useful.

Younos Nazarian is a seasoned zero-trust architect with 18+ years of experience across highly regulated industries.