Weekend notes from reading way too many CVE reports and not enough attack maps.
This weekend I went through a few write-ups and dashboards from different CVE scanners and vulnerability-checking tools. Solid work. Strong coverage. Lots of findings…
And I kept running into the same feeling I've had for years:
We're very good at detecting issues. We're still bad at understanding attacks.
This isn't a criticism of scanners. It's a limitation of the problem they're solving.
1. CVE scanners are good at signals — not meaning
Modern security scanners do their job well:
- They find vulnerable libraries
- They detect misconfigurations
- They flag exposed endpoints
- They surface known CVEs
- They generate tickets, severities, and dashboards
But they mostly answer one question:
"Is something potentially wrong here?"
What they don't answer is:
- Can this actually be exploited?
- How would an attacker move from here?
- What does this connect to?
- Did anything change that makes this risky now?
That gap — between detection and exploitability — is where noise is born.
2. Why noise exists even when tools are good
False positives are not a tooling failure. They're a context failure.
When findings are evaluated in isolation:
- Severity must be conservative
- Alerts must over-fire
- Humans must mentally stitch context together
That doesn't scale.
If you can't see how assets connect, everything looks dangerous.
3. The missing layer: attack-asset mapping as a graph
What I rarely see — and almost never see end-to-end — is a real attack path modeled explicitly.
For example, something very common in Kubernetes + cloud environments:
Public Load Balancer
→ Ingress
→ Service
→ Pod
→ ServiceAccount
→ RBAC permission
→ Secret read
→ Cloud credential
→ Data access
Every part of this chain is usually detected somewhere:
- Exposure tools flag the LB
- Image scanners flag the pod
- RBAC analyzers flag permissions
- Secret scanners flag credentials
But nobody connects the dots.
That chain — not any single alert — is the incident.
4. What ChaGu actually is (and isn't)
ChaGu is not another CVE scanner. It doesn't look for new vulnerabilities. It doesn't replace SAST, SCA, infra scanners, or secret detectors. ChaGu assumes those tools already work.
What ChaGu does instead is reason.
5. ChaGu builds a living attack-surface graph
ChaGu models the system as a graph:
Nodes (assets)
- Load balancers, ingress, services, pods
- Identities (users, service accounts, roles)
- Permissions (RBAC, IAM policies)
- Secrets and credentials
- Data stores and external systems
- Scanner findings (enriched, not raw)
Edges (capabilities)
- routes to
- runs as
- can access
- allows read/write
- exposes
- depends on
This isn't "graph for graphs' sake". It's attack mechanics encoded structurally.
6. From findings → attack paths
Once you have the graph, everything changes.
Instead of:
- "Ingress exposed"
- "Pod vulnerable"
- "RBAC permissive"
- "Secret exists"
You get:
"A publicly reachable endpoint can traverse workload identity and permissions to access sensitive data."
That's not four alerts. That's one attack path.
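That sentence can literally be a query. A hedged sketch, reusing the illustrative labels from section 5 (Part 2 goes deeper on this):
// Sketch: one query instead of four disconnected alerts
MATCH p = (e:Entrypoint {public: true})-[*1..12]->(s:Secret)
RETURN [n IN nodes(p) | labels(n)[0]] AS attackPath
Instead of four severities to triage, you get one ordered chain of hops to break.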
7. How this kills false alarms (for real)
ChaGu doesn't silence alerts arbitrarily. It removes alerts that should never have existed.
Context-aware suppression
- Vulnerability with no reachable path → suppressed
- Public service protected by auth + network policy → downgraded
- Issue in dead or isolated service → ignored
Graph-based deduplication
Ten alerts pointing to the same path become:
"Fix this path."
Drift-based alerting
- Same vuln, same context → no alert
- Same vuln, new path → alert
This is how mature systems behave.
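The suppression rule itself is just a graph query. A rough sketch under the same illustrative schema: find flagged vulnerabilities that no public entrypoint can reach, and treat them as noise:
// Sketch: vulnerabilities with no inbound path from any public entrypoint
MATCH (asset)-[:HAS_VULN]->(v:Vulnerability)
WHERE NOT EXISTS {
  MATCH (:Entrypoint {public: true})-[*1..10]->(asset)
}
RETURN v // candidates for suppression, not deletion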
8. What CVE tools gain from this layer
ChaGu doesn't compete with CVE scanners. It consumes them.
Think:
CVE / Infra / Secret / Code scanners
↓
ChaGu
↓
Attack-surface intelligence
The output is no longer:
- "500 findings"
It becomes:
- "3 reachable attack paths"
- "1 new path introduced last week"
- "This path touches a crown-jewel asset"
That's actionable.
9. Why this matters organizationally
This isn't about prettier dashboards.
It results in:
- Fewer alerts
- Higher trust in severity
- Faster remediation
- Less engineer fatigue
- Clear ownership ("fix this edge, not these 12 tickets")
Security stops yelling. Teams start listening.
10. Why this space needs more writing
There's surprisingly little concrete writing about attack-path modeling in modern cloud-native systems. Not marketing. Not theory. Actual mechanics.
That gap is worth exploring openly.
This isn't a proprietary trick. It's a modeling approach. And it feels like a natural next step for any ecosystem drowning in findings but starving for understanding.
Closing thought
CVE scanners are excellent at telling us what exists. ChaGu is about understanding what matters.
Security doesn't need more alerts. It needs systems that can say:
"This path is real. This one isn't. And this is why."
That shift — from findings to attack-surface reasoning — is where the next level of security maturity lives.
[Diagram placeholder: the simple attack-path chain above; an optional variant adds risk labels per hop.]
Part 2: How to Query Attack Paths (and Stop Treating Security Like a Ticket Queue)
In Part 1, I argued that scanners are great at producing findings — but the real story is the path: how an attacker can move through your system.
ChaGu's core is an attack-surface graph. Part 2 is about the practical question:
How do you query that graph like a security engineer who wants answers fast?
The mental model: you don't hunt findings — you hunt paths
Most teams still operate like this:
- "Show me critical issues"
- "Sort by severity"
- "Filter by repo"
- "Open tickets"
ChaGu flips it into:
- "Show me all paths from public exposure to secrets"
- "Show me what changed since last week"
- "Show me the smallest fix that breaks the chain"
This is the difference between a scanner output and attack-surface intelligence.
Query 1: "What are my public entrypoints?"
Find anything reachable from the internet.
Pseudo-query (Cypher-style):
MATCH (e:Entrypoint {public:true})
RETURN e
What you do with it:
- inventory of public LBs / ingresses / gateways
- baseline for drift (new exposure = alarm)
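The drift half can be a one-liner too, if you assume each node gets a first_seen timestamp at ingestion (an assumption about the pipeline, not a given):
// Sketch: public entrypoints first seen in the last 24 hours
MATCH (e:Entrypoint {public: true})
WHERE e.first_seen > datetime() - duration('P1D')
RETURN e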
Query 2: "Show paths from public entrypoints to secrets"
The classic "LB → Pod → SA → RBAC → Secret" chain.
MATCH p = (e:Entrypoint {public:true})-[:ROUTES_TO|FORWARDS_TO|SELECTS|RUNS_AS|BOUND_TO|ALLOWS|CAN_READ*1..12]->(s:Secret)
RETURN p
What you get:
- not "secrets exist"
- but reachable secrets (the ones that matter)
Query 3: "Show paths from public entrypoints to crown jewels"
Define crown jewels once (prod DBs, customer exports, payment systems).
MATCH p = (e:Entrypoint {public:true})-[*1..15]->(cj:CrownJewel)
RETURN p
ORDER BY length(p) ASC
Why this is good:
- shortest paths tend to be the most dangerous
- this becomes your "top 10 attack paths" list
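If the backing store is Neo4j (an assumption), shortestPath can do the heavy lifting instead of sorting by length:
// Variant sketch: let the engine find the shortest path per entrypoint/jewel pair
MATCH p = shortestPath((e:Entrypoint {public:true})-[*1..15]->(cj:CrownJewel))
RETURN p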
Query 4: "Which vulnerabilities are actually reachable?"
Scanners flag CVEs. ChaGu answers: is there a path to exploitation?
MATCH (v:Vulnerability)
WHERE v.severity IN ["Critical","High"]
MATCH (asset)-[:HAS_VULN]->(v)
MATCH (e:Entrypoint {public:true})-[*1..10]->(asset)
RETURN v, asset
Outcome:
- your "critical CVE" list shrinks into "critical-and-reachable"
- instant false-positive pressure relief
Query 5: "What changed since last deploy?"
This is where maturity lives: drift.
MATCH (c:Change {window:"last_7_days"})-[:INTRODUCED]->(x)
RETURN c, x
Then ask impact:
MATCH (c:Change {window:"last_7_days"})-[:INTRODUCED]->(x)
MATCH p = (:Entrypoint {public:true})-[*1..12]->(x)-[*0..8]->(:CrownJewel)
RETURN p
This is the "why now?" engine.
Query 6: "The Remediation Query"
ChaGu's best output is not "fix 12 things". It's "break this edge."
Examples of "cut points":
- remove a ClusterRoleBinding
- restrict secrets/get
- tighten service exposure
- add network policy
- rotate/relocate secrets
Conceptually:
MATCH p = (:Entrypoint {public:true})-[*1..15]->(:CrownJewel)
WITH nodes(p) AS n
RETURN n
Then rank candidate cuts by (a rough sketch follows this list):
- impact (how many paths it breaks)
- feasibility (how easy to change)
- blast radius (how safe to apply)
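A crude but useful impact ranking, still under the same illustrative schema: count how many attack paths each node sits on, then look hardest at the nodes that appear everywhere:
// Sketch: nodes participating in the most entrypoint-to-crown-jewel paths
MATCH p = (:Entrypoint {public:true})-[*1..15]->(:CrownJewel)
UNWIND nodes(p) AS candidate
RETURN candidate, count(DISTINCT p) AS pathsTouched
ORDER BY pathsTouched DESC
LIMIT 10
It ignores feasibility and blast radius, but it turns "where do we cut?" from a debate into a sorted list.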
What this looks like operationally
ChaGu queries become operational workflows:
- Daily: "new public exposures?"
- Weekly: "new paths to crown jewels?"
- Per PR / deploy: "did we introduce a new reachable chain?"
- Incident mode: "show all paths from compromised pod"
You stop doing "alert triage". You do attack-path management.
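Incident mode, for example, might look like this sketch, with $compromisedPod as a hypothetical query parameter:
// Sketch: everything sensitive reachable onward from a pod presumed compromised
MATCH p = (pod:Pod {name: $compromisedPod})-[*1..10]->(target)
WHERE target:Secret OR target:CrownJewel
RETURN p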
The punchline
Scanners answer:
"Is this vulnerable?"
ChaGu answers:
"Is this exploitable here — and what path makes it real?"
That's how you reduce false alarms without gambling on silence.
Scanner vs ChaGu comparison table
| Dimension | CVE / SAST / SCA / Cloud scanners | ChaGu (Reasoning Layer) |
| ------------------------- | ------------------------------------ | ------------------------------------ |
| Primary output | Findings / tickets | Attack paths + decisions |
| Strength | Detection coverage | Context + correlation |
| Unit of analysis | Single issue (file/package/resource) | Connected system graph |
| Answers | "Is this vulnerable?" | "Does this matter *here*?" |
| Reachability | Usually limited or absent | Core feature (path modeling) |
| Deduplication | Mostly heuristic | Path-based clustering (root cause) |
| False positives | Reduced via rules/filters | Reduced via context + reachability |
| Drift detection | Limited | Native ("same vuln, new path") |
| Prioritization | Severity-based | Impact-based ("crown-jewel paths") |
| Explainability | Scanner message + metadata | "Why now / what path / what to cut" |
| Best for | Shift-left + baseline hygiene | Production prioritization + maturity |
| Does it replace scanners? | N/A | No — consumes them |
Closing thought
CVE scanners are excellent at telling us what exists. ChaGu is about understanding what matters.
Security doesn't need more alerts. It needs systems that can say:
- "This path is real. This one isn't. And this is why."
If you're running a security or platform team, here's what this means practically:
- Audit your current state: How many scanner findings do you have? How many are actually being fixed? What's your false-positive rate? (Be honest.)
- Map one attack path manually: Pick your most critical service. Trace the path from public exposure to sensitive data. Write it down. You'll immediately see what your scanners miss.
- Ask the path questions: "Show me public → secrets paths." "What changed this week?" "Which vulns are reachable?" If you can't answer these quickly, you have a reasoning gap — not a detection gap.
- Start modeling connectivity: You don't need ChaGu to start thinking this way. A spreadsheet, a whiteboard, even a basic graph DB can get you started. The point is to stop treating findings as isolated and start mapping how they connect.
- Measure what matters: Track "reachable critical paths" instead of "total findings." Track "paths introduced" instead of "vulnerabilities detected." The metrics you choose shape the behavior you get.
Attack-path reasoning isn't a tool feature. It's a way of thinking about security.
ChaGu just makes it scalable.
But the mental shift — from findings to paths, from alerts to decisions, from coverage to clarity — that's the real unlock.
And it's available to any team willing to ask better questions.