There are several mental models worth building as a security engineer. This is one of the more practical ones — and one that's often underused during code review.
Why Code Reviews Miss the Bugs That Matter
A lot of secure code reviews end up being glorified grep sessions — scan for obvious patterns, flag missing validation, move on. That catches a class of bugs. It doesn't catch the ones that tend to show up in breach reports.
The bugs that get exploited in production are usually spread across multiple layers. They live in the distance between where input enters the system and where it eventually causes damage. If your review methodology is file-by-file, you're likely missing those paths entirely.
Source → Flow → Sink is one approach for closing that gap. It's essentially manual taint analysis — following untrusted data through a system to see where it lands, and whether anything meaningful stands in the way.
The Core Idea
The model is straightforward:

Source is where untrusted data enters — HTTP parameters, headers, cookies, uploaded files, webhook payloads, data pulled from third-party APIs. If you don't fully control it, treat it as a source.
Flow is the path that data takes through the system. It gets stored, passed between functions, forwarded to another service, deserialized. This is where controls are applied — or forgotten.
Sink is where the data lands and does something with real consequences: a database query, an outbound HTTP call, a file write, a template render, command execution. These are the places where missing controls become actual vulnerabilities.
Worth noting: This approach complements tool-based analysis, not replaces it. SAST tools are good at flagging known-bad patterns within a function. They're poor at tracking data across service boundaries, storage layers, or async flows. That's the gap this fills.
Example 1 — IDOR via Missing Authorization
Broken access control shows up constantly, and it's almost always a flow problem — the control that should exist is simply absent somewhere along the path.
# SOURCE: doc_id comes from the URL — attacker-controlled
@app.route('/api/documents/<int:doc_id>')
@require_auth  # checks: "is there a valid session?" — nothing more
def get_document(doc_id):
    # FLOW: doc_id passes through; no ownership check applied
    current_user = get_current_user()

    # SINK: attacker-controlled ID used directly in DB query
    document = db.query(
        "SELECT * FROM documents WHERE id = %s",
        (doc_id,)
    )
    return jsonify(document)
The @require_auth decorator is doing one thing: confirming a valid session exists. It says nothing about whether this user should see this document. Authentication and authorization are different checks, and this code only has one of them.
The flow from source to sink is completely unobstructed. There's no ownership assertion, no row-level check, nothing. The fix is adding AND owner_id = %s to the query, or doing an explicit check before returning the result.
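The shape of the fix can be sketched framework-free, so the check itself is visible. Here DOCUMENTS is a stand-in for the database table; in the real handler this logic is the AND owner_id = %s clause in the query (the names below are illustrative, not part of the original code):

```python
# Minimal sketch of the ownership check, with a dict standing in for
# the documents table. The point: authentication told us who the user
# is; this check decides whether they may see this particular row.
DOCUMENTS = {
    42: {"owner_id": "alice", "body": "alice's doc"},
    43: {"owner_id": "bob", "body": "bob's doc"},
}

def get_document(doc_id, current_user_id):
    doc = DOCUMENTS.get(doc_id)
    # Treat "exists but not yours" the same as "not found",
    # so the response doesn't leak which IDs are valid.
    if doc is None or doc["owner_id"] != current_user_id:
        return None
    return doc

# alice can read her own document but not bob's
assert get_document(42, "alice") is not None
assert get_document(43, "alice") is None
```

Returning the same result for "missing" and "not owned" is deliberate: distinguishing them hands an attacker an ID-enumeration oracle even when the data itself is protected.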
Attacker perspective
Log in legitimately. Note your own doc_id in the response — say it's 42. Then try 43, 44, 45. Sequential IDs mean the entire document store is accessible. With UUIDs you'd look for another endpoint that leaks IDs — there's usually one. The IDOR is the same either way.
What to look for: Any endpoint that accepts a resource identifier and retrieves data without filtering on the authenticated user. The auth decorator may exist. The ownership check may not. Those are two different code paths.
Example 2 — SSRF Leading to Cloud Metadata Access
SSRF is worth taking seriously not because the initial request is dangerous, but because of what that request can reach once it's made server-side.
// SOURCE: user-supplied URL, no destination validation
app.post('/api/preview', async (req, res) => {
  const { url } = req.body;

  // FLOW: prefix check doesn't validate the resolved destination
  if (!url.startsWith('https://')) {
    return res.status(400).json({ error: 'HTTPS only' });
  }

  // SINK: server makes outbound request to attacker-controlled target
  const response = await axios.get(url);
  return res.json({ content: response.data });
});

The startsWith('https://') check validates a string prefix, not the destination. It says nothing about where the request actually ends up: an attacker-controlled HTTPS URL can redirect to the cloud metadata endpoint at 169.254.169.254, and axios follows redirects by default. The check doesn't prevent anything meaningful.
How the attack chain develops

- The attacker submits a URL that points, directly or via a redirect, at 169.254.169.254.
- The server fetches it from inside the network, where the metadata service assumes any caller is trusted.
- The response (on AWS, for example, temporary credentials for the instance's IAM role) comes back to the attacker in the preview body.
- Those credentials work against the cloud provider's APIs from anywhere.
Attacker perspective
The "HTTPS only" check is essentially decorative. A redirect from an attacker-controlled HTTPS URL passes it, and where redirects are blocked, URL parser inconsistencies and DNS rebinding techniques bypass prefix validation anyway. The question being asked is just: does the server make outbound requests I can direct? If yes, the rest follows.
The real issue with SSRF is that internal services assume requests come from trusted peers. They weren't built to handle adversarial input. Once you can make the server send a request somewhere, you're inside that trust boundary.
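One common mitigation is to validate the resolved destination rather than the URL string. A minimal sketch in Python (the same idea applies to the Node handler above; resolves_to_public_ip is a name introduced here, not from the original code):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def resolves_to_public_ip(url: str) -> bool:
    """Rough destination check: resolve the hostname and reject
    private, loopback, and link-local targets (169.254.169.254
    is link-local). DNS rebinding can still race this check, so
    robust defenses pin the resolved IP for the actual connection
    and re-validate every redirect hop."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

# The metadata endpoint and localhost both fail the check
assert resolves_to_public_ip("https://169.254.169.254/latest/meta-data/") is False
assert resolves_to_public_ip("https://127.0.0.1/admin") is False
```

Note that checking once before the request is not enough on its own: the fetch must also refuse to follow redirects to addresses that fail the same check, or the bypass described above still works.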
How to Apply This During a Code Review
The methodology is less about reading files sequentially and more about building a map of where data flows:
- Catalog entry points first: HTTP endpoints, message queue consumers, webhook handlers, file parsers, jobs that pull from external APIs. These are your sources. List them before reading business logic.
- Identify sensitive sinks: Database queries, outbound HTTP calls, file system writes, template rendering, command execution, deserialization. Know where damage can happen.
- Trace from source toward sinks: Follow data through the codebase. Where does validation happen? Is it enforced or advisory? Does data cross a service boundary where context gets dropped?
- Read middleware and wrappers carefully: Auth checks, rate limiters, sanitizers often do less than people assume. The gap between what they're trusted to do and what they actually do is where vulnerabilities hide.
- Track cross-service trust: When Service A sends data to Service B, does B validate it? Usually not. Implicit trust between internal services is one of the more reliable places to find authorization issues.
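The first two steps above can be seeded mechanically before the manual tracing starts. A rough sketch, with deliberately illustrative patterns (tune them to the codebase and framework; the function name is made up for this example):

```python
import re
from pathlib import Path

# Illustrative patterns only -- every codebase needs its own list.
SOURCE_PATTERNS = [r"@app\.route", r"request\.(args|form|json)", r"req\.body"]
SINK_PATTERNS = [r"db\.query", r"axios\.get", r"subprocess", r"render_template"]

def map_sources_and_sinks(root: str) -> dict:
    """First-pass map of where data enters and where it can do damage.
    Tracing the flows *between* the two lists is still manual work."""
    hits = {"sources": [], "sinks": []}
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            for pat in SOURCE_PATTERNS:
                if re.search(pat, line):
                    hits["sources"].append((str(path), lineno, line.strip()))
            for pat in SINK_PATTERNS:
                if re.search(pat, line):
                    hits["sinks"].append((str(path), lineno, line.strip()))
    return hits
```

This is grep with a map attached, nothing more. Its value is forcing the review to start from entry points and consequence points rather than from whatever file happens to be open.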
What This Approach Surfaces That Tools Miss

- Flows that cross service boundaries, storage layers, or async handoffs, where automated taint tracking loses the thread.
- Controls that exist but check the wrong thing, like an authentication decorator standing in for an authorization check.
- Middleware and wrappers that do less than their names imply.
- Implicit trust between internal services, where Service B never re-validates what Service A hands it.
On Chaining Vulnerabilities
One thing this model encourages is thinking about what a bug enables, not just what it is. A medium-severity IDOR that leaks resource IDs becomes more interesting if there's an SSRF elsewhere that accepts those IDs as input. Neither is critical alone. Together they might be.
When you find something, it's worth asking: does this unlock access to another source? Does it bypass a control that was protecting a sink? Vulnerabilities don't always exist in isolation, and reviewing them that way can miss the actual severity.
Useful habit: After identifying a vulnerability, spend a few minutes asking what it enables downstream before writing it up. The chained version of the issue is often what matters in practice.
Conclusion
Source → Flow → Sink is one mental model among several worth having as a security engineer. It doesn't replace threat modeling, architecture review, or automated tooling. What it does is give structure to the part of code review that's hardest to systematize — following untrusted data through a real codebase and asking whether anything meaningful ever stops it.
The bugs that are hardest to find aren't in individual functions. They're in the assumptions the code makes about data it never actually verified. Tracing the flow is one way to make those assumptions visible.
"Vulnerabilities tend to live not in what code does, but in what it assumes about data it was handed down the line."