Draw the data flow diagram. Identify trust boundaries. Apply STRIDE. Enumerate threats. Mitigate accordingly.
For deterministic systems, this works remarkably well.
But when AI enters the architecture, something subtle happens:
The threat model doesn't fail. It just stops describing reality.
Threat Modeling Assumes Predictability
Traditional threat modeling relies on a few core assumptions:
- Inputs map to defined actions
- System behavior is deterministic
- Trust boundaries are stable
- Data flows are repeatable
- User roles are fixed
These assumptions allow us to reason about:
- What can go wrong
- Where it can go wrong
- Who can trigger it
AI quietly breaks most of them.
1. Inputs No Longer Map to Single Actions
In a traditional system:
Request → Endpoint → Business Logic → Database
In an AI-driven system:
Prompt → Intent Interpretation → Tool Selection → Multi-step Execution → Aggregated Response
The same prompt may:
- Trigger different backend tools
- Query different data sources
- Compose different outputs
- Change behavior based on context
From a data flow diagram perspective, this looks clean.
In practice, it's probabilistic.
You are no longer modeling a path. You are modeling a range of possible behaviors.
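To make that concrete, here is a minimal sketch of the second flow. Everything in it is illustrative: interpret_intent stands in for model inference, and the tool names are invented. The point is that the prompt-to-tools mapping is a choice the model makes at runtime, not a route you drew in advance.

```python
import random

# Hypothetical backend tools the assistant can reach. In a real system
# these would be API clients, database queries, report generators, etc.
TOOLS = {
    "search_orders":   lambda q: f"orders matching '{q}'",
    "lookup_customer": lambda q: f"customer record for '{q}'",
    "export_report":   lambda q: f"report generated for '{q}'",
}

def interpret_intent(prompt: str) -> list[str]:
    """Stand-in for model inference: turns a prompt into a tool plan.

    A real model's choice depends on context, sampling, and prior turns,
    so the plan is a distribution over paths, not a constant.
    """
    candidate_plans = [
        ["search_orders"],
        ["lookup_customer", "search_orders"],
        ["lookup_customer", "export_report"],
    ]
    return random.choice(candidate_plans)  # the non-determinism, made explicit

def handle(prompt: str) -> list[str]:
    plan = interpret_intent(prompt)
    return [TOOLS[name](prompt) for name in plan]

# The same prompt can traverse a different data flow on every run.
for _ in range(3):
    print(handle("show me everything about account 4411"))
```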
2. Trust Boundaries Become Blurred
Threat models depend heavily on trust boundaries:
- User → Application
- Application → Database
- Application → External Service
AI introduces a new layer: the decision engine.
The model decides:
- What tools to call
- What data to retrieve
- How to combine it
- What to expose
The boundary is no longer just between systems.
It's between interpretation and execution.
And that boundary is rarely drawn in diagrams.
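One way to draw that missing boundary is to treat the model's proposed tool calls as untrusted input and gate them before execution. The sketch below assumes a simple role-to-tool allowlist; the names (ToolCall, ALLOWED_ACTIONS) are illustrative, not taken from any particular framework.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str
    resource: str
    caller_role: str

# Deny-by-default: what each human role may reach *through the model*.
ALLOWED_ACTIONS = {
    ("support_agent", "lookup_customer"),
    ("support_agent", "search_orders"),
    ("admin", "export_report"),
}

def authorize(call: ToolCall) -> bool:
    """The boundary: the model proposes, this layer disposes."""
    return (call.caller_role, call.tool) in ALLOWED_ACTIONS

def execute(call: ToolCall) -> str:
    # Treat the model's output as untrusted input crossing a boundary,
    # exactly like a user request hitting any other endpoint.
    if not authorize(call):
        raise PermissionError(f"{call.caller_role} may not invoke {call.tool}")
    return f"executed {call.tool} on {call.resource}"

proposed = ToolCall("export_report", "tenant-42", "support_agent")
try:
    execute(proposed)
except PermissionError as err:
    print("blocked at the interpretation/execution boundary:", err)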
3. Aggregation Is the New Risk
Most threat models evaluate:
- Can this endpoint access this resource?
- Is this role allowed to call this API?
AI systems aggregate.
Individually authorized calls can combine into:
- Policy violations
- Sensitive data exposure
- Context leakage
- Cross-tenant inference
No single data flow looks dangerous.
The composition does.
Traditional threat modeling struggles to represent aggregated risk.
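One possible mitigation is to evaluate the combination, not just each call. The sketch below assumes every retrieved item carries sensitivity labels and checks the accumulated set against forbidden combinations; the labels and rules are invented for illustration.

```python
# Combinations that must never appear in a single response, even when
# every individual retrieval was authorized on its own.
FORBIDDEN_COMBINATIONS = [
    {"customer_pii", "payment_history"},   # re-identification risk
    {"tenant_a_data", "tenant_b_data"},    # cross-tenant inference
]

def check_aggregation(labels_in_context: set[str]) -> None:
    for forbidden in FORBIDDEN_COMBINATIONS:
        if forbidden <= labels_in_context:
            raise PermissionError(f"forbidden combination in one response: {forbidden}")

# Three retrievals, each permitted on its own...
retrieved = [
    {"source": "crm",     "labels": {"customer_pii"}},
    {"source": "billing", "labels": {"payment_history"}},
    {"source": "support", "labels": {"ticket_text"}},
]

accumulated: set[str] = set()
for item in retrieved:
    accumulated |= item["labels"]

try:
    check_aggregation(accumulated)   # ...but the composition is the threat
except PermissionError as err:
    print("aggregation blocked:", err)
```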
4. Behavior Is Not Repeatable
One of the quiet strengths of traditional modeling:
If something is exploitable, it's consistently exploitable.
AI systems are non-deterministic.
The same input:
- May produce different reasoning paths
- May call different tools
- May retrieve different data
This makes reasoning about abuse far harder.
You are not modeling a fixed exploit path. You are modeling a capability envelope.
That is a fundamentally different exercise.
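In practice, that means testing an envelope instead of a path: run the agent many times and check that no execution reaches outside an allowed capability set. The sketch below uses a random stub in place of a real, instrumented agent trace, so the names and behavior are purely illustrative.

```python
import random

# The set of tools this feature is ever allowed to use, regardless of
# how the model reasons its way there.
CAPABILITY_ENVELOPE = {"lookup_customer", "search_orders", "summarize"}

def run_agent(prompt: str) -> list[str]:
    """Placeholder for a real agent run returning its tool-call trace."""
    pool = ["lookup_customer", "search_orders", "summarize", "delete_record"]
    return random.sample(pool, k=random.randint(1, 3))

def check_envelope(prompt: str, runs: int = 50) -> list[set[str]]:
    """Run repeatedly and collect every escape from the envelope."""
    violations = []
    for _ in range(runs):
        escaped = set(run_agent(prompt)) - CAPABILITY_ENVELOPE
        if escaped:
            violations.append(escaped)
    return violations

# Not "does this input trigger the bug" but "does any run leave the envelope".
print(check_envelope("close my account"))
```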
5. STRIDE Still Works — But It's Incomplete
STRIDE categories still apply:
- Spoofing
- Tampering
- Repudiation
- Information Disclosure
- Denial of Service
- Elevation of Privilege
But the threats shift location.
Instead of asking:
- Can the user tamper with input?
We now ask:
- Can the model reinterpret intent in unsafe ways?
- Can aggregation lead to unintended disclosure?
- Can autonomy indirectly escalate capability?
The framework is not wrong.
The system no longer fits neatly inside it.
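One way to keep STRIDE useful is to re-ask its questions at the model's decision points. The checklist below is illustrative, not a standard; the questions are examples of where each category now tends to live.

```python
# An illustrative checklist, not a standard: the same six categories,
# asked at the model's decision points instead of only at endpoints.
AI_STRIDE_QUESTIONS = {
    "Spoofing":               "Can injected context make the model act as another principal?",
    "Tampering":              "Can retrieved content alter the model's tool plan?",
    "Repudiation":            "Is every model-initiated action attributable and logged?",
    "Information Disclosure": "Can aggregating allowed data produce a disallowed disclosure?",
    "Denial of Service":      "Can one prompt drive unbounded tool calls or token spend?",
    "Elevation of Privilege": "Can autonomy chain low-risk tools into a high-risk outcome?",
}

for category, question in AI_STRIDE_QUESTIONS.items():
    print(f"{category}: {question}")
```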
A Directional Way Forward
Threat modeling AI-driven features doesn't start with more diagrams. It starts by identifying decision points, defining forbidden outcomes, and constraining what the model is allowed to combine and act on, not just what it can access.
This shift doesn't replace traditional threat modeling — it reframes it around capability, not control flow.
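As a starting point, that can be as unglamorous as writing the constraints down. The structure below is hypothetical; every field name and rule is an example of the kind of thing worth making explicit before any diagram is drawn.

```python
# Every field name and rule here is hypothetical; the value is in naming
# the decision points, forbidden outcomes, and combination limits at all.
AI_FEATURE_THREAT_MODEL = {
    "decision_points": [
        "intent interpretation",
        "tool selection",
        "data retrieval",
        "response composition",
    ],
    "forbidden_outcomes": [
        "response contains data from more than one tenant",
        "model-initiated write without human approval",
        "PII combined with payment history in a single answer",
    ],
    "combination_constraints": {
        # tool -> tools it may never be chained with in one request
        "export_report": ["lookup_customer"],
        # tools the model may never reach at all
        "delete_record": ["*"],
    },
}
```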
A Simple Shift in Thinking
Traditional threat modeling asks:
"How can someone misuse this system?"
AI-era threat modeling must ask:
"What is this system capable of producing — even unintentionally?"
That shift sounds small.
It isn't.
Closing Thought
AI doesn't just expand the attack surface.
It dissolves the clean boundaries we used to draw around it.
And once those boundaries blur, the diagrams still look correct — but the risk no longer lives where we marked it.
Part 3 of a series exploring how AI quietly exposes hidden security assumptions in modern applications.