
I've seen teams take large codebases, arbitrarily chunk them into context-sized inputs, and feed them to AI expecting it to uncover security issues. This approach is fundamentally broken and will not produce good results in enterprise architectures.
It's the equivalent of asking a security engineer to evaluate authorization controls by handing them a single file from a large codebase: no call path, no service context, no understanding of where enforcement actually happens. No competent engineer would attempt that analysis under those conditions.
Generative AI is not a brute-force compute engine. It does not reconstruct structure from fragmented inputs. When you remove context, you remove the very signals required to reason about security. This results in wasted compute, wasted engineering effort, and no meaningful improvement in security posture.
Tenet 1: Hire Security Experts or People Willing to Become One
AI does not compensate for lack of understanding. It amplifies it.
Application security is a specialized domain with deep concepts — authorization models, trust boundaries, and implicit system assumptions. If these are not understood, the problem will be framed incorrectly, and the AI system will scale that incorrect framing across every service it touches.
Assigning a security problem to a team without security expertise does not produce innovation. It produces incorrect abstractions encoded into automation.
This is ultimately a cultural problem. You have to build a culture where the team is expected to immerse themselves in the problem domain and develop a working understanding of how the system actually behaves. There is no shortcut here. You do not need an "elite" team that believes it is above the problem or assumes AI will figure things out automatically. That mindset is exactly how broken systems get built.
What you need are builders who are willing to engage deeply — who understand how security controls are implemented, how systems interact, and how those realities translate into AI-driven workflows. AI is only as good as the thinking behind it. If that thinking is shallow, the system will fail — quickly and at scale.
Tenet 2: Treat AI Like a Human, Not an Infinite Compute Engine
Generative AI is not an infinite compute and memory machine. Treating it like one is a fundamental mistake.
Humans cannot make sense of a large dataset by scanning it line by line. They use tools such as filters, queries, and structured exploration to extract signal from noise. AI works the same way.
If you feed it disconnected data and expect meaningful conclusions, you are removing the very structure required for reasoning. Give it the right context and the right tools, and it will do useful work. Treat it like a black box with infinite capacity, and it will not.
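To make this concrete, here is a minimal sketch of tool-driven exploration, assuming a toy in-memory repository. The tool names (search_code, read_file) and the dispatch loop are hypothetical, not a real agent framework or indexing API; the point is that the model pulls targeted slices of the codebase on demand instead of receiving an arbitrary dump.

```python
from typing import Callable

# Toy in-memory "repository" standing in for a real codebase index.
REPO = {
    "api/handlers.py": "def update_resource(req): return svc.update(req)",
    "svc/service.py": "def update(req): authz.check(req.user, req.resource)",
}

def search_code(pattern: str) -> list[str]:
    """Return paths of files whose contents mention the pattern."""
    return [path for path, src in REPO.items() if pattern in src]

def read_file(path: str) -> str:
    """Return the contents of a single file."""
    return REPO[path]

# Registry an agent loop would expose to the model as callable tools.
TOOLS: dict[str, Callable] = {"search_code": search_code, "read_file": read_file}

def run_tool(name: str, arg: str) -> object:
    """Dispatch one tool call requested by the model."""
    return TOOLS[name](arg)

# A model asked "where is authorization enforced?" would issue calls like:
print(run_tool("search_code", "authz"))          # -> ['svc/service.py']
print(run_tool("read_file", "svc/service.py"))   # targeted context, not a dump
```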
Tenet 3: Do Not Solve the Same Problem Twice
Dynamic programming exists for a reason: once you solve a problem, you don't solve it again. Re-solving the same problem repeatedly is stupid and wasteful. Large systems are built on shared abstractions such as authorization frameworks, middleware, and common libraries. Once the behavior of one of these components is understood, that understanding should persist.
Humans do not re-compute the same conclusions every time they encounter a familiar abstraction; they learn it once and reuse that knowledge. AI systems must do the same. If an authorization framework is used across dozens of services, there is no reason to re-analyze how it works for each one. The system should understand it once, store that understanding, and apply it consistently across all dependent services.
Verification should focus on how a service uses the abstraction, not on re-proving the abstraction itself. If your system keeps re-learning the same thing, you are building inefficiency at scale.
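As a minimal sketch of what that reuse looks like, the snippet below uses Python's functools.cache as a stand-in for a persistent analysis store. The analyze_framework function, the framework name, and the service names are all hypothetical placeholders for an AI-driven analysis pipeline.

```python
import functools

@functools.cache  # memoize: the expensive analysis runs once per framework
def analyze_framework(name: str) -> dict:
    """Stand-in for an expensive AI-driven analysis of a shared library."""
    print(f"analyzing {name} ...")  # printed only on the first call
    return {"framework": name, "enforces": "role-based checks in middleware"}

def verify_service(service: str, framework: str) -> str:
    """Per-service verification reuses the cached model of the framework."""
    model = analyze_framework(framework)
    return f"{service}: check call sites against {model['enforces']}"

for svc in ("billing", "orders", "inventory"):
    print(verify_service(svc, "acme-authz"))  # analysis happens once, not 3x
```

A production system would key the cache on framework version and configuration and persist it across runs, but the shape of the solution is the same.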
Tenet 4: Design Workflows, Not Prompts
AI systems are not built through clever prompts. They are built through well-defined workflows. If your approach to application security relies on repeatedly asking the model the same generic question ("find a security vulnerability here"), you are designing for stupid outcomes. That is not a reliable way to reason about security.
A security engineer does not analyze a system randomly. They follow a structured process: identify entry points, trace execution paths, evaluate controls along those paths, and apply known patterns. AI must be designed to follow the same structure.
This means explicitly defining how context is gathered, how dependencies are resolved, and how decisions are made at each step. When AI is embedded in a workflow, it becomes a reasoning system. When it is treated as a prompt-response tool, it remains a stateless responder.
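One way such a workflow might be expressed, as a sketch: the Workflow class, the step names, and the hard-coded results below are illustrative stand-ins. In a real system each step would issue scoped model calls and tool queries rather than returning canned data.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    entry_point: str
    path: list
    verdict: str

@dataclass
class Workflow:
    steps: list = field(default_factory=list)

    def step(self, fn):
        """Register a stage; order of registration is order of execution."""
        self.steps.append(fn)
        return fn

    def run(self, state: dict) -> dict:
        for fn in self.steps:
            state = fn(state)  # each stage gets curated input, returns state
        return state

wf = Workflow()

@wf.step
def identify_entry_points(state):
    state["entry_points"] = ["POST /resource/{id}"]  # e.g. from route tables
    return state

@wf.step
def trace_execution_paths(state):
    state["paths"] = {"POST /resource/{id}": ["handler", "middleware", "authz"]}
    return state

@wf.step
def evaluate_controls(state):
    state["findings"] = [
        Finding(ep, path, verdict="authz enforced in middleware")
        for ep, path in state["paths"].items()
    ]
    return state

print(wf.run({})["findings"])
```

The value is not in the code itself but in the contract it encodes: every step has defined inputs, defined outputs, and a defined place in the sequence.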
Tenet 5: Context Is Everything — Curate It Deliberately
AI does not fail because it lacks capability. It fails because it is given incomplete or wrong context.
If you provide incomplete, fragmented, or irrelevant data, the system will produce incomplete, fragmented, or irrelevant conclusions. Security reasoning depends on having the right view of the system — execution paths, service boundaries, and where controls are actually enforced. Without that, the problem is ill-defined.
Context is not something you dump into the system. It is something you construct. You have to be deliberate about what the model sees, when it sees it, and how it expands that view when needed. When context is curated correctly, AI can reason effectively. When it is not, even the best models will produce low-quality results.
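As a sketch of deliberate construction, assume a call graph has already been extracted from the code. The graph, the snippet store, and the budget below are toy stand-ins; the idea is to expand the model's view outward from the entry point, under an explicit budget, instead of dumping whole files.

```python
# Toy call graph and source snippets standing in for real code analysis.
CALL_GRAPH = {
    "api.update_resource": ["mw.authenticate", "svc.update"],
    "mw.authenticate": [],
    "svc.update": ["authz.check"],
    "authz.check": [],
}
SNIPPETS = {name: f"<source of {name}>" for name in CALL_GRAPH}

def build_context(entry: str, budget: int = 3) -> list[str]:
    """Breadth-first expansion from the entry point, capped by a budget,
    so the model sees the relevant slice of the system and nothing else."""
    context, queue, seen = [], [entry], set()
    while queue and len(context) < budget:
        fn = queue.pop(0)
        if fn in seen:
            continue
        seen.add(fn)
        context.append(SNIPPETS[fn])
        queue.extend(CALL_GRAPH[fn])
    return context

print(build_context("api.update_resource"))
```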
Example: Verifying Authorization in a Distributed System
Consider a service that exposes an API to modify a resource. The authorization check is not implemented in a single place. The request flows through an API layer, middleware, shared authorization libraries, and sometimes downstream services that enforce additional constraints.
If you chunk this system into isolated pieces and feed those fragments into AI, it will not be able to determine whether authorization is correctly enforced. The logic is distributed, and no single fragment contains the full picture.
A human engineer would start at the entry point, trace the execution path, and follow it across services until they can confirm where and how authorization is applied. Then they run a real-world test. AI must be designed to do the same: navigating the system, retrieving context, and reusing its understanding of shared frameworks. Without that structure, the system is not performing security analysis. It is producing wildly inaccurate guesses.
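Here is a sketch of that trace in code. The service map below is hypothetical, and a real system would derive it from source and deployment metadata rather than a hard-coded table, but the traversal has the same shape a human follows.

```python
# Hypothetical request path through three services.
SERVICES = {
    "api-gateway":  {"calls": ["resource-svc"], "enforces_authz": False},
    "resource-svc": {"calls": ["storage-svc"], "enforces_authz": True},
    "storage-svc":  {"calls": [], "enforces_authz": False},
}

def trace_authorization(entry: str) -> list[str]:
    """Follow the request path and record where authorization is applied."""
    enforced_at, stack, seen = [], [entry], set()
    while stack:
        svc = stack.pop()
        if svc in seen:
            continue
        seen.add(svc)
        if SERVICES[svc]["enforces_authz"]:
            enforced_at.append(svc)
        stack.extend(SERVICES[svc]["calls"])
    return enforced_at

path = trace_authorization("api-gateway")
print(path or "no authorization check found")  # -> ['resource-svc']
```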
Conclusion
Generative AI is not a shortcut to solving application security. It is a force multiplier for workflows that are already well-designed.
If you preserve context, enable reasoning through tools, reuse knowledge, design structured workflows, and ground everything in strong domain expertise, AI can be extremely effective. If you do not, it will simply scale flawed thinking.