A disciplined approach to SaaS security reconnaissance

I approach a new SaaS target by first understanding what actually matters inside the system. Not features—value.

On initial access, the goal is simple: identify what the application is protecting. In most SaaS environments, that usually comes down to three things—data ownership, role boundaries, and paid access. Everything else is just surface.

Instead of jumping into testing, I spend the first few minutes observing normal behavior. What kind of users exist? Is there a clear separation between individual users and team roles? Where does control actually sit—on the client or the server?

Once that baseline is clear, I isolate a single workflow. In this case, the project invitation flow. This is almost always a high-value area because it sits at the intersection of access control and trust.


The normal flow is straightforward: a user creates a project, invites another user, and that user gains access based on a predefined role. That's the expected behavior. The real question is what assumptions the backend is making during this process.

At this stage, I stop behaving like a normal user and start questioning the flow.

Is the system verifying who is allowed to send invitations, or is it just accepting the request? Is the role assignment enforced server-side, or is it taken from client input? Is the invitation token single-use, or can it be replayed?

These are not random questions. They directly map to common failure points in access control systems.
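As an illustration, here is a minimal sketch of a backend that answers all three questions safely. Every name here (`create_invitation`, `accept_invitation`, the in-memory dicts) is hypothetical, not taken from any real target; the point is where each check lives.

```python
import secrets

# In-memory stand-ins for a database; all names are hypothetical.
PROJECT_MEMBERS = {"proj-1": {"alice": "admin"}}  # project_id -> {user: role}
INVITATIONS = {}                                  # token -> invitation record

def create_invitation(project_id, inviter, invitee, role):
    # Question 1: is the inviter actually allowed to invite? Checked server-side.
    if PROJECT_MEMBERS.get(project_id, {}).get(inviter) != "admin":
        raise PermissionError("inviter is not an admin of this project")
    # Question 2: the role is fixed when the invitation is created and is
    # never read back from the client that later accepts it.
    token = secrets.token_urlsafe(16)
    INVITATIONS[token] = {
        "project": project_id, "invitee": invitee, "role": role, "used": False,
    }
    return token

def accept_invitation(token, user):
    inv = INVITATIONS.get(token)
    # Question 3: tokens are single-use and bound to the intended invitee,
    # so a replayed or forwarded token is rejected.
    if inv is None or inv["used"] or inv["invitee"] != user:
        raise PermissionError("invalid, used, or mismatched invitation")
    inv["used"] = True
    PROJECT_MEMBERS[inv["project"]][user] = inv["role"]
    return inv["role"]
```

Any one of these checks being absent corresponds to one of the failure points above.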

From here, I start forming controlled hypotheses.

If role assignment is coming from the client without strict validation, then there is a potential privilege escalation vector. But this is not assumed—I would only consider it valid if the server actually accepts and applies the modified role.
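The contrast can be modeled in a few lines. Both handlers below are hypothetical; the vulnerable one is the hypothesis, and the finding exists only if the target behaves like it when sent a tampered accept request.

```python
def accept_invite_trusting_client(invitation, request_body):
    # Vulnerable pattern: the role is read from the accept request, so a
    # body tampered to carry {"role": "admin"} is applied as-is.
    return request_body.get("role", invitation["role"])

def accept_invite_server_side(invitation, request_body):
    # Safe pattern: the role comes only from the server-side invitation
    # record; whatever the client sends is ignored.
    return invitation["role"]

invitation = {"role": "member"}        # what the server stored at invite time
tampered_body = {"role": "admin"}      # what a modified accept request carries
```

If the applied role comes back as "member" despite the tampered body, the hypothesis is dead and there is nothing to report.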

If invitation links can be reused or are not properly invalidated, then access control weakens. Again, this requires confirmation. No assumption is treated as a finding.

Next, I look at flow integrity.

The system expects a sequence: invite → accept → access. So I ask a simple question: what happens if that sequence is broken?

Can a user access a project without being invited? Can the join step be triggered directly? Is project access tied strictly to membership validation, or just to a project identifier?

If access is granted outside the intended flow, that becomes a clear authorization issue. If not, it's working as expected and I move on.
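Server-side, the difference between those two outcomes usually looks like this sketch (endpoint logic and names are hypothetical):

```python
PROJECTS = {"proj-42": {"name": "Q3 roadmap", "members": {"alice", "bob"}}}

def get_project_by_id_only(project_id, user):
    # Vulnerable pattern: knowing the identifier is enough; the invite and
    # join steps are silently optional.
    return PROJECTS[project_id]["name"]

def get_project_with_membership_check(project_id, user):
    project = PROJECTS[project_id]
    # Safe pattern: access is tied to verified membership, not to the ID.
    if user not in project["members"]:
        raise PermissionError("user is not a member of this project")
    return project["name"]
```

The first variant is a classic insecure direct object reference: any authenticated user who can guess or enumerate `proj-42` reads the project.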

I also analyze state transitions. Most systems define implicit states—visitor, invited, member, admin. These states are rarely enforced as strictly as developers believe.

So I test the boundaries mentally first. Is it possible to move between these states without passing through required checks? Can a user influence their own role? Can they affect someone else's permissions?

Only after identifying a realistic break in state control does it become worth testing.
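The mental map can be made explicit as a small transition table. This is a hypothetical model of the states above, not any real system's logic; the two failure modes it guards against are skipping a state and self-promotion.

```python
# Transitions the system intends to allow (hypothetical model).
ALLOWED_TRANSITIONS = {
    ("visitor", "invited"),   # receiving an invitation
    ("invited", "member"),    # accepting it
    ("member", "admin"),      # an explicit promotion
}

def transition(current, target, actor_is_admin=False):
    # Skipping a required state (e.g. visitor -> member) is rejected outright.
    if (current, target) not in ALLOWED_TRANSITIONS:
        raise ValueError(f"illegal transition: {current} -> {target}")
    # Promotions must be initiated by an admin, never by the user themselves.
    if target == "admin" and not actor_is_admin:
        raise PermissionError("promotion requires an admin actor")
    return target
```

Each question from the paragraph above maps to one guard: skipping checks hits the transition table, influencing your own role hits the actor check.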

Finally, I shift focus to payment logic, but carefully. This is an area where false positives are extremely common.

The key question is not "can I fake success?" but "what does the server actually trust?"

If premium access is granted purely based on a client-controlled flag or loosely verified status, that's a problem. But if there is proper server-side validation with the payment provider, then most bypass attempts are irrelevant.
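The two trust models reduce to a sketch like this (all names hypothetical; the provider is modeled as a dict standing in for a server-to-server status lookup):

```python
def has_premium_trusting_client(request_body):
    # Vulnerable pattern: a client-controlled flag decides entitlement,
    # so flipping it in an intercepted request grants premium.
    return bool(request_body.get("is_premium"))

def has_premium_verified(request_body, provider_records):
    # Safe pattern: the server consults the payment provider's own record
    # of the session before granting access; client flags are irrelevant.
    session_id = request_body.get("session_id")
    return provider_records.get(session_id) == "paid"
```

Against the second pattern, tampering with response bodies or client flags produces nothing but noise, which is exactly why the "what does the server trust?" question comes first.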

At no point do I assume a vulnerability exists without confirmation. Every potential issue must survive one test: does it actually change system behavior in a way that breaks security?

If the answer is no, it's discarded immediately.

The difference between noise and a valid finding is discipline. Most issues are not found by trying more payloads—they're found by understanding where the system is making incorrect assumptions and proving it.

That's the entire game.
