When I first started doing recon, I thought the process was pretty straightforward — run a few tools, collect data, and somewhere in there you'd find something useful.
What actually happens is different. You end up with thousands of results, most of them useless, and you spend more time figuring out what to ignore than what to investigate.
This is the workflow I ended up settling on. It's not perfect, but it reflects what actually happens during testing.
1. Subdomain Enumeration
Everything starts here. If you miss subdomains, you miss parts of the attack surface entirely.
I usually run multiple tools because no single one is complete:
sublist3r -d target.com -o sublist3r.txt
subfinder -d target.com -silent -o subfinder.txt
assetfinder --subs-only target.com > assetfinder.txt
amass enum -passive -d target.com -o amass.txt
Each tool returns slightly different results. Some overlap, some don't. That's expected.
2. Merging and Cleaning
Now I combine everything into one list:
cat sublist3r.txt subfinder.txt assetfinder.txt amass.txt | sort -u > all_subs.txt
At this point, the goal is just to have one clean dataset. No duplicates, no confusion later.
3. Finding Alive Targets
A large portion of subdomains won't even respond. There's no point analyzing something that's dead.
cat all_subs.txt | httpx -silent -o alive.txt
Sometimes I include extra details:
cat all_subs.txt | httpx -status-code -title -o alive_detailed.txt
Now I'm working with targets that actually respond.
4. Collecting URLs
Here the amount of data starts to increase quickly.
gau target.com > gau.txt
waybackurls target.com > wayback.txt
Then:
cat gau.txt wayback.txt | sort -u > all_urls.txt
One thing you'll notice here is the difference in output size. gau tends to return a much larger dataset than waybackurls, simply because it pulls from more sources.
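A quick way to see that difference, and the overlap between the two sources, is comm on the sorted lists:

```shell
# Sort each list, then count URLs unique to each source and shared by both
sort -u gau.txt > gau.sorted
sort -u wayback.txt > wayback.sorted
comm -23 gau.sorted wayback.sorted | wc -l   # only in gau
comm -13 gau.sorted wayback.sorted | wc -l   # only in waybackurls
comm -12 gau.sorted wayback.sorted | wc -l   # found by both
```

comm needs sorted input, which is why the lists are sorted first even though all_urls.txt already deduplicates them.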
5. Extracting JavaScript Files
From all collected URLs, I filter JavaScript files:
grep -E '\.js($|\?)' all_urls.txt > js.txt
This step matters because a lot of modern applications expose useful information through frontend code. (A plain "\.js" match would also pull in .json and .jsp URLs; anchoring on the end of the extension avoids that.)
6. Extracting Endpoints
Instead of manually reading JavaScript files, I extract endpoints automatically:
cat js.txt | xargs -I{} curl -s {} | grep -Eo '(https?://[^"]+)' >> endpoints.txt
Tools like linkfinder can also help here.
The goal here is to get a clearer view of:
- API endpoints
- internal routes
- external services
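To get that rough breakdown, I sometimes split the endpoint list with a couple of greps. This is only a sketch: the /api/ path convention and the target.com domain are assumptions about what the target uses.

```shell
# Endpoints under the target's own API routes (assumes an /api/ path convention)
grep -E 'target\.com[^"]*/api/' endpoints.txt > api_endpoints.txt
# Everything pointing at other domains: candidate external services
grep -vE '://[^/]*target\.com' endpoints.txt > external.txt
```

The external list is usually the more interesting one, since it surfaces third-party services the app talks to.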
7. Running Secret Detection
Now, I run tools like:
cat js.txt | jsecret
This produces a lot of output, and at first glance it looks promising.
Examples:
Password="password"
Secret="secret"
ApiKey="api-key"
This is where it's easy to go in the wrong direction. The output looks convincing, especially when you're new, but most of it is just normal code patterns — variable names, placeholders, or generic labels developers use. There's nothing sensitive about them, even though the tool flags them as if there is.
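One way to cut that noise down before reviewing anything by hand is a blocklist of generic quoted values. A sketch, assuming the scanner output was saved to secrets_raw.txt (that filename, and the word list itself, are mine, not the tool's):

```shell
# Drop findings whose quoted value is an obvious placeholder or generic label
# (word list is a starting point, not a complete one)
grep -viE '"(password|secret|api-key|test|changeme|example)"' secrets_raw.txt > secrets_filtered.txt
```

Whatever survives this filter still needs a manual look; the point is only to shrink the pile.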
8. What I Pay Attention To
After filtering out obvious noise, I focus on a few specific patterns.
Hardcoded credentials in URLs
For example:
https://username:password@host
If something like this appears, I try to determine whether it's actually used anywhere or just leftover code.
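A pattern like this is easy to grep for across the URL and endpoint lists collected earlier. A rough regex; it will still hit placeholders like user:pass@, so hits need a manual look:

```shell
# user:password@ embedded in a URL; plain emails won't match (no colon before @)
grep -hE '://[^/@:"[:space:]]+:[^/@"[:space:]]+@' all_urls.txt endpoints.txt
```

The -h flag just suppresses filename prefixes so the output stays a clean URL list.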
API keys
If I find something that looks like a real key, I try to understand:
- what service it belongs to
- whether it can be used externally
- what kind of access it provides
Some keys are intentionally public (for example, monitoring services), so context matters.
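Prefixes help with the first question, since many providers use recognizable ones. A sketch over the downloaded JS bodies; the js_bodies.txt filename is my assumption, and the pattern list is far from complete:

```shell
# A few well-known provider prefixes (not exhaustive):
#   AKIA / ASIA -> AWS access key IDs
#   AIza        -> Google API keys
#   sk_live_    -> Stripe secret keys
#   ghp_        -> GitHub personal access tokens
grep -hoE '(AKIA|ASIA)[A-Z0-9]{16}|AIza[0-9A-Za-z_-]{35}|sk_live_[0-9A-Za-z]+|ghp_[0-9A-Za-z]+' js_bodies.txt
```

Dedicated tools like trufflehog ship much larger pattern sets, but knowing a handful of prefixes by sight speeds up triage.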
Encoded values
Sometimes values look suspicious but turn out to be harmless after decoding.
A common example is a JWT header without a payload or signature. On its own, it doesn't mean much.
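Decoding takes a few seconds and settles it. The value below is just a minimal HS256 header, base64url-encoded, with no payload or signature after it:

```shell
# Decode the first segment of a JWT-looking value
# (this one happens to need no base64url translation or padding)
echo 'eyJhbGciOiJIUzI1NiJ9' | base64 -d
```

If the decoded result is a plain {"alg":...} object with nothing following it, it is a header fragment, not a usable token.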
Closing Notes
Most of this process is not as exciting as it sounds. A lot of the time, you go through everything above and end up with nothing worth reporting.
Early on, that feels frustrating. It can feel like you're doing something wrong, especially when tools keep giving you results that look important but lead nowhere.
After a while, that feeling changes. You start recognizing what's normal, what's worth a closer look, and what you can safely ignore. It's less about running more tools and more about understanding what you're looking at.
There isn't really a shortcut here. You just get better at filtering, and that's what eventually makes the difference.