It is about methodology. Discipline. A structured workflow that takes you from a blank scope document to a well-documented, high-impact vulnerability report — consistently, repeatably, professionally.
This is that workflow.
Whether you are just starting your first program or you have been hunting for years and want a structured reference, this methodology covers every phase: reconnaissance, discovery, enumeration, vulnerability testing, proof of concept creation, and reporting.
This methodology is maintained as a living document. For the latest updates, tool additions, and community contributions, the full version lives on GitHub.
Let's build the workflow from the ground up.
The Full Workflow at a Glance
Before we go deep on each phase, here is the complete pipeline:
SCOPE REVIEW
↓
RECONNAISSANCE
Passive + Active Subdomain Enumeration
↓
DISCOVERY
HTTP Probing → JS Analysis → URL Discovery
↓
ENUMERATION
Parameters → Cloud Assets → APIs → Content
↓
TESTING
High-Priority Vulnerability Assessment
↓
TWO-EYE APPROACH
First pass (common vulns) + Second pass (unique findings)
↓
POC CREATION
Video + Screenshot documentation
↓
REPORTING
Executive Summary → Technical Details → Remediation
Every phase feeds the next. Skipping phases means missing bugs. Let's walk through each one.
Phase 1: Reconnaissance and Subdomain Enumeration

Reconnaissance is where your attack surface gets defined. The broader and more accurate your recon, the more bugs you will find. Most high-impact vulnerabilities go unfound not because they were hard to exploit, but because the vulnerable asset never showed up in recon.
1.1 Passive Subdomain Enumeration
Passive recon collects subdomain data without directly touching the target — pulling from certificate transparency logs, public databases, search engines, and code repositories.
Subfinder
subfinder -d target.com -silent -all -recursive \
-o subfinder_subs.txt
Amass (Passive Mode)
amass enum -passive -d target.com \
-o amass_passive_subs.txt
CRT.sh Certificate Transparency
curl -s "https://crt.sh/?q=%25.target.com&output=json" \
| jq -r '.[].name_value' \
| sed 's/\*\.//' \
| anew crtsh_subs.txt
GitHub Subdomain Dorking
github-subdomains -d target.com \
-t YOUR_GITHUB_TOKEN \
-o github_subs.txt
Combine all results:
cat *_subs.txt | sort -u | anew all_subs.txt
1.2 Active Subdomain Enumeration
Active recon directly queries DNS and brute-forces subdomains using wordlists — finding assets that don't appear in public records.
Shuffledns (resolution + brute force)
shuffledns -d target.com -list all_subs.txt \
-r resolvers.txt -o active_subs.txt
DNSX (resolve and filter live)
dnsx -l active_subs.txt -resp -o resolved_subs.txt
FFuF Subdomain Bruteforce
ffuf -u https://FUZZ.target.com -w wordlist.txt \
-t 50 -mc 200,403 -o ffuf_subs.txt
Reverse DNS (find more assets from IPs)
dnsx -ptr -l resolved_subs.txt \
-resp-only -o reverse_dns.txt
ASN Enumeration (map entire IP ranges)
amass intel -asn <ASN_NUMBER> -o asn_results.txt
1.3 Handling Specific Single-Target Scopes
When the scope is a single domain rather than wildcards, shift to crawling and archival methods:
# Archive URLs
gau target.example.com | anew gau_results.txt
waybackurls target.example.com | anew wayback_results.txt
# Active crawling
katana -u target.example.com -silent -jc \
-o katana_results.txt
echo "https://target.example.com" | hakrawler \
-depth 2 -plain -js -out hakrawler_results.txt
Validate everything is live:
cat all_subs.txt | httpx -silent -title \
-o live_subdomains.txt
Phase 2: Discovery and HTTP Probing
Once you have a list of subdomains, discovery turns that list into a map of live, interesting assets.
2.1 HTTP Probing
# Full probe with titles, status codes, IPs
httpx -l resolved_subs.txt -p 80,443,8080,8443 \
-silent -title -sc -ip -o live_websites.txt
# Filter immediately for high-value targets
cat live_websites.txt | grep -i "login\|admin" \
| tee login_endpoints.txt
2.2 JavaScript Analysis
JavaScript files are one of the highest-value targets in modern bug bounty. They leak API keys, internal endpoints, authentication tokens, and business logic.
# Extract all JS file URLs
cat live_websites.txt | waybackurls \
| grep "\.js" | anew js_files.txt
# Find endpoints inside JS files
python3 linkfinder.py -i js_files.txt \
-o js_endpoints.txt
# Search for sensitive patterns
cat js_files.txt | gf aws-keys | tee aws_keys.txt
cat js_files.txt | gf urls | tee sensitive_urls.txt
# Validate any extracted keys
curl -X GET "https://api.example.com/resource" \
-H "Authorization: Bearer <extracted_key>"
2.3 Google Dorking
# Admin and login pages
site:*.example.com inurl:"admin|login" inurl:.php|.asp
# Exposed config files
site:*.example.com ext:env | ext:yaml | ext:ini
# Public keys and certificates
site:*.example.com inurl:"id_rsa.pub" | inurl:".pem"
# Automated GitHub dorking
python3 GitDorker.py -tf github_token.txt \
-q target.com -d dorks.txt -o git_dorks_output.txt
2.4 URL Discovery and Crawling
# Deep crawl of live targets
katana -list live_websites.txt -jc -o katana_urls.txt
gospider -s "https://target.com" -d 2 \
-o gospider_output/
echo "https://target.com" | hakrawler \
-depth 3 -plain -out hakrawler_results.txt
2.5 Archive Enumeration and Parameter Extraction
# Pull archived URLs
gau --subs target.com | anew archived_urls.txt
waybackurls target.com | anew wayback_urls.txt
# Extract parameters (every = sign is a potential injection point)
cat archived_urls.txt | grep "=" \
| anew parameters.txt
Phase 3: Advanced Enumeration

3.1 Parameter Discovery
Parameters are injection points. Finding more parameters means finding more bugs.
# Arjun — intelligent parameter discovery
arjun -u "https://target.example.com" \
-m GET,POST --stable -o params.json
# ParamSpider — extract from web surface
python3 paramspider.py --domain target.com \
--exclude woff,css,js \
--output paramspider_output.txt
# FFuF parameter bruteforce
ffuf -u "https://target.com/page.php?FUZZ=test" \
-w /usr/share/wordlists/params.txt \
-o parameter_results.txt
3.2 Cloud Asset Enumeration
Misconfigured cloud storage is one of the most consistently rewarded finding classes in modern bug bounty.
# Enumerate cloud buckets across providers
cloud_enum -k target.com -b buckets.txt \
-o cloud_enum_results.txt
# Test S3 bucket access (unauthenticated)
aws s3 ls s3://<bucket_name> --no-sign-request
# Dump accessible bucket contents
python3 AWSBucketDump.py -b target-bucket \
-o dumped_data/
3.3 Content Discovery
# Feroxbuster — recursive directory brute force
feroxbuster -u https://target.com \
-w /usr/share/wordlists/common.txt \
-r -t 20 -o recursive_results.txt
# Dirsearch — multi-extension content discovery
dirsearch -u https://target.com \
-w /usr/share/wordlists/content_discovery.txt \
-e php,html,js,json -x 404 \
-o dirsearch_results.txt
# FFuF recursive
ffuf -u https://target.com/FUZZ \
-w /usr/share/wordlists/content_discovery.txt \
-mc 200,403 -recursion -recursion-depth 3 \
-o ffuf_results.txt
3.4 API Enumeration
Modern applications expose their most sensitive functionality through APIs. Don't skip this step.
# Kiterunner — API route discovery
kr scan https://api.target.com \
-w /usr/share/kiterunner/routes-large.kite \
-o api_routes.txt
3.5 ASN and Infrastructure Mapping
# Shodan search across IP range
shodan search "net:<ip_range>" \
--fields ip_str,port --limit 100
# Censys asset enumeration
censys search "autonomous_system.asn:<ASN_Number>" \
-o censys_assets.txt
Phase 4: Vulnerability Testing
This is where the methodology shifts from mapping to attacking. Work through high-priority vulnerability classes systematically.
# CSRF endpoint identification
cat live_websites.txt | gf csrf | tee csrf_endpoints.txt
# LFI testing (automated)
cat live_websites.txt | gf lfi \
| qsreplace "/etc/passwd" \
| xargs -I@ curl -s @ \
| grep "root:x:" > lfi_results.txt
# SQLi with Ghauri
ghauri -u "https://target.com?id=1" --dbs --batch
# Open Redirect testing
cat urls.txt | grep "=http" \
| qsreplace "https://evil.com" \
| xargs -I@ curl -I -s @ \
| grep "evil.com"
# Sensitive data in JS
cat js_files.txt \
| grep -Ei "key|token|auth|password" \
> sensitive_data.txt
# RCE via file upload
curl -X POST -F "file=@exploit.php" \
https://target.com/upload
Priority order for testing: Remote Code Execution → SQLi → SSRF → LFI → Auth Bypass → IDOR → Stored XSS → CSRF → Open Redirect → Information Disclosure
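The gf and qsreplace pipelines above assume those tools are installed. When they are not, the core value-swapping step can be approximated with plain sed. A minimal sketch (the URL below is illustrative):

```shell
# Swap every query-string value for an LFI payload, the same substitution
# qsreplace performs; works on any URL list piped through it
echo "https://target.com/view.php?file=report.pdf&page=2" \
  | sed -E 's/=[^&]*/=\/etc\/passwd/g'
# → https://target.com/view.php?file=/etc/passwd&page=/etc/passwd
```

The resulting URLs can then be fed to curl exactly as in the LFI pipeline above.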
Phase 5: The Two-Eye Approach
This is the methodology's most underrated concept — and the one that separates hunters who find medium-severity bugs from hunters who find critical ones.
First Eye — Systematic Coverage
Go through every subdomain, every endpoint, every parameter you have collected and test it against the common vulnerability checklist. Methodical. Complete. No skipping.
Second Eye — Curiosity and Intuition
Step back from the checklist. Look at what's unusual. The subdomain that doesn't fit the naming pattern. The endpoint that returns a different response size. The parameter that behaves differently than its neighbors. The admin panel that shouldn't be indexed. The S3 bucket named after an internal project.
The Second Eye is not a tool. It is a mindset. It is what you develop over hundreds of hours of hunting — the ability to notice when something is interesting even if it doesn't match a known vulnerability pattern yet.
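One way to make the "different response size" instinct concrete is to sort probe output by content length and flag outliers. A minimal sketch, assuming httpx-style output with status and content-length columns (the sample hosts below are made up):

```shell
# Sample probe output; in practice this comes from something like:
#   httpx -l live_subdomains.txt -sc -cl -silent
cat > httpx_sample.txt <<'EOF'
https://www.target.com [200] [5120]
https://blog.target.com [200] [5118]
https://shop.target.com [200] [5123]
https://staging-old.target.com [200] [48210]
EOF

# Sort by content length, then print lines whose length deviates
# sharply from the median — candidates for a closer look
sort -t'[' -k3 -n httpx_sample.txt \
  | awk -F'[][]' '{url[NR]=$0; len[NR]=$4} END {
      median=len[int(NR/2)+1]
      for (i=1; i<=NR; i++)
        if (len[i] > 3*median || len[i]*3 < median) print url[i]
    }'
```

The 3x threshold is arbitrary; tune it to the target. The point is to let anomalies surface mechanically instead of hoping to eyeball them in thousands of lines.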
Actionable decision tree:
Found something interesting?
↓ YES → Create POC → Test full impact → Report
↓ NO → Pivot to deeper testing on unique
subdomains / unusual endpoints
Phase 6: Proof of Concept Creation
A finding without a POC is a claim. A finding with a clear POC is evidence.
Video POC — Use OBS Studio (or any screen recorder) to capture the full exploitation chain from start to finish. Narrate what you are doing at each step. Keep it focused — under 3 minutes for most findings.
Screenshot POC — Capture every step of the reproduction chain with Greenshot. Annotate each screenshot: draw arrows to the relevant field, add text labels explaining what the screenshot shows, circle the evidence of impact.
HTTP Request/Response Capture — Export raw requests and responses from Burp Suite. These are essential for technical reviewers who want to reproduce the finding exactly.
Minimal Reproduction — Strip the POC down to the minimum steps required. If the exploit works in 3 steps, do not show 10. Clarity and brevity signal professionalism.
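One way to package a minimal reproduction is a short script the triager can run directly. A sketch, with a hypothetical IDOR endpoint and placeholder values (substitute your actual finding):

```shell
# Write a 3-step POC script; endpoint, parameter, and session values
# below are placeholders, not real targets
cat > poc.sh <<'EOF'
#!/bin/sh
# Step 1: authenticate as the low-privilege attacker test account
# Step 2: request another user's record by swapping the id parameter
curl -s "https://target.example.com/api/user?id=VICTIM_ID" \
  -H "Cookie: session=ATTACKER_SESSION"
# Step 3: victim's data in the response is the evidence of IDOR
EOF
chmod +x poc.sh
```

Three commented steps, one request, no noise: the triager can see the whole finding at a glance and rerun it with their own test accounts.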
Phase 7: The Report

The report is the product. Everything before this point was research. The report is what gets read, triaged, paid, and added to your reputation.
Report Structure
# Vulnerability Report: [Title]
## Overview
- Severity: [Critical / High / Medium / Low]
- CVSS Score: [Score]
- Affected Component: [URL / Parameter / Function]
## Description
[Clear technical description of the vulnerability.
What it is, where it exists, why it is exploitable.]
## Steps to Reproduce
1. Navigate to [URL]
2. Enter the following payload in [field]: [payload]
3. Observe [specific behavior that proves the vulnerability]
## Impact
[What a real attacker could do with this.
Be specific: data exposed, accounts affected,
systems accessible. Tie to business risk.]
## Proof of Concept
[Attach screenshots, screen recording,
raw HTTP requests/responses, code snippets]
## Recommendations
[Specific, actionable remediation steps.
Reference relevant standards: OWASP, CVE, CWE]
## References
[Links to relevant CVEs, CWEs, writeups,
documentation]
Report Quality Checklist
Before you submit, verify:
- Severity rating is honest and justified with CVSS score
- Reproduction steps work from a clean session — test them yourself
- Impact section explains business consequences, not just technical facts
- All screenshots are annotated and clearly labeled
- Video POC (if included) demonstrates the full chain
- Raw HTTP requests included for technical reproducibility
- Remediation is specific — not "sanitize inputs" but how to sanitize inputs
- Report is written so both a technical reviewer and a non-technical manager can understand the risk
The Four-Section Document Structure
1. Executive Summary: Target scope, testing timeline, key findings summary, risk ratings. Written for someone who will not read the rest — they need to understand the risk level from this page alone.
2. Technical Details: Full vulnerability title, severity rating, affected components, technical description, steps to reproduce, impact analysis, supporting evidence. Written for the engineer who will fix it.
3. Remediation: Detailed recommendations, mitigation steps, additional security controls, references and resources. Written to be actionable — the developer reading this should be able to implement the fix without needing to ask follow-up questions.
4. Supporting Materials: Video demonstrations, annotated screenshots, HTTP request/response logs, code snippets, timeline of discovery. The evidence package that makes your findings undeniable.
Tools Reference
- Passive Recon: Subfinder, Amass, CRT.sh, GitHub-Subdomains
- Active Recon: Shuffledns, MassDNS, DNSX, FFuF
- Single Target: GAU, Waybackurls, Katana, Hakrawler
- HTTP Probing: httpx, httprobe
- JS Analysis: LinkFinder, subjs, GF patterns
- Dorking: GitDorker, manual Google dorks
- URL Discovery: Katana, Gospider, Hakrawler
- Parameters: Arjun, ParamSpider, FFuF
- Cloud Assets: CloudEnum, AWSBucketDump, S3Scanner
- Content Discovery: Feroxbuster, Dirsearch, FFuF
- API Routes: Kiterunner, Postman, Burp Suite
- Infrastructure: Amass, Shodan, Censys
- Vulnerability Testing: Ghauri, GF, qsreplace, curl
- POC: Burp Suite, OBS Studio, Greenshot
Final Word

Bug bounty hunting rewards people who are systematic about the boring parts and creative about the interesting ones.
The tools in this methodology are not the point. Any tool can be replaced. The workflow is the point — the discipline of moving through phases completely, not skipping recon because you want to get to exploitation, not skipping reporting because you want to move to the next target.
The hunters who earn consistently are the ones who treat methodology as infrastructure, not as a suggestion.
This methodology will be updated as new techniques, tools, and attack classes emerge.
For the complete, regularly updated version including new technique additions, community contributions, and extended tool documentation, the full HEXSEC AI Red Teaming and Bug Bounty Field Manual is on GitHub.
Star the repo if this helped. Contributions welcome.