🤔 Why Automate Recon at All?
When you're hunting on multiple programs simultaneously, manual recon becomes a bottleneck fast. You spend more time running the same commands over and over than actually finding vulnerabilities.
An automated pipeline solves this:
- Runs recon while you sleep
- Catches new assets the moment they appear
- Keeps everything organized per target
- Lets you focus on the interesting parts: analysis and exploitation
This is the pipeline I built. It's not perfect. It's iterative. But it finds things.
🗺️ The Pipeline Overview
Here's the order of operations my pipeline follows:
Stage 1: Subdomain Enumeration. Passive and active discovery using multiple tools and sources simultaneously.
Stage 2: Live Host Detection. Filter out dead subdomains and identify what's actually running.
Stage 3: Port and Service Scanning. A light scan to find non-standard ports and services.
Stage 4: Content Discovery. Directory and endpoint fuzzing on confirmed live hosts.
Stage 5: Vulnerability Scanning. Automated template-based scanning with Nuclei.
Stage 6: Notification. Alerts sent to your phone or Discord when something interesting surfaces.
🛠️ Tools in the Stack
Everything here is free and open source:
- Subfinder: passive subdomain discovery from 40+ sources
- Assetfinder: additional passive subdomain sources
- HTTPX: probe live hosts, grab titles, status codes, tech stack
- Naabu: fast port scanner
- Katana: crawl and extract endpoints from live sites
- Nuclei: vulnerability scanning using community templates
- Notify: push notifications to Discord, Slack, or Telegram
- Anew: deduplicate and track new results over time
Install everything at once:
go install github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
go install github.com/projectdiscovery/httpx/cmd/httpx@latest
go install github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
go install github.com/projectdiscovery/katana/cmd/katana@latest
go install github.com/projectdiscovery/naabu/v2/cmd/naabu@latest
go install github.com/projectdiscovery/notify/cmd/notify@latest
go install github.com/tomnomnom/anew@latest
go install github.com/tomnomnom/assetfinder@latest

⚙️ The Pipeline Script
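Before the first run, it helps to confirm the binaries actually landed on your PATH (Go installs them to `$(go env GOPATH)/bin`, usually `~/go/bin`; add that directory to PATH if anything shows as missing). A small pure-shell pre-flight check:

```shell
# Pre-flight check: make sure every tool from the stack is reachable on PATH.
tools="subfinder assetfinder httpx naabu katana nuclei notify anew"
missing=0
for t in $tools; do
    if ! command -v "$t" >/dev/null 2>&1; then
        echo "missing: $t"
        missing=$((missing + 1))
    fi
done
echo "tools checked: 8, missing: $missing"
```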
This is the core script. One argument: the target domain.
#!/bin/bash
# recon.sh - Automated bug bounty recon pipeline
if [ -z "$1" ]; then
    echo "Usage: $0 <target-domain>" >&2
    exit 1
fi

TARGET="$1"
OUTPUT="$HOME/recon/$TARGET"
mkdir -p "$OUTPUT"/{subdomains,hosts,ports,endpoints,vulns}
echo "[*] Starting recon on: $TARGET"
echo "[*] Output directory: $OUTPUT"
# ──── STAGE 1: SUBDOMAIN ENUMERATION ────
echo "[*] Enumerating subdomains..."
subfinder -d "$TARGET" -silent -o "$OUTPUT/subdomains/subfinder.txt"
assetfinder --subs-only "$TARGET" >> "$OUTPUT/subdomains/assetfinder.txt"
# Merge and deduplicate
cat "$OUTPUT"/subdomains/*.txt | sort -u | anew "$OUTPUT/subdomains/all_subdomains.txt"
echo "[+] Subdomains found: $(wc -l < "$OUTPUT/subdomains/all_subdomains.txt")"
# ββββ STAGE 2: LIVE HOST DETECTION ββββ
echo "[*] Probing live hosts..."
httpx -l "$OUTPUT/subdomains/all_subdomains.txt" \
    -title -tech-detect -status-code \
    -o "$OUTPUT/hosts/live_hosts.txt" -silent
echo "[+] Live hosts: $(wc -l < "$OUTPUT/hosts/live_hosts.txt")"
# Extract just the URLs for next stages
awk '{print $1}' "$OUTPUT/hosts/live_hosts.txt" \
    > "$OUTPUT/hosts/live_urls.txt"
# ──── STAGE 3: PORT SCANNING ────
echo "[*] Scanning ports..."
naabu -l "$OUTPUT/subdomains/all_subdomains.txt" \
    -top-ports 1000 \
    -o "$OUTPUT/ports/open_ports.txt" -silent
# Flag interesting non-standard ports
grep -E ":(8080|8443|8888|9000|4443|3000|5000|6379|27017)$" \
    "$OUTPUT/ports/open_ports.txt" \
    > "$OUTPUT/ports/interesting_ports.txt"
# ──── STAGE 4: ENDPOINT DISCOVERY ────
echo "[*] Crawling endpoints..."
katana -list "$OUTPUT/hosts/live_urls.txt" \
    -jc -d 3 -silent \
    -o "$OUTPUT/endpoints/crawled.txt"
echo "[+] Endpoints discovered: $(wc -l < "$OUTPUT/endpoints/crawled.txt")"
# ──── STAGE 5: VULNERABILITY SCANNING ────
echo "[*] Running Nuclei scans..."
# Update templates first
nuclei -update-templates -silent
nuclei -l "$OUTPUT/hosts/live_urls.txt" \
    -severity critical,high,medium \
    -o "$OUTPUT/vulns/nuclei_results.txt" \
    -silent

# Guard the count in case no results file was produced
VULN_COUNT=$(wc -l < "$OUTPUT/vulns/nuclei_results.txt" 2>/dev/null || echo 0)
echo "[+] Potential vulnerabilities: $VULN_COUNT"
# ──── STAGE 6: NOTIFY ────
if [ "$VULN_COUNT" -gt 0 ]; then
    echo "Recon complete for $TARGET. Found $VULN_COUNT potential vulnerabilities." \
        | notify -silent
fi
echo "[✓] Recon complete. Check: $OUTPUT"

Run it:
chmod +x recon.sh
./recon.sh hackerone.com

🔔 Setting Up Notifications
The part that makes automation actually useful: getting alerted when something new appears.
Create the Notify config at ~/.config/notify/provider-config.yaml:
discord:
  - id: "recon-alerts"
    discord_channel: "recon"
    discord_username: "ReconBot"
    discord_webhook_url: "YOUR_DISCORD_WEBHOOK_URL"

Now pipe any output through Notify:
cat nuclei_results.txt | notify

⏰ Scheduling with Cron: Continuous Monitoring
The real power is scheduling the pipeline to run automatically and only alert on new findings. The anew tool handles this: it only outputs lines that haven't been seen before.
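If you want to see exactly what anew is doing, its behavior can be mimicked in plain shell: `grep -vxFf` filters out lines already present in the known file, and `tee -a` records the survivors. A small sketch with made-up hostnames:

```shell
# Mimic anew: emit only stdin lines not already in the known file,
# and append those new lines to it (illustration only - anew does
# this in one pass, preserving input order).
known="$(mktemp)"
printf 'a.example.com\nb.example.com\n' > "$known"

# b.example.com is already known, so only c.example.com is printed and appended
printf 'b.example.com\nc.example.com\n' | grep -vxFf "$known" | tee -a "$known"
```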
# Edit crontab
crontab -e

# Run recon on target every 6 hours, only alert on new findings
0 */6 * * * /home/user/recon.sh hackerone.com 2>/dev/null

For new subdomain alerts specifically:
# In your pipeline, replace the anew line with:
cat "$OUTPUT"/subdomains/*.txt | sort -u | \
    anew "$HOME/recon/known_subdomains_$TARGET.txt" | \
    notify -silent

This means you'll only get notified when a subdomain appears for the first time. New subdomains on a program often mean new attack surface, and new attack surface often means low-hanging fruit.
📁 Output Structure
After the pipeline runs, your directory looks like this:
~/recon/hackerone.com/
  subdomains/
    all_subdomains.txt        (every subdomain found)
  hosts/
    live_hosts.txt            (with status codes + tech)
    live_urls.txt             (clean URLs for scanning)
  ports/
    open_ports.txt            (all open ports)
    interesting_ports.txt     (non-standard ports)
  endpoints/
    crawled.txt               (discovered endpoints)
  vulns/
    nuclei_results.txt        (potential vulnerabilities)

💡 What I Prioritize After Recon Runs
Automation finds the surface. You find the vulnerabilities. Here's my triage order:
- Interesting ports first: non-standard ports like 8080, 3000, 9000 often run internal apps with weaker security than the main site
- New subdomains: fresh assets get less attention from the security team
- High-entropy subdomains: names like dev-internal-api-v2.target.com are more likely to be interesting than www.target.com
- Tech stack: HTTPX shows what tech is running. Old versions, specific frameworks, and internal tools are worth digging into manually
- Nuclei medium findings: critical and high get a lot of attention. Mediums are often ignored and can chain into something worse
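For that last point, mediums are easy to pull out of the Nuclei output with grep. The sample lines below only mimic Nuclei's default `[template-id] [protocol] [severity] URL` line format, and the hostnames are invented:

```shell
# Pull medium-severity findings out of a Nuclei result file for manual triage
results="$(mktemp)"
cat > "$results" <<'EOF'
[tech-detect:nginx] [http] [info] https://a.example.com
[exposed-panel] [http] [medium] https://admin.example.com
[cve-2021-44228] [http] [critical] https://old.example.com
EOF

# Only the exposed-panel line matches [medium]
grep '\[medium\]' "$results"
```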
⚠️ Important Before You Run This
A few things that matter:
- Only run this against programs where you have explicit permission to test
- Check the program's scope carefully: *.target.com doesn't always mean every subdomain is in scope
- Aggressive scanning can trigger WAFs and get your IP blocked
- Some programs explicitly prohibit automated scanning: read the policy before running anything
🏁 Wrapping Up
- The pipeline won't win you bounties on its own. Automated tools find the same things for everyone. What matters is what you do with the output.
- The real edge is speed and consistency. When a new subdomain appears and you're the first person to probe it properly, the odds are in your favor. That's what this pipeline gives you.
- Build it, run it, iterate on it. Add tools that work for your style, cut ones that don't. After a few months it'll barely feel like setup anymore. 🎯