Every serious bug bounty hunter knows the feeling. You open a program, spend hours on recon, find nothing interesting — because someone else already got there first. The asset existed for two weeks before you noticed it.
That's the real game. Not just finding bugs — finding them first.
Companies spin up new subdomains constantly. Staging environments, internal tools, API gateways, admin panels. These assets are often misconfigured, unpatched, and completely forgotten by the security team the moment they go live. The window between "this subdomain exists" and "someone finds a vulnerability in it" is small. You want to be in that window.
So I stopped doing recon manually. I automated the whole thing.
What This Pipeline Does
Every 4 days, without me touching anything, this workflow:
- Runs subfinder and findomain against 10 wildcard bug bounty targets
- Merges, sorts, and deduplicates everything into a clean domain list
- Diffs against the previous baseline to extract only new subdomains
- Probes every new domain with httpx — status code, page title, tech stack
- Sends a formatted alert straight to my Telegram, with full result files attached
If nothing new is found, I get silence. If something new shows up, I get a message with everything I need to start hunting immediately.
Tools Used
Recon
| Tool | Purpose |
| --- | --- |
| subfinder | Passive subdomain enumeration across 50+ sources |
| findomain | Certificate transparency-based discovery |
| httpx | HTTP probing — status, title, tech detection, content-length |
Automation
| Tool | Purpose |
| --- | --- |
| n8n | Self-hosted workflow automation |
| SSH node | Run shell commands on localhost from within n8n |
| Telegram Bot API | Receive alerts and file attachments |
System Utilities
| Tool | Purpose |
| --- | --- |
| sort -u | Sort and deduplicate merged domain lists |
| comm -23 | Diff two sorted files, extract lines unique to the new one |
| wc -l | Count new domains |
| /bin/bash -c | Required wrapper for process substitution in comm |
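The sort/comm pairing is the core of the diffing step, and the argument order matters: comm -23 keeps lines unique to the first sorted input, so today's list goes first and the baseline second. A minimal, self-contained demonstration with throwaway file names:

```shell
# comm -23 keeps lines unique to the FIRST sorted input, so today's list goes
# first and the baseline second. Throwaway demo files under /tmp:
printf 'a.example.com\nb.example.com\nc.example.com\n' > /tmp/today.txt
printf 'a.example.com\nb.example.com\n' > /tmp/baseline.txt

# Wrapped in bash because <(...) is a bash-only feature (see Problem 4 below)
/bin/bash -c 'comm -23 <(sort /tmp/today.txt) <(sort /tmp/baseline.txt)' > /tmp/new.txt

cat /tmp/new.txt   # c.example.com
```

Swap the two inputs and you would get the domains that disappeared since the last run instead of the new ones.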
Targets Being Monitored
These are all active wildcard scopes from public bug bounty programs:
*.rapyd.com
*.optus.com.au
*.movilepay.com
*.zoop.ws
*.etsy.com
*.asana.plus
*.solarcity.com
*.opera.software
*.opera.com
*.tesla.com
Workflow Architecture
The pipeline runs as a loop — each target goes through the full chain before the next one starts:
Schedule Trigger (every 4 days)
↓
Build Target List — Code node, 10 domains as individual items
↓
Loop Over Domains — splitInBatches node
↓
Run Subfinder — SSH node → localhost
↓
Run Findomain — SSH node → localhost
↓
Merge + Sort + Deduplicate — SSH node → localhost
↓
Check Baseline Exists — SSH node → localhost
↓
Is Baseline Present? — IF node
↓ TRUE ↓ FALSE
Diff Old vs New Save Baseline
↓ ↓
Count New Domains Telegram — First Run Notice
↓ ↓
Any New Domains? — IF node Back to Loop
↓ TRUE ↓ FALSE
Run HTTPX Telegram — No Change
↓ ↓
Read Output Back to Loop
↓
Format Message
↓
Telegram — Send Alert
↓
Send HTTPX File + Raw Domains File
↓
Update Baseline
↓
Back to Loop
Node Breakdown
1. Schedule Trigger
Fires the workflow automatically every 4 days. Nothing to do on my end.
2. Build Target List (Code Node)
Converts the 10 target domains into individual workflow items. Each item carries its own variables — target name, date, file paths — so nothing conflicts when the loop runs.
```javascript
const domains = [
  'rapyd.com', 'optus.com.au', 'movilepay.com',
  'zoop.ws', 'etsy.com', 'asana.plus',
  'solarcity.com', 'opera.software', 'opera.com', 'tesla.com'
];

const date = new Date().toISOString().split('T')[0];

return domains.map(domain => ({
  json: {
    target: domain,
    date: date,
    dataDir: `/root/recon/${domain}`,
    currentFile: `/root/recon/${domain}/${date}.txt`,
    latestFile: `/root/recon/${domain}/latest.txt`,
    subfinder_raw: `/tmp/subfinder_${domain}.txt`,
    findomain_raw: `/tmp/findomain_${domain}.txt`,
    merged: `/tmp/merged_${domain}.txt`,
    newDomains: `/tmp/new_domains_${domain}.txt`,
    httpxOut: `/tmp/httpx_out_${domain}.txt`
  }
}));
```
3. Loop Over Domains (splitInBatches)
Processes all 10 domains one by one. After each domain completes its full pipeline, it loops back for the next.
4. SSH Nodes
All shell commands run through n8n's SSH node pointed at 127.0.0.1. This covers subfinder, findomain, merge/sort, diff, httpx, and baseline management.
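As a rough sketch of what one iteration's enumeration and merge nodes execute (file names follow the Build Target List node above; the subfinder/findomain calls are shown as comments and replaced with stand-in output so the coreutils part runs anywhere):

```shell
# One loop iteration's enumeration + merge, sketched. Tool invocations are
# commented out; stand-in output files take their place for the demo.
TARGET=example.com
# subfinder -d "$TARGET" -silent -o "/tmp/subfinder_${TARGET}.txt"
# findomain -t "$TARGET" -q -u "/tmp/findomain_${TARGET}.txt"
printf 'a.example.com\nb.example.com\n' > "/tmp/subfinder_${TARGET}.txt"   # stand-in
printf 'b.example.com\nc.example.com\n' > "/tmp/findomain_${TARGET}.txt"   # stand-in

# Merge both sources, sort, deduplicate into the clean domain list
cat "/tmp/subfinder_${TARGET}.txt" "/tmp/findomain_${TARGET}.txt" \
  | sort -u > "/tmp/merged_${TARGET}.txt"

wc -l < "/tmp/merged_${TARGET}.txt"   # 3 unique hosts from 4 raw lines
```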
5. IF Nodes
Two branching points:
- First IF — does a baseline exist? First run vs subsequent run.
- Second IF — were any new domains found? Alert vs silence.
6. Code Node (Message Formatting)
Parses raw httpx output, counts alive hosts, slices the top 10 results, and builds a formatted HTML message for Telegram.
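The node itself runs JavaScript inside n8n; purely as an illustration of the same counting-and-slicing logic, here is a shell sketch against a fabricated sample of httpx's "url [status] [title] [tech]" line format:

```shell
# Fabricated sample of httpx output lines (real runs read the /tmp httpx file)
cat > /tmp/httpx_demo.txt <<'EOF'
https://dev.example.com [200] [Dev Portal] [nginx]
https://api.example.com [403] [Forbidden] [cloudflare]
EOF

ALIVE=$(wc -l < /tmp/httpx_demo.txt)      # every line is one alive host
TOP10=$(head -n 10 /tmp/httpx_demo.txt)   # preview slice for the alert body

printf '🟢 Alive (httpx): %s\n📋 Top 10 Results:\n%s\n' \
  "$ALIVE" "$TOP10" > /tmp/alert.txt
cat /tmp/alert.txt
```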
7. Telegram Nodes
Three send operations per target when new domains are found:
- Summary alert with stats and top 10 preview
- Full httpx results file as attachment
- Raw new domains list as attachment
File Storage Structure
Each target gets its own directory under ~/recon/:
~/recon/
├── rapyd.com/
│ ├── 2025-04-26.txt ← today's full domain list
│ ├── latest.txt ← baseline for next comparison
│ └── new_2025-04-26.txt ← diff output (new domains only)
├── tesla.com/
│ ├── 2025-04-26.txt
│ └── latest.txt
└── ...
Temporary working files go to /tmp/ — named per domain to prevent conflicts during the loop.
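The first-run-versus-diff branch around latest.txt can be sketched like this (demo paths under /tmp; the real workflow uses the ~/recon layout, and the date is hard-coded here for illustration):

```shell
# Baseline branch per target: first run seeds latest.txt, later runs diff
# against it. Demo directory under /tmp so it is safe to wipe.
rm -rf /tmp/recon_demo
DIR=/tmp/recon_demo/example.com
mkdir -p "$DIR"
printf 'a.example.com\nb.example.com\n' > "$DIR/2025-04-26.txt"

if [ -f "$DIR/latest.txt" ]; then
  # Subsequent run: new domains = lines in today's file missing from baseline
  /bin/bash -c "comm -23 <(sort '$DIR/2025-04-26.txt') <(sort '$DIR/latest.txt')" \
    > "$DIR/new_2025-04-26.txt"
else
  # First run: no diff possible, so save today's list as the baseline
  cp "$DIR/2025-04-26.txt" "$DIR/latest.txt"
fi
```

On the second execution the if branch fires instead, which is exactly what the workflow's first IF node decides.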
The Honest Part — Every Problem I Hit
This is what most writeups skip. Here's everything that broke and how I fixed it.
Problem 1 — n8n Secure Cookie Error on Startup
Launching n8n and opening http://localhost threw this:
Your n8n server is configured to use a secure cookie, however you are
either visiting this via an insecure URL, or using Safari.

n8n defaults to requiring HTTPS for session cookies. Accessing it over plain HTTP triggers the block. Fix:

```bash
N8N_SECURE_COOKIE=false n8n start
```

To make it permanent:
```bash
echo 'export N8N_SECURE_COOKIE=false' >> ~/.bashrc
source ~/.bashrc
```

Problem 2 — "Execute Command" Node Doesn't Exist
source ~/.bashrcProblem 2 — "Execute Command" Node Doesn't Exist
Spent time searching for "Execute Command" in the node panel. It's not there in newer n8n versions — it was renamed. Search for command instead — it shows up as "Execute a command".
Better approach: use the SSH node pointed at localhost. It gives you a proper credential layer and works even when direct shell execution is restricted.
Problem 3 — SSH Node Connecting to Localhost
The SSH node is built for remote servers. Getting it to work locally required an SSH server to actually be running on the machine, which it wasn't by default.
```bash
sudo apt install openssh-server -y
sudo systemctl enable ssh
sudo systemctl start ssh
```

Then in n8n, create an SSH credential:
- Host: 127.0.0.1
- Port: 22
- Username: your Linux username
- Auth: Password or Private Key
Problem 4 — comm Command Silently Failing
The comm command uses process substitution — <(sort file) — which is a bash-only feature. The SSH node defaults to /bin/sh, which doesn't support it. The command ran without errors but produced an empty diff file every time.
Fix — wrap it explicitly in bash:
```bash
/bin/bash -c 'comm -23 <(sort new.txt) <(sort old.txt) > diff.txt'
```

Without that wrapper, you'll think the pipeline is working fine while it's quietly doing nothing.
Problem 5 — Cloudflare Tunnel Is Not Needed
I assumed n8n needed a public HTTPS URL via Cloudflare Tunnel for Telegram to work. It doesn't.
There are two Telegram modes in n8n:
- Send mode — n8n makes outbound POST requests to Telegram's API. No public URL needed. This is what I'm using.
- Receive/webhook mode — Telegram pushes events to your n8n URL. This requires a public HTTPS endpoint. Cloudflare Tunnel is only needed here.
Since this workflow only sends notifications and never receives incoming messages, localhost works perfectly fine. No tunnel, no VPS, no cloud infrastructure.
Problem 6 — Variable References Breaking Inside a Loop
Referencing variables from earlier nodes inside a loop using $json.fieldName breaks — n8n loses the original item context after branching.
Fix — always reference the loop node directly:
{{ $('Loop Over Domains').item.json.target }}
{{ $('Loop Over Domains').item.json.currentFile }}

This locks the reference to the correct domain's variables at every step, regardless of which iteration you're on.
Problem 7 — Loop Not Advancing to the Next Domain
After the TRUE/FALSE branches of the IF nodes, the workflow just stopped. It didn't go back for the next domain.
Fix — add a No Operation (NoOp) node at the end of every branch and connect it back to the Loop Over Domains input. The splitInBatches node uses this return connection to know when to advance.
Every branch needs one — new domains found, no new domains, and first run. Miss any one of them and the loop stalls.
What the Telegram Alert Looks Like
When new subdomains are discovered, this lands in Telegram:
🔍 New Subdomains Discovered!
🎯 Target: tesla.com
📅 Date: 2025-04-26
🆕 New domains found: 14
🟢 Alive (httpx): 9
📋 Top 10 Results:
https://dev.tesla.com [200] [Dev Portal] [nginx]
https://api-staging.tesla.com [403] [Forbidden] [cloudflare]
...
Full results attached below ⬇️

Two files attached:
- httpx_tesla.com_2025-04-26.txt — full probing results
- new_domains_tesla.com_2025-04-26.txt — raw new subdomain list
What to Prioritize in the Results
Not every new subdomain is worth chasing. These are the ones I look at first:
- dev, staging, internal, admin, api in the name — frequently misconfigured and forgotten
- Status 200 on an unfamiliar tech stack — potential unprotected asset
- Status 403 — worth testing bypass techniques
- Status 500 — backend errors, possible information disclosure
- New assets on large programs — tesla.com, etsy.com, optus.com.au have higher payout potential
What's Coming Next
This pipeline is the foundation. Planned additions:
- Nuclei — auto-scan new assets with vulnerability templates immediately after httpx
- naabu — port scan interesting new subdomains
- gau / waybackurls — pull historical endpoints from new domains
- notify (ProjectDiscovery) — unified notification handler across multiple channels
Final Thoughts
The whole pipeline runs unattended. I get a Telegram message for every new asset discovered across all 10 targets — with live status and tech stack already included. If nothing changes, I hear nothing.
The biggest lesson: the simplest approach works. No cloud infrastructure, no VPS, no tunnel. Just n8n on localhost, SSH pointed at itself, and outbound Telegram API calls.
New asset monitoring is one of the highest ROI activities in bug bounty. Companies are always growing their infrastructure. You just need to be watching when they do.
Built with n8n · subfinder · findomain · httpx · Telegram Bot API
Tags: Bug Bounty API Security Cybersecurity Ethical Hacking Web Security