Introduction
Not every bug bounty finding requires a sophisticated exploit chain. Sometimes, the most impactful vulnerabilities come from asking a simple question: what happens if I do this twice — at exactly the same time?
This writeup covers a race condition I discovered on a job application portal belonging to a multinational staffing company. The platform had a clear business rule: one application per candidate per job offer. Within 10 minutes of testing, I was able to break that rule completely — and walk away with a €200 bounty.
Background: What Is a Race Condition?
A race condition in web applications occurs when a server handles multiple concurrent requests that read and write shared state without coordination, so the outcome depends on request timing.
The typical flow for a "one action per user" restriction looks like this:
Request arrives → Check: has user done X? → No → Do X → Save to DB

The vulnerability appears when two requests arrive simultaneously:
Request A arrives → Check: has user done X? → No ─────────────────┐
Request B arrives → Check: has user done X? → No (A not saved yet) ┘
↓
Both execute X → Two records in DB

This is a classic TOCTOU (Time-of-Check to Time-of-Use) flaw. The check and the write are not atomic, so concurrent requests slip through the gap.
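The gap is easy to reproduce in miniature. The sketch below is a simulation, not the target's code: a barrier forces two threads through the application-level check before either one writes, which is exactly the interleaving a real server hits under concurrent load.

```python
import threading

db = []                          # simulated "applications" table
barrier = threading.Barrier(2)   # forces the worst-case interleaving

def apply(user, job):
    # Time-of-check: application-level duplicate check
    already = any(record == (user, job) for record in db)
    barrier.wait()               # both threads pass the check before either writes
    if not already:
        db.append((user, job))   # time-of-use: non-atomic write

t1 = threading.Thread(target=apply, args=("alice", 42))
t2 = threading.Thread(target=apply, args=("alice", 42))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(db))  # → 2: the "one application per user" rule is broken
```

Without the barrier the duplicate only appears when timing happens to line up; the barrier just makes the unlucky schedule deterministic.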
The Target
The target was a career portal for a large staffing group, running a bug bounty program on YesWeHack. Candidates apply to job listings through a multi-step form. On the final step, clicking "Submit" fires a POST request:
POST /fr/candidature/{id}/save
Host: careers.[REDACTED].com
Content-Type: application/json

The platform enforces a rule that each candidate can only apply once per listing. Submit a second time normally, and you get an error. That's the intended behavior — and it worked fine for sequential requests.
Discovery
During normal testing, I completed an application and tried submitting it a second time. The server correctly rejected it with an "already applied" error.
That told me the check exists — but it also told me the check is application-level, not enforced at the database layer. That's a red flag for race conditions.
The question I asked next: what if both requests arrive before either one commits to the database?
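This is also why the robust fix lives at the database layer, not in application code. A minimal sketch with SQLite (the table and column names are illustrative, not the target's actual schema): with a UNIQUE constraint, the second insert fails no matter how the requests are timed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE applications (
        candidate_id INTEGER,
        job_id       INTEGER,
        UNIQUE (candidate_id, job_id)   -- enforced atomically by the DB
    )
""")

conn.execute("INSERT INTO applications VALUES (1, 42)")
try:
    conn.execute("INSERT INTO applications VALUES (1, 42)")
except sqlite3.IntegrityError:
    print("duplicate rejected")  # → duplicate rejected
```

The constraint makes check and write a single atomic operation, closing the TOCTOU window entirely.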
Exploitation
The setup is straightforward using Burp Suite:
Step 1: Complete the job application form normally up to the final submission step.
Step 2: Intercept the POST /fr/candidature/{id}/save request in Burp Suite and send it to Repeater.
Step 3: Duplicate the Repeater tab 30 times, add the tabs to a group, and set the send mode to "Send group (parallel)" — Burp synchronizes the requests (last-byte sync over HTTP/1.1, or the single-packet attack over HTTP/2) so all 30 hit the server nearly simultaneously, minimizing timing variance between them.
Step 4: Execute.
Result: All 30 requests returned 200 OK.
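The same burst can be reproduced outside Burp. The sketch below runs against a throwaway local server — the endpoint path and request count are taken from the report, but everything else (host, handler, body) is a stand-in — and uses a barrier to release all 30 POSTs at once.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

N = 30
hits = []  # one entry per "application record" the server creates

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        hits.append(1)                 # simulate the non-atomic record creation
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):      # keep the demo's output quiet
        pass

class Server(ThreadingHTTPServer):
    request_queue_size = 64            # large enough backlog for the whole burst

server = Server(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
# Path mirrors the report's endpoint; the host is a local stand-in
url = f"http://127.0.0.1:{server.server_port}/fr/candidature/123/save"

barrier = threading.Barrier(N)

def fire():
    barrier.wait()  # release all requests at (nearly) the same instant
    urllib.request.urlopen(urllib.request.Request(url, data=b"{}", method="POST"))

threads = [threading.Thread(target=fire) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(hits))  # → 30 creations from a single burst
```

Against a vulnerable target, each of those 30 responses would correspond to a separate record — which is exactly what the response hashes confirmed.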
The Key Evidence — Unique Hashes
Getting 30 success responses is interesting, but it's not enough on its own. A poorly implemented server might return a cached success response without actually writing anything new. That would be a non-issue.
What made this finding undeniable was the response body. Each response contained a field called saveHash:
Request 1 → { "saveHash": "E1rdQasmKDL8P", ... }
Request 2 → { "saveHash": "e6Q3eMsqGXvv3", ... }
Request 3 → { "saveHash": "mN7pQxRtYj2wK", ... }
...
Request 30 → { "saveHash": "zB4cLvHqFd9sM", ... }

Every single response had a different, unique hash.
This is the smoking gun. The server wasn't replaying a cached response — it was executing the full creation logic 30 separate times, generating 30 distinct records in the ATS (Applicant Tracking System) database for the same candidate.
The email inbox confirmed it: 30 individual confirmation emails arrived instantly, each corresponding to a unique application record.
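The uniqueness argument is easy to verify mechanically. A small sketch using a few of the captured response bodies (values abbreviated from the report): a cached replay would repeat the same hash, while distinct hashes imply distinct records.

```python
import json

# Response bodies captured from the parallel requests (illustrative subset)
responses = [
    '{"saveHash": "E1rdQasmKDL8P"}',
    '{"saveHash": "e6Q3eMsqGXvv3"}',
    '{"saveHash": "mN7pQxRtYj2wK"}',
]

hashes = [json.loads(body)["saveHash"] for body in responses]

# All-unique hashes rule out a cached/replayed success response
print(len(hashes) == len(set(hashes)))  # → True
```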
Impact
Business Logic Bypass: The "one application per candidate" rule was completely defeated. An attacker could flood any job listing with applications from a single account, overwhelming the HR team's review queue.
Database Integrity Corruption: The 30 records created aren't simple duplicates that can be filtered automatically — each has a unique ID and hash, making them look like 30 distinct, legitimate applications in the ATS. Manual cleanup by IT/HR is required.
Email Abuse: The platform was forced to send 30 transactional emails from its own trusted domain instantly. At scale, this could damage the company's email sending reputation and be used to spam a target's inbox.
Triage & Getting the Bounty
The report was accepted as Medium severity (CVSS 5.3). However, the initial response awarded only quality points — no monetary bounty.
I followed up with a clear argument:
- The program's reward grid lists Medium findings on this asset tier as eligible for monetary bounty (starting €50).
- This is not a rate limiting issue (which the program explicitly excludes) — it's a business logic bypass with proven database corruption, evidenced by the unique hashes.
- The distinction matters: rate limiting issues are about frequency; this is about the server creating duplicate database records it was never supposed to create.
The team reconsidered and awarded €200 + 15 quality points.
Lesson: When you believe a finding deserves a bounty, follow up professionally and make the case with evidence. Triage teams handle many reports — sometimes the impact just needs to be spelled out more clearly.