I thought I understood security.

I'd studied the basics — network security, cryptography, access controls. I knew APIs existed. I knew they needed protection.

But I'd never heard of the OWASP API Security Top 10. I didn't know that Broken Object Level Authorization was the #1 vulnerability in APIs worldwide. I couldn't tell you the difference between authentication and authorization failures.

And I'd definitely never broken an API.

That changed when I joined the APIsec Fellowship. For the first eight weeks, I got to legally hack vulnerable applications — forging tokens, stealing data, creating money out of thin air. Then I learned how servers, gateways, and compliance frameworks work together to prevent exactly what I'd just exploited.

What I learned surprised me. Not because the vulnerabilities were complex. But because they were so simple.

The Assumption That Shattered in Week One

I used to think authentication was the finish line.

If users logged in with credentials, and the system issued a JWT, the API was secure. Right?

Wrong.

In my first week, I decoded a JWT and found this in the header:

{
  "alg": "none",
  "typ": "JWT"
}

"alg": "none" means no signature required. The API accepted unsigned tokens. Anyone could create a token claiming to be anyone—admin, CEO, whoever—and the system would trust it.

My first reaction: "Who would actually deploy this?"

Then I researched real breaches. This exact vulnerability has been found in production systems. Repeatedly.

That's when I realized: Security isn't about what developers should do. It's about what happens when they don't.

The Moment Everything Clicked

I was testing Damn Vulnerable Bank — a deliberately insecure banking app used for training. I logged in as User A, intercepted a request in Burp Suite, and changed one number in the URL:

GET /transactions/0127008382  →  GET /transactions/0127008300

The API returned User B's complete transaction history.

I sat there staring at the screen.

The API had authenticated me perfectly. My token was valid. The request was well-formed. Everything worked exactly as designed.

Except nobody asked: "Is this actually YOUR data?"

This is BOLA — Broken Object Level Authorization. It's the #1 API vulnerability worldwide. And it's devastatingly simple: the API trusts that if you provide an ID, you're allowed to access it.

From that moment, I stopped seeing APIs as endpoints. I started seeing them as trust decisions.

Every time an API accepts a parameter (an account number, a user ID, a transaction reference, etc.) it's deciding: "I trust this user should access this."

Now, whenever I see an ID in a request, I ask one question:

"What happens if the user lies?"

I Created $300 Out of Thin Air

This one still makes me laugh.

The banking app had a transfer endpoint.

Normal request:

POST /transfer
{
  "to_account": "0127008300",
  "amount": 100
}

Send $100 to another account. Simple.

But what if I sent a negative amount?

{
  "to_account": "0127008300",
  "amount": -300
}

The API processed it. My balance increased by $300.

No validation. No business logic check. The developers assumed users would only send positive numbers because… why would anyone send negative money?

Attackers don't play by assumptions.

Lesson learned: Never trust client input. Validate everything server-side, even things that "obviously" should never happen.
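In code, the missing check is a few lines. This is a hedged sketch of what server-side validation of the transfer amount might look like (function and field names are mine, not the app's):

```python
def transfer(from_balance: float, amount: float) -> float:
    """Debit `amount` from a balance, validating it server-side first."""
    # Reject non-numeric input (bool is a subclass of int, so exclude it too).
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        raise ValueError("amount must be a number")
    # The check the vulnerable app skipped: negative (or zero) transfers.
    if amount <= 0:
        raise ValueError("amount must be positive")
    # Business logic: you can only send money you actually have.
    if amount > from_balance:
        raise ValueError("insufficient funds")
    return from_balance - amount
```

Three if-statements. The $300 exploit only worked because none of them existed.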

Old APIs Don't Die. They Haunt You.

Here's something else that surprised me.

The vulnerable bank had two API versions running simultaneously:

  • v1: Password reset with 3-digit PIN (1,000 combinations)
  • v3: Password reset with 4-digit PIN (10,000 combinations)

The developers upgraded to v3 for better security. Great.

But they never turned off v1.

An attacker doesn't care which version is "current." They'll find the weakest one. I ran Burp Intruder against v1 and cracked PINs in a few hours. No rate limiting. No account lockout. Just 1,000 guesses and I was in.

Legacy APIs are shadow attack surfaces. Deprecation isn't enough. If it's reachable, it's exploitable.
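Even a crude rate limit would have stopped my Intruder run cold. As an assumption-laden sketch (a real deployment would use the gateway or a shared store like Redis, not in-process memory), a fixed-window limiter on reset attempts looks roughly like this:

```python
import time
from collections import defaultdict

class LoginRateLimiter:
    """Minimal fixed-window limiter: at most `limit` attempts per window."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(list)  # account -> attempt timestamps

    def allow(self, account: str) -> bool:
        now = time.monotonic()
        # Keep only attempts inside the current window.
        recent = [t for t in self.attempts[account] if now - t < self.window]
        self.attempts[account] = recent
        if len(recent) >= self.limit:
            return False  # over budget: reject before checking the PIN
        recent.append(now)
        return True
```

At 5 attempts per minute, brute-forcing even v1's 1,000-combination PIN space goes from a coffee break to hours per account — and an account lockout or alert would end it entirely.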

Beyond the Code: Defense in Depth

The second half of the curriculum — Weeks 5 through 7 — bridged the gap between code and infrastructure.

We moved beyond application vulnerabilities to study API Server Security, Gateways, and Compliance frameworks like PCI DSS. This taught me something crucial:

Security is not just one layer. It requires a Defense in Depth strategy.

Here's how I now think about it:

  • The Gateway acts as the bouncer — enforcing rate limits to prevent DoS attacks, validating tokens, blocking malicious patterns before they reach your application.
  • The Server ensures correct configuration — like CORS headers to prevent unauthorized cross-domain calls, TLS settings, and secure defaults.
  • The Code handles the logic and validation — ownership checks, input sanitization, business rule enforcement.

If one layer fails, the others must stand guard.

We cannot rely on the gateway alone to save bad code. And we cannot rely on code alone if the server is misconfigured. The vulnerabilities I exploited in the first four weeks? Many could have been caught — or at least mitigated — by proper gateway rules or server hardening. But when all layers failed, the breach was trivial.

This is why compliance frameworks like PCI DSS exist. They're not bureaucratic checklists — they're institutionalized lessons learned from countless breaches. When PCI DSS says "never store CVV data" or "implement rate limiting on authentication endpoints," it's because organizations learned the hard way what happens when you don't.

The Real World Impact: Connected Systems

Perhaps the most jarring lesson came in Week 8, when we discussed Connected Systems and automotive security.

It shattered the illusion that API security is just about protecting websites or databases.

When APIs control cars, medical devices, or smart homes, a vulnerability isn't just a data leak — it's a physical safety risk.

Think about it:

  • A BOLA vulnerability in a banking app exposes financial data
  • A BOLA vulnerability in a connected car could let an attacker unlock doors, disable brakes, or track someone's location

The negative transfer exploit I found? Annoying in a demo app. Catastrophic in a medical device that calculates dosages.

The "real world" stakes of our work are incredibly high. APIs aren't just plumbing between applications anymore. They're the nervous system of modern life — connecting everything from your thermostat to your pacemaker.

That realization changed how I approach this work. It's not just about finding bugs. It's about protecting people.

What Changed: From Checklist to Mindset

Initially, I approached security like a checklist:

  • ✅ Authentication implemented
  • ✅ HTTPS enabled
  • ✅ Input validation added
  • ✅ Secure!

Now I understand that's backwards.

Checklists catch known problems. Attackers find unknown ones.

The biggest mental shift? Learning the difference between these two questions:

Developer question: "Does it work how it's supposed to?"

Security question: "Does it NOT work how it's NOT supposed to?"

Read that again. It's subtle, but it's everything.

Developers test the happy path. They verify that valid users get valid data. That correct inputs produce correct outputs. That the system works.

Security testers verify the unhappy path. They check that invalid users don't get data. That incorrect inputs don't produce dangerous outputs. That the system fails safely.

The Damn Vulnerable Bank worked perfectly — for legitimate use. Login worked. Transfers worked. Transactions displayed correctly. Every feature functioned as designed.

But nobody tested whether it didn't work for illegitimate use. Nobody asked:

  • "Can User A access User B's data?" (Yes — BOLA)
  • "Can someone transfer negative money?" (Yes — business logic flaw)
  • "Can someone guess PINs forever?" (Yes — no rate limiting)

The app worked how it was supposed to. It also worked how it wasn't supposed to.

That's the mindset shift. Stop asking "does this work?" Start asking "does this fail correctly?"

Security isn't a feature you bolt on at the end. It's a lens through which you build from the start.

The One Principle I'd Tell Every API Team

If I could give developers one piece of advice from this experience:

"Validate at the gate. Verify at every door."

Most teams focus security at the API gateway — authentication, rate limiting, schema validation. That's important. That's the gate.

But once requests pass the gate, they often flow freely. No ownership checks. No field-level validation. No business logic verification.

The vulnerabilities I found weren't at the gate. They were inside — endpoints that assumed if you got past authentication, you belonged everywhere.

Defense in depth means:

  1. Gateway: Authentication, rate limiting, input schema
  2. Server: Proper configuration, CORS, TLS
  3. Endpoint: Authorization (ownership verification)
  4. Field: Whitelist what can be modified
  5. Business Logic: Validate operations make sense
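To make layer 4 concrete: field-level whitelisting blocks mass assignment, where a client sneaks extra fields (like "role": "admin") into an otherwise legitimate update. A minimal sketch, with hypothetical field names of my choosing:

```python
# Whitelist, not blacklist: only fields a user may legitimately change.
ALLOWED_UPDATE_FIELDS = {"display_name", "email"}

def apply_profile_update(profile: dict, payload: dict) -> dict:
    """Apply a client-supplied update, rejecting any non-whitelisted field."""
    unknown = set(payload) - ALLOWED_UPDATE_FIELDS
    if unknown:
        # A blacklist would miss fields you forgot to list; a whitelist can't.
        raise ValueError(f"fields not allowed: {sorted(unknown)}")
    return {**profile, **payload}
```

The design choice matters: a blacklist fails open when the schema grows, while a whitelist fails closed.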

A valid token isn't a skeleton key. Treat every request with suspicion.

TL;DR

  1. Authentication ≠ Authorization. Knowing who someone is doesn't mean they should access everything.
  2. Every parameter is a trust decision. If users supply it, users can manipulate it.
  3. Defense in depth isn't optional. Gateway, server, and code must all do their part.
  4. Old APIs are attack surfaces. If it's reachable, assume it's being probed.
  5. APIs control the physical world now. The stakes are higher than data — they're safety.
  6. Think like an attacker. Ask "what if someone lies?" at every step.
  7. Security is a mindset, not a checklist. Controls can be bypassed. Adversarial thinking compounds.

The real world is adversarial. Build accordingly.

If this resonated with you, let's connect on LinkedIn. And if you're curious about API security, check out the OWASP API Security Top 10 — it's free and will change how you see every app you use.

Tags: #APISecurity #Cybersecurity #OWASP #InfoSec #TechCareers #WomenInCybersecurity #PenetrationTesting #DefenseInDepth