Cross-site scripting (XSS) isn't just some theoretical threat from a textbook. It's a code vulnerability where attackers inject malicious scripts into trusted websites, and when users interact with the compromised site, those scripts execute in their browsers. Translation? Someone can steal cookies, hijack sessions, impersonate users, or straight-up take over accounts. In 2026, it's still one of the most common vulnerabilities out there.

And here's the kicker: while React reduces XSS risk with built-in protections like automatic escaping, XSS can still occur if developers introduce insecure code.

React's Safety Net (And Where It Ends)

Here's what React does protect you from: When you render content using JSX curly braces, React automatically escapes potentially dangerous characters by converting them to their HTML entities. So if someone tries to inject <script>alert('XSS')</script> into your component, React will render it as harmless text.

function UserGreeting({ userName }) {
  // Safe - React escapes the content automatically
  return <div>Hello, {userName}!</div>;
}

// If userName = "<script>alert('XSS')</script>"
// React escapes it, so the page displays the literal text:
//   Hello, <script>alert('XSS')</script>!
// The tags show up as text; the script never runs.

Beautiful, right? The script tags get neutered before they can do any damage.
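Conceptually, the escaping React applies boils down to replacing a handful of special characters with their HTML entities. Here's a minimal sketch of that idea — this is an illustration, not React's actual implementation:

```javascript
// Conceptual sketch of HTML entity escaping, similar in spirit to what
// React does for text rendered via JSX curly braces. Not React's real code.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')   // must run first, or it would double-escape
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml("<script>alert('XSS')</script>"));
// &lt;script&gt;alert(&#39;XSS&#39;)&lt;/script&gt;
```

Note the order matters: `&` has to be escaped first, otherwise the `&` inside `&lt;` would get escaped a second time.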

But then there's the dark side…

The Three Ways I Broke My Own Security

Mistake #1: dangerouslySetInnerHTML Without Sanitization

Remember that comment section vulnerability I mentioned? Here's what my code looked like:

// DON'T DO THIS
function Comment({ commentHTML }) {
  return (
    <div dangerouslySetInnerHTML={{ __html: commentHTML }} />
  );
}

The dangerouslySetInnerHTML prop bypasses React's automatic escaping mechanism. It's literally called "dangerous" for a reason. I was rendering user-generated HTML directly into the DOM with zero sanitization. Any attacker could have dropped in malicious scripts, and my app would have happily executed them.

The fix? DOMPurify became my new best friend.

import DOMPurify from 'dompurify';

function SafeComment({ commentHTML }) {
  // Sanitize BEFORE rendering
  const cleanHTML = DOMPurify.sanitize(commentHTML);
  
  return (
    <div dangerouslySetInnerHTML={{ __html: cleanHTML }} />
  );
}

DOMPurify is a lightweight and secure HTML sanitizer built by a team of XSS experts. It knows which HTML elements are safe (like <b> and <i>) and which ones are potential weapons (like <script> tags and onclick attributes). It strips out the dangerous stuff while keeping the legitimate formatting intact.

Install it with:

npm install dompurify

Mistake #2: Trusting User-Provided URLs

The second vulnerability was sneakier. I had a feature where users could share links to restaurants, and I'd display them like this:

// VULNERABLE CODE
function RestaurantLink({ userProvidedURL }) {
  return <a href={userProvidedURL}>Visit Website</a>;
}

Looks innocent, right? But here's the problem: URLs can use dangerous schemes like javascript: which lead to XSS attacks. An attacker could submit javascript:alert('Hacked!') as the URL, and when someone clicked that link, boom—code execution.

I've seen this exact attack vector used in production apps. It's frighteningly common.

The solution is validating URLs before you trust them:

function SafeRestaurantLink({ userProvidedURL }) {
  // Whitelist safe protocols
  const isSafeURL = (url) => {
    try {
      const parsed = new URL(url);
      return ['http:', 'https:'].includes(parsed.protocol);
    } catch {
      return false;
    }
  };

  const safeURL = isSafeURL(userProvidedURL) ? userProvidedURL : '#';
  
  return <a href={safeURL}>Visit Website</a>;
}

Better yet? Avoid accepting full URLs as input when possible. For example, an application that embeds YouTube videos could accept only the video ID as input, then build the full URL itself when needed. This strategy eliminates the attacker's ability to control the URL scheme entirely.
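Here's a sketch of that ID-only approach. The `buildVideoURL` helper and the 11-character ID pattern are assumptions for illustration — adjust the pattern to whatever provider you're embedding:

```javascript
// Sketch: accept only a video ID, never a full URL.
// Assumes the common 11-character YouTube ID format (letters, digits, -, _).
const VIDEO_ID_PATTERN = /^[A-Za-z0-9_-]{11}$/;

function buildVideoURL(videoId) {
  if (!VIDEO_ID_PATTERN.test(videoId)) {
    throw new Error('Invalid video ID');
  }
  // The scheme and host are hardcoded, so the user can't control them.
  return `https://www.youtube.com/watch?v=${encodeURIComponent(videoId)}`;
}

console.log(buildVideoURL('dQw4w9WgXcQ'));
// https://www.youtube.com/watch?v=dQw4w9WgXcQ
```

A payload like `javascript:alert(1)` never even reaches the URL — it fails the ID check and the request is rejected.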

Mistake #3: Unescaped JSON in Server-Rendered HTML

This one was subtle. When I was doing server-side rendering with Redux, I had code that looked like this:

// DANGEROUS PATTERN
app.get('/', (req, res) => {
  const initialState = { user: req.user };
  const html = `
    <html>
      <body>
        <div id="root">${renderToString(<App />)}</div>
        <script>
          window.__INITIAL_STATE__ = ${JSON.stringify(initialState)};
        </script>
      </body>
    </html>
  `;
  res.send(html);
});

This is risky because JSON.stringify() will blindly turn any data into a string, and if that data contains fields untrusted users can edit, like usernames or bios, they can inject markup that breaks out of the surrounding script tag.

Imagine if a user set their bio to: </script><script>alert('XSS')</script>

That would break out of the script tag and execute arbitrary JavaScript. This pattern used to be in the official Redux documentation, which is why it's scattered all over GitHub in boilerplate projects.

The fix requires escaping special characters:

function escapeJSON(jsonString) {
  return jsonString
    .replace(/</g, '\\u003c')  // blocks "</script>" breakouts
    .replace(/>/g, '\\u003e')
    .replace(/&/g, '\\u0026')  // blocks HTML entity tricks
    .replace(/'/g, '\\u0027');
}

app.get('/', (req, res) => {
  const initialState = { user: req.user };
  const safeJSON = escapeJSON(JSON.stringify(initialState));
  
  const html = `
    <html>
      <body>
        <div id="root">${renderToString(<App />)}</div>
        <script>
          window.__INITIAL_STATE__ = ${safeJSON};
        </script>
      </body>
    </html>
  `;
  res.send(html);
});

Now the angle brackets and other special characters are escaped using Unicode escape sequences, so they can't break out of the script context.
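To see the escaping in action, here's the same escapeJSON function applied to the malicious bio from above (repeated standalone so the snippet runs on its own):

```javascript
// Same escapeJSON as above, shown standalone for a quick check.
function escapeJSON(jsonString) {
  return jsonString
    .replace(/</g, '\\u003c')
    .replace(/>/g, '\\u003e')
    .replace(/&/g, '\\u0026')
    .replace(/'/g, '\\u0027');
}

const malicious = { user: { bio: "</script><script>alert('XSS')</script>" } };
const safe = escapeJSON(JSON.stringify(malicious));

// The literal "</script>" can no longer appear anywhere in the output,
// so the parser never sees a closing script tag.
console.log(safe.includes('</script>')); // false
```

A nice property: `\u003c` is a valid escape inside JSON strings, so the escaped output still parses back to the original data with JSON.parse.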

Building a Validation Strategy That Actually Works

After getting burned, I established some rules that have kept me (mostly) sane:

Input Validation at the Gate

Validation means checking that input conforms to a certain format, type, length, or range, and rejecting or correcting any invalid input. Don't wait until render time to validate — catch bad data early.

function validateUsername(username) {
  // Only allow alphanumeric and underscores
  const pattern = /^[a-zA-Z0-9_]+$/;
  
  if (!pattern.test(username)) {
    throw new Error('Invalid username format');
  }
  
  if (username.length > 50) {
    throw new Error('Username too long');
  }
  
  return username;
}
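Used at an API boundary, the validator works like a gate: valid names pass through untouched, anything else throws before the value reaches a component. A quick sketch (the function is repeated here so the example stands alone, and `handleSignup` is a hypothetical handler):

```javascript
// Same validator as above, repeated so this snippet runs standalone.
function validateUsername(username) {
  const pattern = /^[a-zA-Z0-9_]+$/;
  if (!pattern.test(username)) throw new Error('Invalid username format');
  if (username.length > 50) throw new Error('Username too long');
  return username;
}

// Hypothetical signup handler: reject early, at the gate.
function handleSignup(input) {
  try {
    return { ok: true, username: validateUsername(input) };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

console.log(handleSignup('alice_99'));
// { ok: true, username: 'alice_99' }
console.log(handleSignup("<script>alert(1)</script>"));
// { ok: false, error: 'Invalid username format' }
```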

Sanitization for the Win

Sanitization means removing or replacing any unwanted or harmful characters, tags, or attributes from the input. When you absolutely need to accept HTML (like in a rich text editor), sanitize it aggressively.

Here's my go-to sanitization component:

import DOMPurify from 'dompurify';

function SanitizedHTML({ content }) {
  const sanitizeConfig = {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'p', 'br'],
    ALLOWED_ATTR: []
  };
  
  const clean = DOMPurify.sanitize(content, sanitizeConfig);
  
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}

This approach gives you fine-grained control. You explicitly whitelist which HTML tags and attributes are allowed, and everything else gets stripped out.

Content Security Policy as Your Last Line of Defense

CSP is your last line of defense — even if XSS bypasses other protections, a strong CSP can prevent script execution.

I add CSP headers to every app now:

// Express.js example
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:;"
  );
  next();
});

This tells the browser to execute only scripts loaded from your own origin. Even if an attacker somehow injects a script tag, the browser will refuse to execute it. Note that script-src deliberately omits 'unsafe-inline', so injected inline scripts are blocked too.

Create Reusable Security Components

Instead of repeating sanitization logic everywhere, I built a library of secure components:

// SafeUserContent.jsx
import DOMPurify from 'dompurify';

export function SafeUserContent({ html, allowedTags = ['b', 'i', 'p', 'br'] }) {
  const config = {
    ALLOWED_TAGS: allowedTags,
    ALLOWED_ATTR: []
  };
  
  const clean = DOMPurify.sanitize(html, config);
  
  return <div dangerouslySetInnerHTML={{ __html: clean }} />;
}

// SafeLink.jsx
export function SafeLink({ href, children }) {
  const isSafe = (url) => {
    try {
      const parsed = new URL(url);
      return ['http:', 'https:'].includes(parsed.protocol);
    } catch {
      return false;
    }
  };
  
  const safeHref = isSafe(href) ? href : '#';
  
  return <a href={safeHref} rel="noopener noreferrer">{children}</a>;
}