There's a scene every security engineer knows. It's 11pm. You're staring at a SIEM alert that looks suspicious but also looks like last Tuesday's noisy nonsense. You have seventeen browser tabs open, a cold coffee, and a slowly dimming conviction that you chose the right career. I've been there. And then, like a questionable ex who got really good at therapy, AI showed up and changed things.

Not in a "robots are taking your job" way. More like finally getting a research assistant who never sleeps, never judges your 3am questions, and doesn't sigh audibly when you ask them to re-read the same CVE for the fourth time. The trick is knowing how to use it without either blindly trusting it or, worse, being too proud to try.

"The AI doesn't replace your instincts. It just gives your instincts a very fast, very well-read colleague."

Here's what's actually working for security engineers in the real world, minus the promises that AI will magically protect you from your users clicking phishing links. (It won't. Nothing will.)


01 — Triage like you have backup

Alert fatigue is the industry's worst-kept secret. You're not bad at your job; you're drowning in low-signal noise. AI is surprisingly good at being a first-pass filter you can have a conversation with.

Feed a raw alert payload into a model and ask it to explain what's happening, what's suspicious, and what you'd need to confirm it's a real incident. You're not delegating the decision. You're shortcutting the orientation phase: the part where you're reading docs, correlating timestamps, and trying to remember what that port actually does.

The workflow: Paste your raw log lines or alert payload into Claude or ChatGPT with full context. Ask for a breakdown of benign vs. malicious explanations and what to look at next. While you're doing that, let your SIEM's built-in AI layer (Microsoft Sentinel's Copilot, Splunk AI, or Elastic's AI assistant) handle the first correlation pass. You're not replacing your instincts; you're just arriving at the investigation with better questions already formed.

Try this prompt:

"Here's a Splunk alert and the raw log lines that triggered it. Walk me through what could explain this behaviour, both malicious and benign. What would I need to look at next to tell them apart?"

The key is treating the output like a smart colleague's first take, not gospel. It'll occasionally hallucinate a CVE number or get a protocol wrong. You still verify. But you verify faster.
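If you find yourself doing this several times a shift, it's worth scripting the ritual. Here's a minimal sketch using the Anthropic Python SDK; the model name and the alert-as-JSON-file input are assumptions, so swap in whatever you actually run:

```python
# triage.py - first-pass alert triage, a sketch (pip install anthropic)
# Assumes: alert exported as a JSON file, ANTHROPIC_API_KEY in the environment.
# Scrub anything sensitive before it goes to a hosted API (see the rules below).
import json
import sys

import anthropic

PROMPT = """Here's a SIEM alert and the raw log lines that triggered it.
Walk me through what could explain this behaviour, both malicious and
benign. What would I need to look at next to tell them apart?

{alert}"""

def triage(alert_path: str) -> str:
    with open(alert_path) as f:
        alert = json.dumps(json.load(f), indent=2)
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: use whatever model you have
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(alert=alert)}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(triage(sys.argv[1]))
```

Pipe your alert export through it and you've got the orientation pass waiting before you've finished your coffee.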


02 — Turn CVE soup into actual understanding

Reading CVE advisories is a skill, a chore, and occasionally a cryptic puzzle written by someone who was explicitly trying not to help you understand the severity. AI is exceptional at translation work.

Paste in the advisory. Ask it to explain the exploit chain in plain English, tell you what conditions are required for exploitability in your environment, and draft the questions you should be asking your vendor. You'll go from "this is probably fine?" to an informed position in four minutes instead of forty.

The workflow: When a CVE drops, run it through your model of choice with your specific stack context included; if that context involves production data, use a local model rather than a hosted one. Then cross-reference with your vulnerability management platform: tools like Nucleus Security and Tenable One now layer AI prioritisation on top of raw CVE data, ranking by actual exploitability in your environment rather than just CVSS score. The combination of contextual AI explanation plus tool-driven prioritisation means you stop treating every critical CVE like a five-alarm fire.

Try this prompt:

"Here's the NVD entry for CVE-2024-XXXXX. We're running version 3.2.1 in a containerised environment without direct internet exposure. Is this exploitable for us, what's the realistic attack scenario, and what mitigations apply while we wait for the patch?"

03 — Code review that doesn't hate you

Security code review is one of those things everyone agrees is important and nobody has quite enough time to do properly. AI doesn't get bored on line 347. It doesn't have a meeting in twenty minutes. It will check your regex, your input validation, your auth logic, and your error handling with the same focus it brought to line one.

Use it as a first pass. Give it context about the threat model (what's this function touching, who calls it, where's the trust boundary) and ask it to find security issues through that lens. You'll still catch things it misses, but the obvious stuff (SQL injection, hardcoded secrets in places the linter missed, missing authorization checks) will be flagged before it reaches your desk.

The workflow: Layer your tools. Use Semgrep as your static analysis backbone — it runs in CI/CD, catches the OWASP risks, and its AI-assisted triage reportedly cuts false-positive review time by as much as 80%. Run Snyk Code or CodeQL alongside it for deeper dataflow analysis. Then bring in an LLM as a second pass for the things static analysis can't see: business logic flaws, missing authorization checks, subtle authentication timing issues. Give it the threat model context (who calls this function, where's the trust boundary) and ask it to look with that lens on.

One thing worth knowing: AI-generated code contains vulnerabilities in roughly 25–40% of cases. If your team is using Copilot or Cursor to ship code, that code needs security review just as much as human-written code, sometimes more.

Try this prompt:

"This function handles user-uploaded file processing. Our threat model includes malicious files from unauthenticated users. Review this code for security vulnerabilities — focus on path traversal, file type validation bypass, and resource exhaustion. Flag confidence level for each finding."

04 — Threat modeling at actual speed

Threat modeling is one of the most valuable things security engineering does and also one of the most time-consuming to start. The blank page problem is real. AI is a remarkably good brainstorming partner for STRIDE analysis, simply because it knows the patterns.

Give it an architecture diagram description, a data flow, or just a prose explanation of what you're building. Ask it to walk through STRIDE and generate threat scenarios. Expect maybe 40% of what comes back to be relevant to your specific context; that's still a solid head start, and it takes twenty minutes instead of half a day. You're generating the idea surface; you're still doing the judgment work.

The workflow: Start with a prose description of your architecture and feed it into your model of choice for a STRIDE pass. From there, use OWASP Threat Dragon or IriusRisk to structure and track what comes out of the session: the AI generates the idea surface; the tool keeps it organised and auditable. You're not starting from zero anymore.

Try this prompt:

"Here's our new API gateway architecture: [describe it]. Apply STRIDE threat modeling. For each threat category, list the top 3 attack scenarios most relevant to an API that handles PII and financial data. Then suggest which ones should have the highest priority mitigations."

05 — Write the documentation nobody wrote

Security runbooks. Incident response playbooks. The "what to do when X happens at 2am" guide that lives entirely in the head of one person who is currently on vacation. AI is genuinely great at scaffolding this content if you give it the raw material: your notes, your process, your war stories.

The workflow: Talk through an incident response flow like you're explaining it to a new hire. Paste in your rough notes or a post-mortem. Let the model structure it into a proper playbook, then edit for accuracy and institutional specifics. For anything that needs to live somewhere searchable and version-controlled, Confluence has AI-assisted drafting built in now, and Notion AI handles the same job well. The goal isn't a perfect first draft; it's a real document that the next person on call can actually use.

06 — The rules nobody talks about

A few things that sound obvious but genuinely trip people up.

Never paste production data, real credentials, or anything sensitive into a consumer AI product. If you need to work with production data, run a local model; Ollama with Llama or Mistral works fine for most analysis tasks, and nothing leaves your machine. This matters more than people act like it does: AI-service credential leaks grew 81% on public GitHub last year, and a decent chunk of that was engineers pasting things into workflows they hadn't thought through.
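The local setup is less work than it sounds. With Ollama running and a model pulled, the whole nothing-leaves-your-machine loop is a few lines; the model name is an assumption, use whatever you've pulled locally:

```python
# local_triage.py - analysis against a local model; nothing leaves the box
# Assumes Ollama is running locally and a model is pulled (ollama pull llama3)
import requests

def ask_local(prompt: str, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local("Explain what this log line means: <paste here>"))
```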

Treat the output like a smart intern's first draft: fast, often right, needs verification. Ask it to explain its reasoning; if it can't, be more skeptical. The moment you stop engaging critically is the moment it starts costing you.

Use the adversarial framing. Ask it to steelman the attacker's perspective: "Here's our mitigation plan — how would a motivated attacker get around it?" That's where AI gets genuinely useful for security specifically. It can generate attack variations faster than any single brain, and it doesn't have your blind spots.

Watch your AI coding tools as a new attack surface. Your developers' Cursor rules files and Copilot configs can be tampered with to silently generate backdoored code. Deploy GitGuardian or TruffleHog as pre-commit hooks across all repos, and extend that coverage to your AI tool workflows, not just the standard CI/CD pipelines.

The Carrie Ending

And just like that, I couldn't help but wonder: what if the most dangerous thing in security was never the threat actor, the zero-day, or even the user who clicked the link? What if it was always the engineer who didn't understand the tools they were using, but was using them anyway?

AI is just the latest tool. Know when, how, and why you're using it.