In my free time I go down rabbit holes. Then I bring those rabbit holes into my home lab and break things until I understand them. It's not a productivity strategy. It's just how my brain works.
Lately, the rabbit holes have been leading to the same uncomfortable conclusion: the skill gap in red teaming is widening, and it's not about who knows more tools. It's about who understands why things work, who's adapted to the new attack surface, and who's stopped treating AI like a buzzword and started treating it like what it actually is, a new layer of infrastructure with its own vulnerabilities, its own pivot paths, and its own completely undefended flanks.

This is the list I wish someone had handed me earlier. Not a certification roadmap. Not a "top 10 tools" listicle. The actual skills that are making the difference in 2026 engagements.
1. Stop learning tools. Start learning systems.
Tools give you confidence. They don't give you judgment.
The gap that separates a good red teamer from a great one in 2026 is pattern recognition, knowing what a normal environment looks like so you immediately notice what's weird. That's not a tool skill. That's a systems skill. You need to understand how Active Directory actually runs in enterprise environments (not lab environments), how cloud IAM is configured in the real world (not the AWS tutorial), how CI/CD pipelines are wired together, and where the trust boundaries between them are thin.
Because that's where you get in. Not through the front door, through the seam between two systems that were both configured by different teams who never talked to each other.
Where to build this: Spend time on the defense side, even briefly. Read incident reports. Study breach postmortems. Hack The Box's Enterprise labs are good. So is building your own AD environment and deliberately misconfiguring it, then attacking it. The mistakes you make defending teach you more than any course.
2. Cloud is not optional anymore. It's the engagement.
If you're still treating cloud as "an extension of the network," you're already behind. In 2026, the cloud is the network. And the attack surface is completely different from what traditional red team training covers.
The interesting stuff isn't in the compute layer, it's in the identity layer. Overpermissioned service accounts. Misconfigured IAM roles. Cross-account trust relationships that nobody mapped. Forgotten OAuth apps with read access to everything. An Azure App Registration that was created in 2021 for a PoC that never got cleaned up, sitting quietly with Contributor rights to three subscriptions.
That's where real lateral movement happens in 2026. Not through RDP — through a forgotten identity.
What to focus on:
- AzureHound / BloodHound for Azure — maps role assignments, group memberships, and privilege escalation paths in Entra ID the same way BloodHound does for on-prem AD. Invaluable.
- Altered Security's CARTP / CARTE — the best Azure red team labs I've come across. Real Azure tenants, real defenses, real OPSEC pressure.
- SANS SEC565 — expensive, but worth it if your org will pay. The 2026 update integrates AI across the entire kill chain and is genuinely one of the most current courses out there for full red team operations.
- CWL's MCRTA / CHMRTS — multi-cloud red teaming, hybrid environments, bypassing controls across AWS + Azure + GCP. The coverage is excellent if you work across multiple cloud providers.
3. EDR evasion is a craft, not a checklist.
The days of dropping a Meterpreter shell and walking away are over. EDRs have gotten good. Really good. And bypassing them in 2026 requires understanding what you're actually evading, not just running a tool that someone else wrote.
This means Windows Internals. Not surface-level "here's how processes work" internals — actual ETW telemetry, how EDRs hook into user-mode and kernel-mode operations, what Attack Surface Reduction rules detect and why, how memory scanning works and how to stay out of it.
The operators who are consistently getting through modern EDRs aren't using magic tools. They're writing custom loaders. Modifying existing tooling at the source. Understanding why a signature fires so they can change exactly the thing that triggers it.
What to look at:
- Altered Security's CETP (Certified Evasion Techniques Professional) — covers Windows Internals, reversing EDRs, bypassing Microsoft Defender for Endpoint, ETW tampering. One of the most practically useful courses in this space right now.
- Zero-Point Security's BOF Development and Tradecraft — if you want to write your own Beacon Object Files and understand the tradecraft at the operator level, this is it.
- Sektor7's Malware Development courses — still the go-to for understanding custom implant development. The Intermediate course especially.
4. AI security is a real attack surface. Get on it before everyone else does.
Here's the thing about AI security in 2026: most organizations are deploying agents, copilots, and LLM-powered workflows as fast as humanly possible. Almost none of them are testing the security of those deployments. Which means the attack surface is enormous, largely unexplored, and basically undefended.
I spent a lot of recent lab time on AI-Induced Lateral Movement — the class of attacks where a prompt injection payload in something as innocuous as a GitHub issue title or an EC2 metadata tag gets picked up by an AI agent and used to pivot across systems the agent has legitimate access to. The agent isn't exploited. It's persuaded. And every action it takes looks authorized in the logs because it is authorized — just by the attacker's instructions, not the user's.
This isn't theoretical. It happened. Clinejection crossed six system boundaries — GitHub issue → AI triage agent → shell execution → CI/CD cache → npm token → registry — with one sentence in a text field. Traditional security tooling saw nothing.
If you're doing red team engagements in 2026 and you're not asking "what AI agents does this environment run, what do they have access to, and what inputs can I influence?" — you're missing an entire attack dimension.
Where to actually learn this:
- OffSec AI-300 / OSAI+ — OffSec's new AI red teaming certification, launching March 2026. Covers offensive techniques against LLMs and ML-enabled environments. First serious cert in this space from a credible offensive security provider.
- EC-Council COASP (Certified Offensive AI Security Professional) — covers prompt injection, jailbreaking, guardrail bypass, OWASP LLM Top 10, MITRE ATLAS. More structured cert path if that's what you need.
- Practical DevSecOps CAISP — hands-on labs against real AI systems, supply chain attacks on AI pipelines, AI-specific threat modeling with STRIDE. Solid for practitioners.
- HackAPrompt / Maven AI Red Teaming Masterclass — taught by the team that ran the first AI red teaming competition, includes sessions with practitioners like Johann Rehberger (who built Microsoft Azure Red Teams). More hands-on than academic.
- DeepLearning.AI — Red Teaming LLM Applications — free, beginner-friendly entry point. Good for getting oriented before going deeper.
- PyRIT (Python Risk Identification Toolkit) — Microsoft's open-source AI red teaming framework, created by the same person who now runs the NDC Security AI red teaming workshop. If you want to automate prompt injection testing at scale, start here.
- Garak — open-source LLM vulnerability scanner. Runs automated adversarial probes against LLMs. Good for adding AI testing to your toolkit without a full platform budget.
- Promptfoo — automates offensive testing of GenAI systems and conversational agents. Particularly useful for testing AI deployments you're assessing during an engagement.
5. Automate the boring parts so you can focus on the interesting parts.
The best red teamers I know are not manually running the same recon commands on every engagement. They've automated it. And increasingly in 2026, that automation is AI-assisted.
This isn't about using AI to "do the hacking for you." It's about using automation workflows to handle the repetitive operational work — recon aggregation, report generation, MITRE ATT&CK tagging, infrastructure provisioning — so your human brain can spend its time on the parts that actually require it: the creative pivots, the edge cases, the things the tooling doesn't know to look for.
The tools worth knowing:
- n8n — open-source workflow automation that's become genuinely popular in security ops circles. You can build automated recon pipelines (Subfinder → Shodan → Censys → Slack alert), exfil simulation workflows, certificate recon harvesters, and ATT&CK-tagged report generation. The security workflow library on GitHub has 100+ blueprints across red team, blue team, and AppSec. Worth noting: a critical vulnerability (CVE-2026–21858, CVSS 10) was disclosed in January — update to 1.121.0 if you're running it.
- CrewAI + n8n — combine multi-agent AI crews with n8n workflow orchestration for automated MITRE ATT&CK tagging and report enrichment. Cuts manual reporting effort significantly.
- Cobalt Strike with Malleable C2 — still the professional standard. The ability to make your traffic look like legitimate applications is table stakes for mature engagements.
- Sliver — the open-source C2 that's become the go-to for teams that don't have a Cobalt Strike license. Tyler Ramsbey's course on Sliver tradecraft is a solid practical introduction.
- Havoc — worth having in the lab as a Cobalt Strike alternative, especially for evasion research.
6. Scripting is not optional. It's how you think.
If you can't write Python, Bash, and PowerShell at a functional level, not expert, functional — you are dependent on other people's tools for things you should be able to solve yourself. And that dependency shows up at the worst moments, in the middle of an engagement, when the tooling doesn't do exactly what you need it to do.
Scripting is how you bend the environment to the engagement, not the other way around. It's how you build the custom loader that doesn't trigger the EDR. It's how you automate the recon step that would otherwise take four hours. It's how you chain tools together in a way nobody anticipated.
The bar isn't "write a Python exploit from scratch." The bar is: given a new API, a new tool output, or a new environment, can you write something that interacts with it? Can you modify existing code to change its behavior? Can you build a wrapper that makes two things talk to each other that weren't designed to?
If the answer is no, that's the first thing to fix. Before the certifications, before the lab, before the cloud training.
The honest summary
The skill ceiling for red teaming in 2026 is moving fast — faster than any certification track is keeping up with. The people staying ahead are the ones who are curious enough to go down the rabbit holes before they become mainstream, who understand the why behind the tools they use, and who are paying attention to the new infrastructure (AI agents, cloud identity, agentic workflows) that organizations are deploying at speed without anyone seriously asking what happens when those things get attacked.
The AI security space especially feels like offensive web did about a decade ago: everyone knows it's important, almost nobody knows how to test it, and the practitioners who figure it out early are going to have a significant edge.
Go build something in your home lab. Break it. Understand what broke and why.
That's still the move.