Good afternoon folks. I want to address something that keeps coming up in my inbox and in the forums I hang around. The question of whether AI is going to wipe out cybersecurity jobs is everywhere right now. I see it on Reddit constantly. People who are trying to break into the field asking if it is even worth it anymore. People already in the field wondering if they are going to get automated out of a position in the next few years. I understand the anxiety. The economy is rough and the headlines about AI displacing workers are relentless.
But I am going to give you the honest take on this, the same way I would if you were sitting across from me at a coffee shop asking me directly. AI is not coming to end cybersecurity careers. What it is doing is something a lot more nuanced and honestly a lot more important for you to understand if you are trying to navigate this field right now.
Let's Start With What AI Is Actually Doing in Cybersecurity Right Now
To understand the impact, you have to understand what AI is actually being used for in security operations today. Because this is not theoretical anymore. Organizations are deploying AI-powered tools in very real ways and have been for a few years now.
The most significant application right now is alert triage. SIEM platforms and EDR solutions are increasingly using machine learning to help prioritize and filter the flood of alerts that security teams deal with on a daily basis. If you have ever worked in a SOC, you know that alert fatigue is a massive and very real problem. Analysts spend enormous amounts of time investigating alerts that turn out to be false positives or low-priority noise. AI-assisted triage is getting better at surfacing the things that actually matter and deprioritizing the things that do not.
Beyond alert triage, AI is being used for anomaly detection in network traffic and user behavior, automated threat intelligence correlation, phishing detection and email filtering, and vulnerability prioritization. In some more advanced shops, it is also being used to assist with incident response playbooks by helping analysts quickly pull in relevant context and suggest next steps during an active investigation.
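To make the anomaly-detection idea concrete, here is a deliberately simplified sketch of the core concept behind user-behavior analytics: build a baseline of when a user normally logs in, then flag activity that falls outside it. Real UEBA products use far richer statistical models, and the user history here is entirely made up for illustration.

```python
from collections import Counter

def build_baseline(login_hours):
    """Count how often a user has logged in during each hour of the day."""
    return Counter(login_hours)

def is_anomalous(hour, baseline, min_seen=2):
    """Flag an hour this user has rarely or never been active during."""
    return baseline[hour] < min_seen

# Hypothetical login history for one user: mostly business hours.
history = [9, 10, 9, 14, 16, 10, 11, 9, 15, 17]
baseline = build_baseline(history)

print(is_anomalous(2, baseline))   # 2am login: True (never seen before)
print(is_anomalous(9, baseline))   # 9am login: False (routine)
```

The point of the toy example is the limitation, not the technique: the model can tell you 2am is unusual, but only a human who knows the environment can tell you whether it is a breach or just a developer shipping a late-night build.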
So yes, AI is absolutely changing how security operations work. But what does that actually mean for the people doing the work?
What AI Cannot Replace (And Probably Will Not for a Long Time)
Let me be direct. The jobs AI is making obsolete in security are the ones built almost entirely around mechanical, repetitive tasks. If your entire value as an analyst is closing out low-level alerts one by one based on a static playbook, that is a workflow that is going to get compressed. Not eliminated necessarily, but compressed. The human time spent on it will shrink.
But here is what AI still cannot do particularly well, and what the really strong security professionals actually spend their time on.
Contextual judgment is one of the biggest ones. When an alert comes in that says a user account just accessed a sensitive database at 2am, an AI can flag it. What it cannot do as well is weigh all the contextual factors that a human analyst who knows the environment can weigh. Does that user regularly work odd hours? Is this a developer account that often pulls data for legitimate builds? Is there an unusual pattern in the broader context of what else happened around that event? Human judgment and organizational context still matter enormously for making the right call.
Adversarial creativity is another area where humans still have the edge. The threat actors on the offensive side of this equation are also using AI to develop more sophisticated attacks. They are writing more convincing phishing emails, generating malware variants faster, and finding novel ways to bypass detection systems. Fighting that requires human creativity, threat hunting instincts, and the ability to think like an attacker in ways that current AI models just do not handle well in open-ended scenarios.
Communication and stakeholder management often get overlooked in these conversations about AI and jobs. A significant part of working in cybersecurity, especially once you get past the entry level, is explaining risks to people who are not security professionals. Convincing a CFO why the company needs to invest in a particular security control. Walking legal through the implications of a data breach. Running a tabletop exercise with the executive team. None of that is getting automated anytime soon.
Then there is the area of red teaming and penetration testing, which is almost entirely about creative adversarial thinking in complex, context-dependent environments. AI can help with some stages of this work, but an experienced penetration tester brings intuition and creativity to engagements that AI-generated scripts cannot fully replicate. The high end of offensive security is still very much a human domain.
The Roles That AI Is Actually Creating
Here is the part of this conversation that I feel like almost nobody is talking about, and it is frankly the most important piece. AI is not just disrupting cybersecurity jobs. It is creating entirely new ones. And those new roles are going to pay well because the skills they require are genuinely scarce right now.
AI security is emerging as its own specialty. Organizations are deploying AI models internally for everything from customer service to financial analysis to code review. Securing those AI systems, understanding how they can be attacked, and making sure sensitive training data is protected has become a real and growing job function. The concept of adversarial machine learning, where you are essentially trying to understand how an attacker might manipulate or subvert an AI model, is brand new territory that needs people with both security fundamentals and AI literacy.
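If adversarial machine learning sounds abstract, here is a toy illustration of the evasion side of it: a deliberately brittle keyword-based phishing scorer, and the kind of trivial character substitution an attacker might use to slip past it. The detector and both messages are invented for the example; real evasion attacks against real models are far more sophisticated, but the cat-and-mouse dynamic is the same.

```python
# A deliberately naive detector: score a message by counting suspicious words.
SUSPICIOUS_TERMS = {"password", "urgent", "verify"}

def naive_phish_score(text):
    """Count suspicious keywords in a message -- brittle by design."""
    return sum(1 for word in text.lower().split() if word in SUSPICIOUS_TERMS)

original = "urgent verify your password now"
evasive  = "urg3nt ver1fy your passw0rd now"  # trivial character substitution

print(naive_phish_score(original))  # 3 -- flagged
print(naive_phish_score(evasive))   # 0 -- sails right through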
Detection engineering has been elevated in importance by AI. As AI tools automate the lower-level triage work, the human energy that used to go into that work now needs to go into building better detection logic in the first place. Detection engineers write and tune the rules, models, and logic that tell security tools what to look for. That requires a combination of security domain expertise, an understanding of attacker techniques, and enough technical skill to translate that knowledge into actual detections. It is a skill set that is increasingly valued and increasingly rare.
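To give you a feel for what "translating attacker knowledge into actual detections" looks like, here is a minimal sketch of one classic detection: several failed logins followed by a success, the signature of a brute-force attempt that landed. The event format and threshold are invented for the example; in practice you would write this as a rule in your SIEM's query language or a shareable format like Sigma.

```python
def detect_bruteforce(events, threshold=5):
    """
    Flag users who had >= threshold failed logins immediately
    followed by a successful one. Events are (user, outcome)
    tuples in chronological order.
    """
    consecutive_fails = {}
    flagged = []
    for user, outcome in events:
        if outcome == "fail":
            consecutive_fails[user] = consecutive_fails.get(user, 0) + 1
        elif outcome == "success":
            if consecutive_fails.get(user, 0) >= threshold:
                flagged.append(user)
            consecutive_fails[user] = 0  # success resets the streak
    return flagged

events = [("alice", "fail")] * 6 + [("alice", "success"), ("bob", "success")]
print(detect_bruteforce(events))  # ['alice']
```

The logic is trivial; the hard part, and the part that pays, is knowing which attacker behaviors are worth detecting, what thresholds fit your environment, and how to keep the false-positive rate low enough that analysts trust the alert.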
Cloud security has an AI dimension now as well. Organizations running AI workloads in the cloud need people who understand how to secure those environments specifically. That means knowing how to lock down AI pipelines, protect the data being fed into models, monitor for unusual access patterns around AI infrastructure, and understand the shared responsibility model as it applies to AI-as-a-service products.
There is also a growing need for people who can manage and govern AI tools within security teams themselves. Deciding which AI-powered security tools are trustworthy, evaluating vendor claims, understanding what data those tools are ingesting and how it is being handled: all of that is a governance function that requires both security knowledge and a critical eye toward AI systems specifically.
What the Job Numbers Actually Say
Let's set the narrative aside for a second and just look at where the market actually stands. The global cybersecurity workforce shortage right now sits at roughly 4.8 million unfilled positions, per ISC2's annual workforce study. That number is not going down. The threat landscape is expanding faster than the pipeline of trained security professionals can keep up with. AI is helping security teams do more with fewer people in some specific areas, but it is not closing that gap in any meaningful way at the macro level.
More attackers, more attack surface from cloud adoption and connected devices, more regulatory requirements, and more sophisticated threats. All of those factors are driving demand upward. The idea that AI is going to flip that dynamic in the near term is not supported by what the actual hiring data shows.
What the data does show is a shift in which skills employers are prioritizing. Cloud security expertise is in extremely high demand and genuinely hard to find. Incident response experience is consistently listed as one of the hardest roles to fill. Threat hunting, detection engineering, and GRC are all seeing strong demand. These are human-intensive, judgment-heavy roles that complement AI tooling rather than compete with it.
How to Position Yourself Given All of This
Alright, so what does this actually mean for your career? Whether you are trying to break in or already working in the field, here is the practical takeaway.
Stop building a skill set that is purely mechanical and start building one that requires judgment.
If everything you know how to do is something an AI tool can now do with a click, that is a problem. But if you understand the underlying security concepts well enough to tell when an AI tool is giving you bad output, to investigate the things it surfaces intelligently, and to make real decisions with real context, that is where the value is.
Get comfortable with AI tools themselves.
This is not about becoming an AI researcher. It is about being someone who can effectively work alongside AI-assisted security tools. Understand what your SIEM or EDR is doing when it uses machine learning to prioritize alerts. Know how to evaluate an AI-generated finding critically rather than just accepting or dismissing it. That literacy is increasingly expected at even the entry level.
Invest in cloud security knowledge.
Regardless of your specialty, cloud is the environment where most of the interesting security work is happening right now and where a lot of AI-related security challenges live. Understanding how AWS, Azure, or Google Cloud work at a meaningful level is not optional for anyone trying to stay relevant in this field over the next several years.
Build your communication skills deliberately. I say this all the time and I will keep saying it because it keeps being true. The people who rise in this field are not always the most technical ones. They are the ones who can translate what is happening technically into language that non-technical leadership can understand and act on. AI is not going to take that from you.
And finally, do not let the AI noise make you feel like it is not worth getting started. If you are staring down the entry-level cybersecurity path right now feeling uncertain, the demand for people who understand security fundamentals is not going away. The fundamentals still matter. Get your Security+. Learn how networks actually work. Understand how to investigate a suspicious email or analyze a log file. Build a home lab and get your hands dirty. Those basics are still the foundation, and AI is not going to make them irrelevant.
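If "learn to analyze a log file" sounds vague, here is the kind of first exercise I mean for a home lab: parse auth-log-style lines and tally failed SSH login attempts per source IP. The log lines below are fabricated (loosely modeled on OpenSSH's format, with RFC 5737 documentation addresses), but the skill of pulling signal out of raw logs with a regex is exactly the foundation AI tooling sits on top of.

```python
import re
from collections import Counter

# Fabricated auth-log lines, loosely modeled on OpenSSH's format.
LOG = """\
Jan 12 03:14:01 host sshd[811]: Failed password for root from 203.0.113.9 port 53211 ssh2
Jan 12 03:14:05 host sshd[811]: Failed password for root from 203.0.113.9 port 53214 ssh2
Jan 12 03:15:30 host sshd[812]: Accepted password for alice from 198.51.100.7 port 40022 ssh2
Jan 12 03:16:02 host sshd[813]: Failed password for admin from 203.0.113.9 port 53299 ssh2
"""

FAILED = re.compile(r"Failed password for \S+ from (\d+\.\d+\.\d+\.\d+)")

def failed_logins_by_ip(log_text):
    """Tally failed password attempts per source IP address."""
    return Counter(FAILED.findall(log_text))

print(failed_logins_by_ip(LOG))  # Counter({'203.0.113.9': 3})
```

Ten minutes with a script like this against your own lab's logs teaches you more about what your tools are actually doing than any vendor demo will.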
The Bottom Line
AI is a tool. A very powerful one that is changing how security teams operate. But it is not a replacement for the judgment, creativity, communication skills, and domain expertise that experienced security professionals bring. The jobs that survive and thrive alongside AI are the ones built on those things.
The anxiety is understandable. But the data does not support the doom narrative. Cybersecurity is still one of the most resilient career paths in technology, and if you are building your skill set with an eye toward judgment-heavy, context-dependent work, you are going to be fine.
If you are actively prepping for cybersecurity interviews right now, whether for your first role or your next one, the interview prep guide I put together covers the technical and behavioral questions that actually come up when hiring managers are in the room. Check out my previous posts!
Good luck out there.