Intro
Answer these ten questions before implementing any actions. They reveal the gap between policy assumptions and operational reality, and should drive reprioritisation of everything that follows.

Full Risk Register
| Risk | Category | Description | Severity |
|---|---|---|---|
| Accelerated threat exploitation | Threat | AI models have been discovering vulnerabilities and creating exploits for over a year. Mythos accelerates this: each patch becomes an exploit blueprint via AI-accelerated patch-diffing and reverse engineering of fixes. | CRITICAL |
| Insufficient AI automation capabilities | Capability gap | Defenders operate at human speed while attackers use AI agents freely. The asymmetry is cultural as much as technical: teams that do not adopt AI agents cannot match AI-augmented threats, regardless of their technical skill. | CRITICAL |
| Unmanaged AI agent attack surface | Vulnerability | Agents are privileged, insecure by default, and the primary attacker focus. MCP servers, VS Code extensions, and agentic skills introduce supply chain risk not covered by existing controls. | CRITICAL |
| Inadequate detection and response velocity | Capability gap | AI has reduced the time needed to construct complex attacks. Detection, SIEM correlation, and containment authorisation were designed for human-paced threats and have not been upgraded to match. | CRITICAL |
| Cybersecurity risk model outdated | Governance | Security reporting metrics built on pre-AI assumptions may misrepresent actual exposure, leading to underfunding of critical controls and inaccurate business reporting. | CRITICAL |
| Incomplete asset and exposure inventory | Vulnerability | Attackers can now scan an entire OS codebase at accessible cost. Without a continuously updated inventory, controls have inherent gaps. Shadow agents proliferating to non-developer users fragment central IT visibility further. | HIGH |
| Unsecured software delivery pipeline | Vulnerability | Code produced by humans and AI agents ships without consistent LLM-driven security review. At the same defect rate, AI-generated code introduces vulnerabilities at higher volume than manual development simply because more code ships. | HIGH |
| Network architecture insufficient for lateral movement containment | Vulnerability | Flat or insufficiently segmented networks give every successful exploit broad leverage. AI-driven attacks automate multi-hop lateral movement faster and more creatively than manual attackers. | HIGH |
| Continuous vulnerability management maturity gap | Capability gap | Quarterly penetration tests and reactive patching cannot keep pace with continuous AI-driven discovery. CVE/NVD infrastructure was built for dozens of critical CVEs per month, not hundreds. | HIGH |
| Threat detection dependent on lagging intelligence | Capability gap | Threat intelligence has been falling behind AI-accelerated vulnerability discovery for over a year. The CVE system may not scale to AI-generated discovery rates, and novel vulnerabilities have no KEV listing by definition. | HIGH |
| Innovation governance and oversight deficit | Governance | Without a cross-functional governance mechanism, defensive AI technology onboarding hits approval friction. AI-accelerated timelines give this friction a harder deadline. | HIGH |
| Regulatory and liability exposure from AI-discovered vulnerabilities | Governance | The EU AI Act (August 2026) introduces new requirements. Once AI scanning is broadly available, not using it may constitute negligence under the reasonableness test in existing and emerging regulations. | HIGH |
| AI hype causing systematic inaction | Governance | Signal-to-noise collapse in threat and vendor guidance. Teams that dismiss the shift as hype, or exhaust their attention on low-signal content, will miss critical threat landscape changes. | MEDIUM |
Priority Actions — Aggressive Timetable
Actions are assigned to one of three phases. Note: some recommendations may appear contradictory — for example, the need to patch faster directly competes with supply chain cooldown requirements after the Glasswing wave. These tensions require nuanced, case-by-case decision-making, not blanket rules.
Short Term Action Plan
Point agents at your code and pipelines
Turn LLM capabilities inward on your own code and dependencies immediately. Ask an agent for a security review of any codebase today. Build toward full CI/CD pipeline audit. All code — human or AI-generated — must pass LLM-driven review before merge.
CRITICAL
Commercial: Claude Code Security (Anthropic), Codex Security (OpenAI). Open source: OpenAnt (Knostic), raptor framework, exploitation-validator agentic skill, Trail of Bits agentic skills.
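As a minimal sketch of what a pre-merge gate could look like, the following assumes the review agent emits a JSON list of findings, each with a `severity` field; the actual output schema of tools like Claude Code Security or OpenAnt will differ, so treat the field names and thresholds as assumptions to adapt:

```python
import json

# Severity ranking used to gate merges; the ordering is conventional,
# the cut-off is a policy choice.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_merge(findings_json: str, block_at: str = "high") -> bool:
    """Return True if the merge may proceed.

    `findings_json` is assumed to be a JSON array of findings with a
    'severity' field. Anything at or above `block_at` blocks the merge.
    """
    findings = json.loads(findings_json)
    threshold = SEVERITY_RANK[block_at]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f.get('title', 'untitled')} ({f['severity']})")
    return not blocking
```

Wired into CI, a `False` return (or non-zero exit) fails the pipeline, making LLM review a hard gate rather than an advisory step.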
Mandate AI agent adoption across all SOC functions
Formalise coding agent usage in all security functions — GRC, incident response, audit, triage — with mandatory security controls and oversight. Optional adoption programmes do not overcome cultural barriers. Make it standard practice with documented guardrails.
CRITICAL
Establish a documented AI agent policy covering approved tools, guardrails, logging requirements, and prohibited actions. Run a team-wide onboarding session within the first week. Measure adoption weekly.
Establish innovation and acceleration governance
Stand up a cross-functional mechanism (Security, Legal, Engineering) to evaluate new offensive threats and fast-track defensive technology onboarding. Without this, every other action hits approval friction that gives attackers the advantage.
CRITICAL
Define a fast-track review path for AI-based security tools: initial risk review within 48 hours, procurement decision within 2 weeks. Assign named owners from each function. Document the process publicly within the organisation.
Prepare for continuous patching — Glasswing wave
40+ vendors in the Glasswing early access programme are preparing patches now. Build triage and deployment capacity for a potential flood of simultaneous critical CVE disclosures — comparable to multiple supply chain incidents in a two-week window.
CRITICAL
Run a tabletop exercise simulating three critical CVEs disclosed simultaneously in one week. Validate automated patch testing pipelines. Identify which systems require manual change approval and which can be fast-tracked.
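A triage ordering for a simultaneous-disclosure flood can be sketched as a simple composite score. The weights below are illustrative assumptions, not a standard; the point is that exposure and known exploitation should dominate raw CVSS during a flood:

```python
from dataclasses import dataclass

@dataclass
class Cve:
    cve_id: str
    cvss: float            # base score, 0-10
    internet_facing: bool  # is an affected asset reachable from the internet?
    known_exploited: bool  # e.g. listed on CISA KEV, where a listing exists

def triage_score(c: Cve) -> float:
    """Composite priority score; weights are assumptions to tune."""
    score = c.cvss
    if c.internet_facing:
        score += 3.0   # exposure outranks severity during a flood
    if c.known_exploited:
        score += 5.0   # active exploitation outranks everything
    return score

def patch_order(cves):
    """Highest-priority CVEs first."""
    return sorted(cves, key=triage_score, reverse=True)
```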
Update risk models and business reporting
Review and update security risk metrics, reporting, and business risk calculations to reflect AI-accelerated exploit timelines. Pre-AI assumptions about patch windows, exploit scarcity, and incident frequency no longer hold. Brief stakeholders before the next quarterly reporting cycle.
CRITICAL
Identify all existing risk metrics that assume multi-day exploit windows. Update MTTR, MTTD, and patch SLA targets. Update risk appetite statements. Brief the board before the next reporting cycle.
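Recomputing MTTR against AI-era targets is straightforward arithmetic. This sketch assumes you can export (detected, resolved) timestamp pairs from your ticketing system; the SLA target itself remains a policy decision:

```python
from datetime import datetime, timedelta

def mttr_hours(incidents):
    """Mean time to resolve, in hours, over (detected_at, resolved_at)
    datetime pairs."""
    total = sum((resolved - detected for detected, resolved in incidents),
                timedelta())
    return total.total_seconds() / 3600 / len(incidents)

def meets_sla(incidents, target_hours):
    """Check the current MTTR against an AI-era SLA target."""
    return mttr_hours(incidents) <= target_hours
```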
Engage collective defence networks
Attackers operate as syndicates. Engage ISACs, CERTs, and standards bodies to share threat intelligence and coordinate response. Consider organisations below the Cyber Poverty Line who cannot implement these measures independently.
HIGH
Identify the relevant sector ISAC or CERT. Establish or refresh information-sharing agreements. Assign a named liaison for ongoing threat intelligence sharing. Report back to the team weekly.
Mid Term Action Plan
Defend your agents
Agents are privileged, insecure by default, and the primary attacker focus. They are not covered by existing controls. Before deploying agents near production: define scope boundaries, blast-radius limits, escalation logic, and human override mechanisms. Audit the agent harness with the same rigour as the agent's permissions.
CRITICAL
Apply least-privilege to all agent tool permissions. Log all agent actions centrally. Test agent behaviour against adversarial prompts before production deployment. Do not wait for industry governance frameworks — define your own now.
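A least-privilege tool gate can be sketched as a wrapper that default-denies and logs every invocation, allowed or not. The tool names and the logging sink here are placeholders for your own agent harness:

```python
from datetime import datetime, timezone

class AgentToolGate:
    """Default-deny tool gate for an agent harness: only allowlisted
    tools run, and every attempt is logged. The append-only list stands
    in for a central log sink (SIEM, audit store)."""

    def __init__(self, allowed_tools, audit_log):
        self.allowed = set(allowed_tools)
        self.audit_log = audit_log

    def invoke(self, agent_id, tool, func, *args, **kwargs):
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "tool": tool,
            "allowed": tool in self.allowed,
        }
        self.audit_log.append(entry)  # log before deciding, so denials are visible
        if not entry["allowed"]:
            raise PermissionError(f"agent {agent_id} denied tool {tool}")
        return func(*args, **kwargs)
```

Denied calls are logged before the exception is raised, so the audit trail captures attempted privilege overreach, which is itself a detection signal.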
Inventory and reduce attack surface
Use agents to accelerate and continuously update your inventory. Start with critical internet-facing systems. Generate real SBOMs. Shut down unneeded or unmaintained functionality. Phase out suppliers that no longer comply with your vulnerability management requirements. Isolate or air-gap at-risk systems.
HIGH
Use an LLM agent to parse dependency manifests and identify end-of-life components. Automate SBOM generation in CI/CD. Target full inventory coverage within 45 days and continuous updates thereafter.
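The manifest-parsing step is easy to automate even without an agent. The sketch below handles a pip-style `name==version` manifest, with the end-of-life index hardcoded as a stand-in for a real feed (for example, endoflife.date exports or vendor advisories):

```python
def parse_manifest(text):
    """Parse a pip-style `name==version` manifest, ignoring comments
    and blank lines."""
    deps = {}
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # strip trailing comments
        if "==" in line:
            name, version = line.split("==", 1)
            deps[name.strip().lower()] = version.strip()
    return deps

def flag_eol(deps, eol_index):
    """Return dependency names present in the end-of-life index."""
    return sorted(name for name in deps if name in eol_index)
```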
Harden your environment — the basics
Implement egress filtering (it blocked every public Log4j exploit). Enforce deep segmentation and zero trust. Lock down the dependency chain. Mandate phishing-resistant MFA for all privileged accounts. Use AI to accelerate software minimisation: minimise base OS images and replace third-party libraries with framework primitives.
HIGH
Egress filtering is the highest-leverage single control for containing AI-orchestrated exfiltration. Prioritise it first. Every boundary you add increases attacker cost and slows lateral movement.
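The policy half of egress filtering is a default-deny allowlist decision. This sketch models only that decision (enforcement belongs in the firewall or proxy), with a `*.domain` wildcard as an assumed rule convention:

```python
def egress_allowed(dest_host, dest_port, rules):
    """Default-deny egress policy check.

    `rules` is an iterable of (pattern, port). A pattern like
    '*.github.com' matches any subdomain; otherwise the hostname must
    match exactly. Anything not matched is denied.
    """
    for pattern, port in rules:
        if port != dest_port:
            continue
        if pattern.startswith("*."):
            if dest_host.endswith(pattern[1:]):  # '.github.com' suffix
                return True
        elif dest_host == pattern:
            return True
    return False
```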
Build a deception capability
Deception is attack-tool and vulnerability independent — it identifies attacks based on TTPs, not signatures. This works even when CVE intelligence lags behind AI-discovered zero-days. Deploy canaries and honey tokens. Layer behavioural monitoring. Pre-authorise containment actions. Build response playbooks that execute at machine speed.
HIGH
Deploy canary tokens in high-value file stores, configuration repositories, and cloud credential locations. Ensure all canary alerts are pre-wired to automated containment actions without requiring human authorisation. Document expected alert volumes to avoid alert fatigue.
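The pre-wiring can be sketched as an alert handler that maps canary types straight to containment callbacks, with no human in the loop. The alert fields and action names are illustrative hooks into a SOAR/EDR integration:

```python
def handle_canary_alert(alert, containment):
    """Canary alerts are pre-authorised for automated containment: a
    token firing means something touched an asset no legitimate process
    should. `containment` maps action names to callables."""
    # Always isolate the host the canary fired from.
    results = [containment["isolate_host"](alert["source_host"])]
    # Canaried cloud credentials get revoked as well.
    if alert.get("token_type") == "cloud_credential":
        results.append(containment["revoke_credential"](alert["token_id"]))
    return results
```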
Request headcount and budget for reserve capacity
The volume of vulnerability disclosures will exceed anything experienced before. Request additional headcount and contractor capacity before the Glasswing wave hits. Experienced staff are irreplaceable on short timescales. Burnout and attrition are a direct operational risk.
CRITICAL
Model three staffing scenarios: current capacity, Glasswing wave, and sustained elevated volume. Present the gap to leadership with a specific headcount and budget ask. Include contractor retainer options for surge capacity.
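A back-of-envelope capacity model is enough to frame the headcount ask; every parameter below is an assumption to replace with your own telemetry:

```python
def staffing_gap(disclosures_per_week, triage_hours_each, analysts,
                 hours_per_analyst_week=30):
    """Model triage demand vs. analyst capacity for one scenario.

    Run it three times (current, Glasswing wave, sustained elevated
    volume) and present the largest gap as the headcount ask.
    """
    demand = disclosures_per_week * triage_hours_each
    capacity = analysts * hours_per_analyst_week
    shortfall = max(0.0, demand - capacity)
    return {
        "demand_hours": demand,
        "capacity_hours": capacity,
        "additional_analysts_needed": shortfall / hours_per_analyst_week,
    }
```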
Long Term Action Plan
Build an automated incident response capability
Improve detection engineering and incident response to be systemic and, where possible, autonomous. Alert triage, SIEM correlation, and containment authorisation were designed for human-paced threats; they must be rearchitected for machine-speed attacks.
HIGH
Start with pre-authorised containment for highest-confidence alert types (e.g., confirmed credential compromise, ransomware IOCs). Expand automation scope quarterly. Track automation rate as a programme metric.
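Pre-authorised containment reduces to a lookup of alert type against a minimum-confidence threshold; the scope of that table is what you expand quarterly. A minimal sketch, with alert type names as assumptions:

```python
def containment_decision(alert_type, confidence, preauthorised):
    """Decide whether containment may run without human sign-off.

    `preauthorised` maps alert type -> minimum detection confidence.
    Start with a narrow table (e.g. confirmed credential compromise,
    ransomware IOCs) and widen it as trust in the detections grows.
    """
    threshold = preauthorised.get(alert_type)
    if threshold is not None and confidence >= threshold:
        return "auto_contain"
    return "escalate_to_human"
```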
Stand up a VulnOps function
There is no long-term alternative to a permanent Vulnerability Operations (VulnOps) function — staffed and automated like DevOps, but for autonomous vulnerability research and remediation. VulnOps owns continuous discovery of zero-days across the entire software estate and establishes automated remediation pipelines.
CRITICAL
VulnOps is not a renamed vulnerability management team. It requires dedicated engineering capacity, LLM-powered scanning infrastructure, automated triage pipelines, and integration with both development and incident response workflows. Design triage discipline in from day one.
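Triage discipline starts with deduplication, so re-discovered issues collapse into one work item instead of flooding the queue. The fingerprint fields below are assumptions about your finding schema:

```python
import hashlib

def fingerprint(finding):
    """Stable fingerprint for a finding; repo, file, and rule are
    assumed schema fields -- adjust to whatever your scanners emit."""
    key = f"{finding['repo']}|{finding['file']}|{finding['rule']}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def enqueue(findings, queue):
    """Add only previously-unseen findings to the work queue.

    `queue` maps fingerprint -> finding; returns the count of new items.
    """
    new = 0
    for f in findings:
        fp = fingerprint(f)
        if fp not in queue:
            queue[fp] = f
            new += 1
    return new
```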
EU AI Act compliance preparation
The EU AI Act (August 2026) introduces automated audit, incident reporting, and cybersecurity requirements around AI systems. Not using available AI tools for defensive scanning may constitute negligence under the reasonableness test in existing regulations.
HIGH
Engage legal and compliance to map EU AI Act requirements to existing security controls. Identify gaps in AI system documentation, incident reporting processes, and audit trail capabilities. Build a gap closure roadmap.
Key Metrics to Update
The following metrics should be reviewed and updated as part of the "Update risk models and business reporting" action above. Pre-AI baselines are no longer valid benchmarks.

Conclusions
- AI-based attacks represent a structural shift in how offence and defence work, and it will not reverse. The cost and capability floor to exploit discovery is dropping, the time between disclosure and weaponisation is compressing toward zero, and capabilities that previously required nation-state resources are becoming broadly accessible.
- A Mythos-ready programme rests on four capabilities: engineering a resilient architecture that limits attackers' ability to exploit discovered vulnerabilities and contains the impact when they are exploited; discovering more vulnerabilities yourself, before any adversary or vendor advisory does; responding to incidents quickly and at scale, containing the impact to minimise business disruption; and accelerating your security programme and staff capabilities with AI agents, starting today.
- Y2K was a systemic threat with a hard deadline, and the industry met it through coordinated, disciplined effort. This is the same kind of problem, requiring the same kind of response, with more powerful tools available to defenders. Building a Mythos-ready security programme is not about reacting to one model or announcement. It is about permanently closing the gap between how fast vulnerabilities are found and how fast your organisation can respond.
Based on: The 'AI Vulnerability Storm': Building a 'Mythos-ready' Security Program — CSA CISO Community, SANS Institute, [un]prompted, and the OWASP Gen AI Security Project. Version 0.95, 18 April 2026.
Need Help?
The functionality discussed in this post, and much more, is available via the SOCFortress platform. Let SOCFortress help you and your team keep your infrastructure secure.
Website: https://www.socfortress.co/
Contact Us: https://www.socfortress.co/contact_form.html