Six CVEs in January. Fifteen in February. Thirty-five in March. That's not a random spike — it's the trajectory of vulnerabilities directly attributable to AI-generated code, tracked by Georgia Tech's Vibe Security Radar across 43,000 security advisories. And the researchers estimate the real number is 5–10x higher, because most AI coding tools leave no commit metadata behind.

Vibe coding — describing what you want and letting AI write the code — went from a viral tweet to a mainstream development practice in about a year. It's fast, it's accessible, and it's shipping applications with security gaps that would make a junior developer wince.

If your AppSec program was built for human-speed development, you have a structural problem.

The Numbers Are Worse Than You Think

Georgetown's Center for Security and Emerging Technology ran formal verification across code generated by five major LLMs responding to 67 security-relevant prompts mapped to MITRE's Top 25 CWEs. Nearly half the code snippets contained at least one vulnerability. XSS appeared in 86% of relevant samples. Log injection hit 88%.

Veracode's 2025 GenAI Code Security Report tested over 100 LLMs across four languages and found AI-generated code contains 2.74x more vulnerabilities than human-written code. CodeRabbit's analysis of 320 AI-authored pull requests confirmed the pattern: 1.88x more improper password handling, 1.91x more insecure direct object references, 1.82x more insecure deserialization.

Apiiro's enterprise data from Fortune 50 companies found AI-generated code contains 322% more privilege escalation paths and 153% more design flaws than human-written code. While syntax errors dropped 76% and logic bugs fell 60%, the security architecture got significantly worse.

And here's the stat that should keep AppSec leaders up at night: Stanford's Dan Boneh and his team found that developers with AI assistant access wrote significantly less secure code than those without — and were more likely to believe their code was secure. The AI creates a false sense of completion that suppresses the developer's security instinct.

Your SAST Pipeline Has a Blind Spot

Traditional static analysis was designed for a world where code moved at human speed. Vibe coding breaks that assumption in two ways.

First, the volume. When developers can generate hundreds of lines per minute, your SAST queue becomes a bottleneck overnight. Pull requests pile up. Scan times increase. And the pressure to ship means findings get deprioritized.

Second — and more critically — traditional SAST misses five categories of AI-specific risk that don't exist in human-written code:

Semantic flaws. AI generates code that compiles, passes tests, and quietly does the wrong thing: a function that validates input, for example, but skips authorization entirely. SAST catches known vulnerability patterns; it can't flag security logic that simply isn't there.
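A minimal sketch of that failure mode, with hypothetical function and data names invented for illustration. Both versions validate input; only one performs an authorization check, and nothing in the insecure version pattern-matches as a vulnerability:

```python
# Illustrative semantic flaw: input validation present, authorization absent.
# All names here are hypothetical, not from any real codebase.

DOCUMENTS = {"doc-1": {"owner": "alice", "deleted": False}}

def delete_document_ai(user: str, doc_id: str) -> bool:
    """Typical AI output: validates the input format, skips authorization."""
    if not doc_id.startswith("doc-"):       # input validation: present
        raise ValueError("bad id")
    DOCUMENTS[doc_id]["deleted"] = True     # ownership check: missing
    return True

def delete_document_safe(user: str, doc_id: str) -> bool:
    """What a reviewer should insist on: same validation, plus ownership."""
    if not doc_id.startswith("doc-"):
        raise ValueError("bad id")
    if DOCUMENTS[doc_id]["owner"] != user:  # the check SAST can't demand
        raise PermissionError("not your document")
    DOCUMENTS[doc_id]["deleted"] = True
    return True
```

The first version is an insecure direct object reference: any user can delete any document, and no scanner rule fires because no dangerous pattern exists, only a missing one.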

Hallucinated dependencies. AI invents package names that don't exist — and attackers register them in advance. This is called slopsquatting, and it requires no phishing, no credential theft. It exploits the statistical predictability of AI hallucination outputs.
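The defense is boring and effective: refuse to install anything not already vetted. A minimal sketch of a pre-install gate, assuming a plain-text requirements list and a team-maintained allowlist (the package names, including the plausible-sounding fake, are invented for illustration):

```python
# Sketch of a package allowlist gate against slopsquatting: any requirement
# not on the approved list is held for human review instead of installed.

def unapproved_packages(requirements: list[str], allowlist: set[str]) -> list[str]:
    """Return requirement names that are not on the team allowlist."""
    flagged = []
    for line in requirements:
        # Strip version pins; normalize case the way package indexes do.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name and not name.startswith("#") and name not in allowlist:
            flagged.append(name)
    return flagged

APPROVED = {"requests", "flask", "boto3"}
# "flask-security-utils2" stands in for a hallucinated package name.
reqs = ["requests==2.32.0", "flask>=3.0", "flask-security-utils2"]
```

Run in CI before any install step, this turns a hallucinated dependency from a silent supply-chain entry point into a blocked build.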

Authorization gaps. AI models, trained on years of permissive example code, default to over-permissive IAM policies. Prompted to write a CloudFormation template for a Lambda that needs S3 access, coding agents routinely emit s3:* across all buckets. SAST flags known patterns but misses architectural authorization gaps.
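A simple policy linter can at least catch the wildcard shape of the problem. This sketch checks only Action and Resource wildcards in a parsed policy document; real policy analysis needs far more (conditions, NotAction, ARN scoping), so treat it as a first-pass gate, not an authorization audit:

```python
# Sketch of a wildcard-grant linter for IAM-style policy documents.
# Flags Allow statements with "*" or "service:*" actions, or "*" resources.

def overly_broad_statements(policy: dict) -> list[dict]:
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt)
    return flagged

# The over-permissive shape AI agents tend to emit:
bad = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
# The least-privilege shape a reviewer should push for:
good = {"Statement": [{"Effect": "Allow",
                       "Action": ["s3:GetObject"],
                       "Resource": "arn:aws:s3:::app-bucket/*"}]}
```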

Pipeline manipulation. The Rules File Backdoor attack targets AI coding assistants directly. Attackers inject hidden Unicode characters into project configuration files that instruct the AI to generate insecure code. The developer never sees the instruction. The AI follows it.
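Detection is mechanical once you look for it. A minimal CI sketch that scans rules and config file contents for zero-width and bidirectional-control characters; the character set here is a reasonable starting point, not an exhaustive list:

```python
# Sketch of a CI check for hidden Unicode in AI rules/config files.
# Flags invisible characters that can smuggle instructions an assistant
# will read but a developer reviewing the file will never see.

SUSPICIOUS = {
    "\u200b",                                   # zero-width space
    "\u200c", "\u200d",                         # zero-width non-joiner / joiner
    "\u2060",                                   # word joiner
    "\ufeff",                                   # BOM used mid-file
    *(chr(c) for c in range(0x202A, 0x202F)),   # bidi embedding/override controls
    *(chr(c) for c in range(0x2066, 0x206A)),   # bidi isolate controls
}

def find_hidden_chars(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint) for each suspicious character found."""
    return [(i, f"U+{ord(ch):04X}") for i, ch in enumerate(text)
            if ch in SUSPICIOUS]
```

Fail the build on any hit in files your AI tooling reads: rules files, prompts, editor config, MCP manifests.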

Credential exposure at scale. AI-assisted commits expose secrets at 3.2% versus a 1.5% baseline for human-only code — roughly double the rate. GitGuardian counted 28.65 million hardcoded secrets in public GitHub in 2025, a 34% year-over-year increase. AI services specifically saw 1.27 million leaked keys, up 81%.
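A pre-commit scan over added diff lines is the cheapest control here. The regexes below are illustrative approximations of widely published key shapes, not vendor-guaranteed formats; a real deployment should use a maintained detector and tune patterns against its own credential inventory:

```python
# Sketch of a pre-commit secrets scan over a unified diff: only lines being
# added ("+") are checked. Patterns are illustrative, not authoritative.
import re

PATTERNS = {
    "openai-style key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "aws access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic assignment": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{12,}['\"]"),
}

def scan_diff(diff_text: str) -> list[str]:
    """Return the names of patterns that match lines this commit adds."""
    hits = []
    for line in diff_text.splitlines():
        if not line.startswith("+"):
            continue  # unchanged or removed lines aren't this commit's leak
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append(name)
    return hits
```

Blocking at pre-commit matters because a secret that reaches a remote, even briefly, must be treated as compromised and rotated.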

What Actually Happened in Production

Escape.tech scanned 5,600 publicly deployed vibe-coded applications built on platforms like Lovable, Bolt.new, and Base44. The findings: 2,000+ critical vulnerabilities, 400 exposed secrets including API keys and access tokens, and 175 instances of PII exposure including medical records and payment data. These were live production applications, not test environments.

Most of these vulnerabilities were accessible without authentication. Supabase service keys retrievable from frontend bundles. Missing row-level security on database access. Client-side authentication that provided zero actual protection.

The Purple Book Community's 2026 State of AI Risk Management survey found that 70% of organizations have confirmed or suspected AI-generated vulnerabilities in production. Nearly three-quarters — 73% — say AI-assisted development is increasing software velocity beyond the pace security teams can review.

The Three-Layer Governance Framework

Organizations that implement controls at all three layers see measurable results: a 2026 ISACA framework study documented a 36% reduction in remediation time with no meaningful loss of developer velocity. Here's the framework.

Layer 1: Tool Controls. Establish an approved AI coding tool list and enforce it. Scan all AI configuration files for hidden Unicode characters as part of CI/CD — the Rules File Backdoor is real and it's targeting your developers' IDEs. Audit MCP server permissions. Treat AI coding agents as high-risk identities: least-privilege access, rate-limited API calls, and logging with the same fidelity as privileged human accounts.

Layer 2: Code Gates. Mandatory SAST blocking gates on all AI-assisted pull requests. Deploy secrets detection as pre-commit hooks, not post-push — with AI-service credential signatures for OpenAI keys, Anthropic keys, and cloud provider patterns. Enforce dependency lockfiles and package allowlists so unrecognized package names trigger review rather than installation. Set a review threshold for PRs with high AI-generated code proportion — many teams use 60% as the trigger for mandatory security-focused review. Track AI code provenance so when a vulnerability surfaces, you can identify the full scope of related AI-generated code sharing the same pattern.
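The proportion threshold reduces to a few lines once provenance exists. This sketch assumes you can mark each changed line as AI-generated or not (via commit trailers, IDE telemetry, or tool attribution); collecting that signal is the hard part, and the 60% cutoff is a team policy knob, not a standard:

```python
# Sketch of the AI-proportion review gate: given per-line provenance for a
# PR (True = AI-generated), decide whether it needs mandatory security review.

AI_REVIEW_THRESHOLD = 0.60  # policy choice; many teams start here

def needs_security_review(line_provenance: list[bool]) -> bool:
    """True when the AI-generated share of changed lines meets the threshold."""
    if not line_provenance:
        return False
    ai_fraction = sum(line_provenance) / len(line_provenance)
    return ai_fraction >= AI_REVIEW_THRESHOLD
```

Wired into the merge pipeline, this makes the review requirement automatic rather than dependent on a reviewer noticing how a PR was written.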

Layer 3: Process Controls. Mandatory human review before merging, with a defined checklist for AI-generated code: authorization checks, least-privilege IAM verification, secrets handling, security headers, input validation on external inputs. Extend developer security training to cover AI-specific failure patterns. The Stanford finding about developer false confidence has direct training implications — developers using AI tools need explicit instruction to treat AI output with the same skepticism they'd apply to an untrusted external library.
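The checklist itself can be enforced, not just recommended. A sketch of a merge gate that requires every checklist item to appear as a checked box in the PR description; the markdown checkbox convention is an assumption about your PR template, and the item wording mirrors the checklist above:

```python
# Sketch of a checklist merge gate: block merge until the PR description
# contains every required item as a checked markdown box ("- [x] ...").

CHECKLIST = [
    "authorization checks",
    "least-privilege iam",
    "secrets handling",
    "security headers",
    "input validation",
]

def checklist_complete(pr_description: str) -> bool:
    """True only when every checklist item appears on a checked line."""
    checked = [line.strip().lower() for line in pr_description.splitlines()
               if line.strip().lower().startswith("- [x]")]
    return all(any(item in line for line in checked) for item in CHECKLIST)
```

An unchecked or missing item fails the gate, which converts the review checklist from advice into a process control.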

The Uncomfortable Truth

We're not going to stop vibe coding. It's too fast, too accessible, and too productive. The organizations that get this right won't be the ones that ban AI coding tools. They'll be the ones that recognize a fundamental shift has occurred: code now moves at machine speed, so security controls must too.

The CVE trajectory from Georgia Tech's radar — 6, 15, 35, and accelerating — tells us we're in the early innings. Every month we delay building AI-aware security governance, the vulnerability debt compounds. And unlike technical debt, security debt gets collected by attackers, not accountants.

Your developers are already vibe coding. The question is whether your AppSec program knows it.

#VibeCoding #AppSec #AICodeSecurity #CyberSecurity #DevSecOps #SAST #SecureCoding #AIVulnerabilities #ApplicationSecurity #InfoSec #CodeReview #SupplyChainSecurity #SecurityEngineering #AIGovernance #ThreatModeling