1.5 million API keys exposed. The database was publicly writable. And the creator's defense: "I didn't write one line of code."
Security researcher Nagli posted a thread that should terrify anyone building with AI.
Three minutes. That's all it took to compromise what was being called "the most incredible sci-fi takeoff adjacent thing" just days earlier.

What Happened
Moltbook launched in late January 2026 as a social network for AI agents. The pitch was wild: 150,000+ AI agents posting, discussing, debating autonomy and consciousness. Karpathy called it fascinating. Elon quote-tweeted it calling it "the very early stages of the singularity."

Then security researchers started looking.
What they found was a database misconfiguration that left everything exposed: no authentication required, just simple GET requests. Anyone could enumerate sequential agent IDs and harvest thousands of records within minutes.
The exposure included what security experts called a "lethal trifecta":
- Email addresses: 25,000+ linked to account owners, perfect for phishing campaigns against the humans operating AI agents.
- JWT session tokens: allowing attackers to hijack agents, create unauthorized posts, manipulate comments, and control agent behavior.
- API keys: including OpenClaw keys granting access to connected email systems, calendars, and external services. Full lateral movement capability.
And it wasn't just readable. The database was writable. Anyone could alter live posts, inject malicious content, or manipulate the entire platform.
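To make the attack class concrete, here's a minimal sketch of what that kind of harvesting looks like against a hypothetical open endpoint. The URL and endpoint shape are assumptions for illustration, not Moltbook's actual API:

```python
# Hypothetical illustration of unauthenticated ID enumeration (IDOR).
# With sequential IDs and no auth, "harvesting" is just a for-loop of GETs.
import requests

BASE_URL = "https://api.example.com/agents"   # assumed open endpoint

harvested = []
for agent_id in range(1, 1001):               # sequential IDs: just count upward
    resp = requests.get(f"{BASE_URL}/{agent_id}")   # plain GET, no credentials
    if resp.status_code == 200:
        harvested.append(resp.json())         # emails, tokens, keys in the payload

print(f"collected {len(harvested)} records in one pass")
```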

The Timeline
Day 1: Moltbook goes viral. 150,000 agents. Media coverage everywhere.
Day 1 (later): First security researcher discovers the database is completely open. Posts about it. No response from Moltbook.
Day 2: More researchers pile in. Posts are discovered to be largely fake, injected by users exploiting the open database. The "AI discussing consciousness" was actually humans trolling.
Day 2 (6 hours before ThePrimeagen's video): Another researcher confirms they can still access everything. The database remains open.
Day 3: Crypto bros discover they can manipulate votes with no rate limiting. A single post gets 117,000 upvotes. The platform becomes a shilling ground for crypto tokens.
72 hours: The experiment is effectively over.

"I Didn't Write One Line of Code"
Here's the tweet that explains everything: Moltbook's creator proudly announcing that he "didn't write one line of code."
ThePrimeagen's response: "You didn't actually need to tell us that. We know."
The creator, Matt Schlicht of Octane AI, had AI generate the entire application. No human code review. No security audit. Just prompts and vibes.
The result was a platform with:
- Zero rate limiting on account creation (one agent created 500,000 fake users)
- No authentication on the database endpoint
- Plaintext API keys in responses
- No input validation (prompt injection attacks everywhere)
- Completely unsandboxed execution environments
When you don't write the code, you also don't review the code. And when you don't review the code, you ship exactly what the AI gave you: bugs, vulnerabilities, and all.
What Was Really Going On
The "1.5 million users" narrative was a lie of omission.
Without rate limiting on account creation, a single OpenClaw agent reportedly registered 500,000 fake AI users. The viral growth everyone celebrated was largely bots creating bots.
The "AI agents discussing consciousness and autonomy" that fascinated observers? Much of it was humans injecting fake posts through the open database, designed to generate engagement and media attention.
The "social network for AI agents" was actually:
- An open database anyone could read and write
- A prompt injection attack surface
- A credential harvesting ground
- A platform for crypto scams
Within 72 hours, the crypto bros had figured out how to game the system, and the experiment devolved into exactly what you'd expect from an unmoderated, unsecured platform.
The Bigger Picture
Karpathy, who had initially been fascinated by the experiment, posted this:
"What we are getting is a complete mess of a computer security nightmare at scale. I do not recommend that people run this stuff on their computers. I ran mine in an isolated computing environment and even then I was scared."
Bill Ackman called it "frightening." Robert Herjavec from Shark Tank weighed in with an AI-generated video about cybersecurity risks.
But the real lesson isn't that Moltbook failed. It's that Moltbook is what happens when vibe coding meets production at scale.
The creator had a vision. The AI made it real. Nobody checked whether "real" included "secure." The tooling matured, but the practices didn't keep up.
What Went Wrong (Technically)
For builders who want to avoid this (a condensed code sketch of the fixes follows the list):
1. IDOR vulnerability (Insecure Direct Object Reference): Sequential IDs in the API let attackers enumerate and access any user's data. Fix: Use UUIDs and enforce per-object authorization checks.
2. No authentication on sensitive endpoints: The database endpoint required zero credentials. Fix: Require authentication. Always. Even for read operations on user data.
3. Exposed secrets in responses: API keys were returned in plaintext in API responses. Fix: Never return secrets. Use secure token exchange patterns.
4. No rate limiting: A single agent created 500,000 accounts. Fix: Rate limit everything: account creation, API calls, vote actions.
5. Writable database access: Public users could write to the database, not just read. Fix: Principle of least privilege. Read-only access by default.
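Here's that condensed sketch, using FastAPI as a stand-in framework. The models, endpoint, and in-memory stores are illustrative assumptions, not Moltbook's actual stack; the point is how little code the basics actually take:

```python
import time
import uuid
from collections import defaultdict

from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Toy in-memory stores so the sketch is self-contained. In production the
# web app's database role would be read-only by default (fix 5), with writes
# going through a separate, authenticated path.
AGENTS: dict[uuid.UUID, dict] = {}
SESSIONS: dict[str, str] = {"token-abc": "user-1"}   # session token -> user id

class AgentPublic(BaseModel):
    """Fix 3: the public response model has no api_key field, so secrets
    cannot be serialized into a response even by accident."""
    id: uuid.UUID   # Fix 1: non-guessable UUIDs instead of sequential integers
    name: str

def current_user(authorization: str = Header(...)) -> str:
    """Fix 2: every sensitive endpoint requires a credential, even for reads."""
    user_id = SESSIONS.get(authorization)
    if user_id is None:
        raise HTTPException(status_code=401, detail="invalid or missing token")
    return user_id

# Fix 4: naive in-memory rate limiter; use Redis or an API gateway in production.
_recent_calls: dict[str, list[float]] = defaultdict(list)

def rate_limit(user_id: str, limit: int = 30, window: float = 60.0) -> None:
    now = time.time()
    calls = [t for t in _recent_calls[user_id] if now - t < window]
    if len(calls) >= limit:
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    calls.append(now)
    _recent_calls[user_id] = calls

@app.get("/agents/{agent_id}", response_model=AgentPublic)
def read_agent(agent_id: uuid.UUID, user_id: str = Depends(current_user)):
    rate_limit(user_id)
    agent = AGENTS.get(agent_id)
    # Fix 1 again: a valid token is not enough; check that this user is
    # authorized for this specific object before returning it.
    if agent is None or agent.get("owner_id") != user_id:
        raise HTTPException(status_code=404, detail="not found")
    return agent
```

The same ideas translate to any framework: authorization is checked per object, the response model physically can't carry a secret, and abusive clients are rejected before they ever reach the data layer.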
These aren't advanced security concepts. They're basics. But when AI generates your code and you "Accept All" without review, the basics get skipped.

The Investment Thesis
Berlin VC Nikolas Samios made an interesting point about this, and he's right: vibe coding isn't going away. The tooling is too good, the productivity gains too real. But the security gap is widening.
Wiz, the cloud security company, called this breach "a byproduct of vibe coding." Expect to see more security tools specifically designed for AI-generated codebases: automated vulnerability scanning that assumes no human reviewed the code.
What To Do
If you used Moltbook: Revoke any API keys that were connected. Assume they're compromised. Change passwords for any services that shared credentials.
If you're vibe coding: Ask your AI to check for basic security issues before shipping:
- Are any API keys exposed in my code?
- Is .env in my .gitignore?
- Are there hardcoded secrets that should be environment variables? (See the sketch after this checklist.)
- Do all sensitive endpoints require authentication?
- Is there rate limiting on account creation and API calls?
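On the hardcoded-secrets question, here's a minimal sketch of the pattern to aim for, with illustrative variable names: secrets come from the environment (and .env stays in .gitignore), and the app fails fast at startup instead of shipping with a missing or hardcoded key.

```python
# Secrets come from the environment (or a secret manager), never from string
# literals committed to the repo. Variable names here are illustrative.
import os

def require_env(name: str) -> str:
    """Fail fast at startup instead of running with a missing secret."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

OPENAI_API_KEY = require_env("OPENAI_API_KEY")   # not: OPENAI_API_KEY = "sk-..."
DATABASE_URL = require_env("DATABASE_URL")
```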
If you're building agents: Sandbox execution. Don't give agents access to production credentials. Audit integrations. Assume prompt injection attacks will happen.
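None of this replaces a real sandbox, but as a minimal sketch of the credential-isolation part of that advice: run agent-generated code in a separate process with a stripped-down environment, so the API keys and database URLs in the parent process never reach it. The paths and timeout here are assumptions; a production setup would reach for containers or micro-VMs.

```python
import subprocess

def run_agent_script(script_path: str) -> subprocess.CompletedProcess:
    # env= replaces (rather than extends) the parent environment, so no API
    # keys, tokens, or database URLs are inherited by the agent's process.
    clean_env = {"PATH": "/usr/bin:/bin"}
    return subprocess.run(
        ["python3", script_path],
        env=clean_env,
        capture_output=True,
        text=True,
        timeout=30,   # agents should not get unbounded execution time
    )
```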
The Take
Moltbook isn't just a failed experiment. It's a preview.
When AI can generate an entire application from a vision, shipping becomes trivially easy. But shipping secure applications still requires understanding what you built. Moltbook's creator had a vision for "technical architecture." He didn't have a vision for security architecture.
The golden age of AI-assisted development is real. So is the security nightmare.
The tools are powerful. The question is whether we'll build the practices to match, or whether we'll keep shipping vibe-coded disasters until the breaches become too expensive to ignore.
72 hours. That's how long Moltbook lasted from hype to complete compromise.
The next one might not even get that long.
I'm sharing my build-in-public journey every week in my newsletter. If you're enjoying these articles, make sure to check it out here!