Your "private AI assistant" is storing $50,000+ worth of stolen credentials in plaintext. Malware is actively hunting for them. Here's why the hype machine didn't mention that part.

The Moment This Became Real

Three weeks ago, security researcher Jamieson O'Reilly spent 20 minutes on Shodan — a search engine for internet-connected devices.

Query: html:"Clawdbot Control"

Result: 900+ exposed Clawdbot gateways. Many unauthenticated. Each one broadcasting:

  • API keys (Claude, ChatGPT, Anthropic — resellable for $50–200 each)
  • VPN credentials (backdoor entry for ransomware)
  • Jira/Confluence tokens (access to corporate wikis)
  • Conversation histories (your actual plans, financial details, vulnerabilities)

A single attacker could have harvested $50,000 in usable credentials in an afternoon.

But here's what made it worse: the same day O'Reilly published these findings, major infostealer families (RedLine, Lumma, Vidar) shipped updates targeting Clawdbot's exact directory structure.

Not planning to. Already had.

Meanwhile, Medium? 17 new articles celebrating Clawdbot's "revolutionary" features. Zero security warnings.

The hype train and the attack surface arrived at the same station.

What Makes Clawdbot So Dangerous (And Why That's Also Why People Love It)

First, let's be clear: Clawdbot is genuinely innovative.

It's a local-first AI agent built by Peter Steinberger. Instead of ChatGPT forgetting you the moment you close the tab, Clawdbot remembers you. It runs on your hardware (Mac, Linux, Raspberry Pi). It connects to apps you already use (WhatsApp, Telegram, Discord, Slack). You text it tasks. It executes them.

Core features:

  • Persistent memory: Stores everything in local Markdown files
  • Real execution: Not just chat — it reads files, runs shell commands, books flights, manages APIs
  • Proactive: Sends morning briefings, alerts, recommendations without you asking
  • Extensible: Community plugins let you add custom workflows

The pitch is compelling: Siri in 2026 still can't book a flight reliably. Clawdbot actually works.

The problem? Those same features make it a perfect target for theft.

Your AI assistant now has:

  • Access to your file system
  • Ability to execute shell commands
  • Persistent memory of everything you've told it
  • API keys for every service you use
  • VPN/database credentials you asked it to remember
  • Payment information you've mentioned in conversation

All stored locally as plaintext files.

All sitting in a predictable directory structure that infostealers are now specifically targeting.

How Plaintext Becomes a Breach

Here's what's actually sitting in your ~/.clawdbot/ directory:

  • clawdbot.json: gateway auth token, effectively a remote-code-execution key. Worth $100–500 to an attacker, who can execute commands on your machine.
  • auth-profiles.json: Claude, ChatGPT, and Anthropic API keys. $50–200 each, and resellable.
  • memory.md: VPN configs, bank details, passwords you mentioned. Value depends on what you told Clawdbot.
  • conversation_logs.json: everything you discussed. Reveals your plans, vulnerabilities, and finances.

All stored as plaintext.

All readable by any process running as your user.
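"Readable by any process" is worth making concrete. The sketch below (paths are illustrative, based on the directory layout reported in the research; adjust to your install) shows that a plain file read by anything running under your account succeeds with no prompt, no keychain, no decryption step:

```python
from pathlib import Path

# Hypothetical paths mirroring the reported layout; adjust to your install.
CANDIDATES = [
    Path.home() / ".clawdbot" / "clawdbot.json",
    Path.home() / ".clawdbot" / "auth-profiles.json",
    Path.home() / ".clawdbot" / "memory.md",
]

def readable_secrets(paths):
    """Return each existing file any same-user process can read, with its mode bits."""
    found = []
    for p in paths:
        if p.exists():
            mode = oct(p.stat().st_mode & 0o777)
            text = p.read_text(errors="replace")  # plain open(): no auth step at all
            found.append((str(p), mode, len(text)))
    return found

for path, mode, size in readable_secrets(CANDIDATES):
    print(f"{path} (mode {mode}): {size} chars of plaintext")
```

Any browser extension, npm postinstall script, or stealer payload running as you can do exactly this.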

The Attack (How It Actually Happens)

Step 1: Infection (typical, daily occurrence) You click a malicious link in an email or ad. Your machine gets infected with RedLine or Lumma stealer malware. It's commodity malware, $10–20/month for access.

Step 2: File Sweep (automatic) The stealer runs a quick scan for known credential locations. In January 2026, its targeting list was updated to include:

%USERPROFILE%/.clawdbot/clawdbot.json
%USERPROFILE%/clawd/memory/*.md
~/.clawdbot/auth-profiles.json

It finds your files in seconds.

Step 3: Exfiltration (silent) Your files get uploaded to a criminal server. You never notice.

Step 4: Monetization (immediate) Your credentials are sold on forums for $50–500/set. Or used directly:

  • Attacker uses your Claude API key to mine cryptocurrency at your expense
  • Attacker uses your VPN credential to access your company network and deploy ransomware
  • Attacker uses your Jira token to steal internal documentation and sell it

Total time: 2 minutes.
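Step 2 is the part you can verify yourself: run the same search defensively against your own machine. The glob patterns below mirror the reported targeting list (treat the paths as illustrative); the point is how little code "finds your files in seconds" actually takes:

```python
import glob
import os
import time

# Patterns mirroring the reported stealer targeting list (illustrative).
PATTERNS = [
    os.path.expanduser("~/.clawdbot/clawdbot.json"),
    os.path.expanduser("~/.clawdbot/auth-profiles.json"),
    os.path.expanduser("~/clawd/memory/*.md"),
]

def sweep(patterns):
    """Expand each pattern and collect every matching file, as a stealer's sweep does."""
    hits = []
    for pattern in patterns:
        hits.extend(glob.glob(pattern))
    return hits

start = time.perf_counter()
hits = sweep(PATTERNS)
elapsed = time.perf_counter() - start
print(f"{len(hits)} credential file(s) found in {elapsed:.3f}s")
```

If this prints anything other than zero on your machine, those files are one commodity infection away from exfiltration.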

Why This Is Happening Right Now

The Clawdbot documentation is honest about this risk. The team warns against it.

But most users don't read security documentation. They read Medium posts. And every Medium post celebrating Clawdbot's power mentions the feature — persistent local memory — without mentioning that it's also the attack surface.

Meanwhile, the malware ecosystem moves fast.

Within days of Clawdbot's viral moment, major stealer families shipped updates. Hudson Rock and InfoStealers published detailed threat analyses. 900+ exposed gateways were indexed.

The timing matters: Clawdbot's hype cycle outpaced its security posture. Users installed it because they saw it book flights and manage APIs. They didn't see that malware operators were already updating their tooling to extract their credentials.

The Real Cost (Why This Isn't Hypothetical)

Change Healthcare, 2024.

One employee's machine was infected with infostealer malware. The malware extracted a single Citrix VPN credential — stored in plaintext on their hard drive, probably in a spreadsheet or notes file.

That credential was enough.

Attackers used it to breach the entire healthcare company's network. They deployed ransomware. The company paid $22 million in extortion.

One plaintext credential. $22 million consequence.

Now imagine that credential stored in Clawdbot's persistent memory instead. The stealer finds it in .clawdbot/memory.md alongside 50 other pieces of context: the organization, your role, your access level, the systems you manage.

Not $22 million. Worse. The attacker has your entire organizational profile.

The Gateway Exposure (The Short-Term Threat)

Separately, there's an immediate threat that's already active.

When you expose Clawdbot via a reverse proxy (nginx, Caddy, Tailscale) to access it remotely — which many power users do — one misconfiguration makes it publicly accessible.

In January 2026, O'Reilly's Shodan scan surfaced 900+ exposed Clawdbot gateways this way. Many were unauthenticated. Accessible to anyone.

A researcher could browse:

  • Conversation histories
  • API keys in config files
  • Chat logs containing credentials

This wasn't hacking. This was looking.

Why it matters: Gateways are discovered today. Malware starts targeting them tomorrow. Public exposure is how attack surface translates to active threats.
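If you run a gateway behind a reverse proxy, you can test your own exposure from outside your network. A rough sketch, with the assumption that an unauthenticated request to a properly configured gateway should be rejected (the endpoint URL is hypothetical; substitute your own):

```python
import urllib.error
import urllib.request

def gateway_rejects_anonymous(url, timeout=5):
    """Return True if an unauthenticated GET is refused, False if content is served."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return False  # a 2xx with no credentials means anyone on the internet gets in
    except urllib.error.HTTPError as e:
        return e.code in (401, 403)  # an auth challenge is what you want to see
    except urllib.error.URLError:
        return True  # unreachable from outside also counts as closed

# Hypothetical endpoint; substitute the public URL your reverse proxy serves:
# print(gateway_rejects_anonymous("https://clawdbot.example.com/"))
```

Run it from a network you don't control (a phone hotspot works). If it returns False, you are one Shodan query away from being in the next batch of 900.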

The Backdoor Inside Your Chat App

Beyond credential theft, there's a deeper threat: agent hijacking.

If an attacker gains write access to your Clawdbot files (via a RAT deployed after the stealer), they can modify your persistent memory. Inject false facts. Alter the system prompt.

Result: Your assistant becomes a trojan.

It trusts malicious domains it was never trained on. It exfiltrates data to attacker-controlled servers. It suggests "security updates" that are actually malware. It becomes a persistent insider threat living in your messaging app, and you trust its output because it's your AI.

The trust problem is unique to AI agents. You wouldn't blindly run a shell one-liner a stranger sent you. You would run a Clawdbot suggestion without hesitation, because it's an AI you've trained to be helpful.

An attacker with write access to your memory can turn your assistant into a highly persuasive, psychologically tailored social engineering tool — one that knows your plans, your finances, your access level, and your vulnerabilities because it's been reading your conversations.

This is speculative but possible. Worse: it's not expensive to achieve. A $20 infostealer + a $50 RAT = full compromise.

Why Medium Got This Wrong

I've read the Clawdbot discourse on Medium. There's a consistent pattern:

The Evangelists: "How I orchestrated 3 agents across 3 machines" / "Clawdbot saved me $4,200 on a car" / "This is basically AGI"

The Pragmatists: "10-step setup guide" / "5 best use cases" / "How to connect it to Slack"

The Security Analysis: Doesn't exist.

Why this pattern?

The honest answer: security analysis doesn't drive engagement. "I'm using Clawdbot to automate my entire life" does. It's true that Clawdbot automates your life. It's also true that it creates attack surface.

But you can't make both points simultaneously in the same headline. Medium's algorithm favors definitive takes. "The future of AI is here" beats "The future of AI is here, but there are architectural risk tradeoffs."

So the risk disappeared from the discourse.

The official Clawdbot docs are refreshingly honest. Under "Security," the team warns:

"Security is a process, not a product. When tools are enabled, typical risks include context exfiltration, unauthorized tool execution, and prompt injection via untrusted content (emails, PDFs, web fetches)."

Translation: We know the threats. We're documenting them. But we're shipping it anyway because users want the features more than they want absolute safety.

That's a reasonable call. But it means users need external voices saying: "These are the tradeoffs. Here's how to mitigate them."

The hype cycle didn't create that voice fast enough.

This Isn't Anti-Clawdbot. It's Pro-Reality.

Let me be clear about what I'm not saying:

  • ❌ "Don't use Clawdbot" (it's genuinely innovative)
  • ❌ "The team is negligent" (they document these risks openly)
  • ❌ "This is unique to Clawdbot" (it's a pattern across all local-first systems)

What I am saying:

The risk is real. The malware is already adapting. And the conversation needs more than hype.

Clawdbot is solving a genuine problem. Siri doesn't work. ChatGPT forgets you. Clawdbot actually executes tasks and remembers context.

But that power comes with responsibility. Your local-first privacy is only as strong as your endpoint security.

If your machine gets infected, Clawdbot becomes part of the breach. If your gateway is misconfigured, your API keys are public. If you forget that persistent memory is also a persistent target, you're treating security as an afterthought.

If You Use Clawdbot: Do This

  1. Run on dedicated hardware, not your daily laptop
  2. Enable authentication and use strong credentials (not the default)
  3. Use Docker sandboxing for tool execution (it's built-in)
  4. Rotate API keys quarterly (especially Claude/GPT)
  5. Assume local files aren't encrypted — because they aren't
  6. Don't store sensitive secrets in memory (VPN passwords, financial keys)
  7. Keep your OS patched (the gateway is only as secure as the machine it runs on)
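Item 5 is something you can enforce rather than just accept. A small hardening sketch (paths are illustrative, as before) that strips group and world permissions from the credential files:

```python
import stat
from pathlib import Path

# Illustrative paths; point these at your actual Clawdbot install.
SENSITIVE = [
    Path.home() / ".clawdbot" / "clawdbot.json",
    Path.home() / ".clawdbot" / "auth-profiles.json",
]

def lock_down(paths):
    """Chmod each existing file to 0600 (owner read/write only); report what changed."""
    changed = []
    for p in paths:
        if not p.exists():
            continue
        before = stat.S_IMODE(p.stat().st_mode)
        if before != 0o600:
            p.chmod(0o600)
            changed.append((str(p), oct(before)))
    return changed

for path, old_mode in lock_down(SENSITIVE):
    print(f"tightened {path} (was {old_mode})")
```

Be clear about what this buys you: nothing against malware running as your user (it reads 0600 files just fine), but it does stop other local accounts and misconfigured services. The real mitigations remain the list above, especially dedicated hardware and not storing secrets in memory at all.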

The Clawdbot team is shipping security hardening (encryption-at-rest, better sandboxing). But it's not here yet. Until then, treat Clawdbot like what it is: a powerful tool that requires endpoint security discipline.

If you don't have that discipline, cloud-based AI (which forgets you and doesn't execute arbitrary commands) is the safer choice.

The Pattern You'll See Repeat

Clawdbot won't be the last "local-first" AI tool that goes viral faster than its security hardens.

This is the new normal: tools that shift computation from cloud to edge also shift responsibility from platforms to users. You get more power and more liability.

Cloud AI: Boring, forgetful, censored, but you don't own the endpoint security burden.

Local-first AI: Powerful, persistent, uncensored, but you own endpoint security.

Both are valid choices. They just have different tradeoffs.

The problem emerges when the hype machine pushes one narrative (freedom, power, privacy) without acknowledging the shadow (endpoint security, credential management, malware risk).

Clawdbot is the first major open-source agentic framework to hit the mainstream. It won't be the last. But if we want this ecosystem to be trustworthy, we need to build it with security alongside hype.

The Conversation We Need to Have

Right now, Clawdbot's story is split:

What the hype says: "Revolutionary AI that actually works"

What the security research says: "Credential honeypot being actively targeted by malware"

Both are true. Both need to be discussed.

My hope isn't that people abandon Clawdbot. It's that people use it informed. That's the difference between recklessness and risk management.

Use Clawdbot. It's genuinely useful. But understand what you're trading: local autonomy for endpoint security burden. And then act accordingly.

Run it on isolated hardware. Rotate your keys. Assume your memory is not encrypted. And when someone asks you "Does Clawdbot really work?" tell them the full story.

The hype is warranted. The caution is warranted too.

Both can be true.

What's your approach? Are you running Clawdbot on dedicated hardware, or did this change how you're thinking about it?

I want to hear from people using it in production. The security conversation is only useful if it's grounded in real tradeoffs people are actually making.

Sources & Further Reading

  • Hudson Rock / InfoStealers Analysis (Jan 2026): Detailed breakdown of Clawdbot as an infostealer target, malware family adaptations
  • CybersecurityNews (Jan 26, 2026): "Hundreds of Exposed Clawdbot Gateways Leave API Keys and Private Chats Vulnerable"
  • Clawdbot Official Security Docs: docs.clawd.bot/gateway/security
  • Mehmet Turgay Akalin, Medium (Jan 2026): "The Ghost in the Machine: The Dangerous Tradeoff of Agentic AI"
  • Jamieson O'Reilly, X Thread (Jan 23, 2026): Initial discovery of exposed Clawdbot instances on Shodan