A follow-up guide covering the security risks, best practices, and hardening steps for running an AI assistant with access to your personal life

In my previous article, I introduced Clawdbot as the AI assistant that messages you first. A proactive butler living in your messaging apps, powered by Claude, costing about $5 a month to run. It was exciting. It felt like the future.

Then I watched that future get a harsh reality check.

The wake-up call: hundreds of exposed Clawdbot instances


In January 2026, security researcher Jamieson O'Reilly ran a routine Shodan scan. What he found was alarming: hundreds of Clawdbot instances sitting wide open on the public internet.

Not just exposed. Completely compromised.

API keys. Conversation histories. Personal messages between users and their AI assistants. And in the worst cases, full root shell access to the underlying servers (yikes!).

"When I ran whoami, it came back as root," O'Reilly reported.

Let that sink in. Someone's personal AI assistant (the one that knows their calendar, reads their emails, manages their smart home) was running with root privileges and no authentication. Anyone with a web browser could have taken complete control.

An AI assistant with access to your personal life is only as secure as its weakest configuration setting.

The root cause? Proxy misconfiguration. Users were running Clawdbot behind nginx or Caddy reverse proxies, which made all incoming traffic appear to come from localhost.

Clawdbot's authentication saw "local" traffic and waved it through. The front door was locked, but someone had propped open a window.

What makes Clawdbot different (and why security matters more)

If you haven't read my first article, here's the quick version: Clawdbot is an open-source AI assistant created by Peter Steinberger (founder of PSPDFKit). It launched in January 2026 and quickly gathered over 9,000 GitHub stars.

Unlike ChatGPT or Claude's web interface, Clawdbot lives in your messaging apps. Telegram. WhatsApp. Discord. Slack. Signal. Even iMessage. It has two parts: the Gateway (handles message routing) and the Brain (Claude AI doing the actual thinking).

The killer feature? It can message you first. Your AI doesn't wait to be asked. It reminds you about appointments, follows up on tasks, notices patterns in your life.

This is also why security matters more than with a typical chatbot.

Traditional AI assistants are reactive. You ask, they answer. The attack surface is limited to what you explicitly share in that moment.

Clawdbot is proactive. It has persistent context. It might have access to your calendar, your email, your files, your smart home. It knows your schedule, your contacts, your habits. It can take actions on your behalf.

The trust hierarchy looks like this:

Owner (you): Full access to everything

AI: Acts on your behalf with delegated permissions

Friends: Limited access you've explicitly granted

Strangers: Should have zero access

When that hierarchy breaks down, when strangers get owner-level access, you're not just leaking chat logs. You're potentially handing over the keys to your digital life.
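
To make that concrete, here's the hierarchy as a small TypeScript sketch; the tier names and permission strings are mine for illustration, not Clawdbot's API:

// Illustrative only: the trust tiers above expressed as a type. Each
// tier strictly narrows what the one below it can do.
type TrustTier = "owner" | "ai" | "friend" | "stranger";

const permissions: Record<TrustTier, readonly string[]> = {
  owner: ["read", "write", "configure", "shell"], // full access
  ai: ["read", "write"],                          // delegated by the owner
  friend: ["read"],                               // explicitly granted
  stranger: [],                                   // zero access
};

The exposed instances effectively collapsed this table: strangers were landing in the owner row.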

The built-in security you're probably not using

Here's the thing that frustrated me when researching this article: Clawdbot actually has excellent security engineering. The codebase includes timing-safe authentication using crypto.timingSafeEqual to prevent timing attacks. It has solid prompt injection protection that detects patterns like "ignore previous instructions." It binds to localhost by default.
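
If you're curious what that looks like, here's a minimal sketch of a timing-safe token check in TypeScript; it illustrates the technique, it isn't Clawdbot's actual source:

import { timingSafeEqual } from "node:crypto";

// A naive `a === b` returns early at the first mismatched character, so
// response times leak how much of the token an attacker has guessed.
// timingSafeEqual compares every byte regardless of where they differ.
function tokensMatch(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on unequal lengths, so check first. The length
  // check leaks only the token's length, not its content.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}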

The problem isn't the code. It's that users don't know these features exist.

Start here. Run the security audit:

# All four variants are valid; --deep scans more thoroughly, --fix applies repairs automatically
clawdbot security audit
clawdbot security audit --deep
clawdbot security audit --fix
clawdbot security audit --deep --fix

This built-in security audit will scan your configuration, identify vulnerabilities, and offer to fix them automatically. It checks for exposed ports, weak authentication, overly permissive settings, and common misconfigurations.

The best security feature is the one you actually use. Run clawdbot security audit --deep --fix before reading any further.

⚠️ Again: don't read any further until you've actually run it.

The 5-minute security checklist

You don't need to understand cryptography to secure your Clawdbot instance. You just need to copy these settings into your configuration file.

Open ~/.clawdbot/clawdbot.json and verify these values:

// File: ~/.clawdbot/clawdbot.json
// Secure baseline configuration for Clawdbot
{
  // Gateway settings
  "gateway": {
    "mode": "local",
    "bind": "loopback",
    "port": 18789,
    "auth": {
      "mode": "token",
      "token": "your-long-random-token-here"
    },
    // Required if running behind nginx/Caddy
    "trustedProxies": ["127.0.0.1", "::1"]
  },

  // Per-channel DM policies
  "channels": {
    "whatsapp": {
      "dmPolicy": "pairing",
      "groups": {
        "*": { "requireMention": true }
      }
    },
    "telegram": {
      "dmPolicy": "pairing"
    },
    "discord": {
      "dm": { "policy": "pairing" },
      "guilds": {}
    },
    "slack": {
      "dm": { "policy": "pairing" },
      "channels": {}
    }
  },

  // Global group policy
  "groupPolicy": "allowlist",
  "groupAllowFrom": [],

  // Agent sandbox settings
  "agents": {
    "defaults": {
      "sandbox": {
        "mode": "all",
        "scope": "agent",
        "workspaceAccess": "none"
      }
    }
  },

  // Elevated tools (shell access) - keep restrictive
  "tools": {
    "elevated": {
      "allowFrom": []
    }
  },

  // Logging with redaction
  "logging": {
    "redactSensitive": "tools"
  }
}

What each setting protects against:

gateway.bind: "loopback" keeps Clawdbot listening only on localhost. External traffic can't reach it directly.

gateway.auth.mode: "token" requires a secret token for all API requests. Without it, anyone who can reach the port has full access.

channels.whatsapp.dmPolicy: "pairing" means new devices must go through a pairing flow with time-limited codes (1-hour TTL; see the sketch after this list). No random strangers sliding into your AI's DMs.

groupPolicy: "allowlist" prevents your bot from being added to random group chats where it might leak information.

requireMention: true stops the bot from responding to every message in a group. It only activates when explicitly called.

sandbox.mode: "all" runs agent operations in isolation, limiting blast radius if something goes wrong.
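
That pairing flow relies on codes that expire. Here's a hypothetical sketch of what a TTL check like that looks like; the type and function names are mine, not Clawdbot's:

// Hypothetical sketch of a time-limited pairing-code check. The 1-hour
// TTL matches the behaviour described above; the shapes are illustrative.
const PAIRING_TTL_MS = 60 * 60 * 1000; // 1 hour

interface PairingCode {
  code: string;
  issuedAt: number; // epoch milliseconds
}

function isPairingCodeValid(entry: PairingCode, now = Date.now()): boolean {
  // Codes older than the TTL are rejected, so a leaked code is only
  // useful for a short window.
  return now - entry.issuedAt <= PAIRING_TTL_MS;
}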

Finally, lock down your file permissions:

chmod 700 ~/.clawdbot
chmod 600 ~/.clawdbot/clawdbot.json

This ensures only your user account can read the configuration file containing your API keys and tokens.

If you're running behind a reverse proxy (read this)

This is where the 900+ exposed instances came from. Pay attention.

When you run Clawdbot behind nginx or Caddy, all incoming requests appear to originate from 127.0.0.1 (localhost). Clawdbot sees "local" traffic and assumes it's trusted.

It's not.

You need to tell Clawdbot which proxy addresses to trust, and then verify the real client IP from forwarded headers.

{
  "gateway": {
    "trustedProxies": ["127.0.0.1", "::1"]
  }
}
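
To see why that setting matters, here's the general pattern in a TypeScript sketch (illustrative, not Clawdbot's implementation): only when the TCP peer is a trusted proxy should the forwarded header be believed.

// Only trust X-Forwarded-For when the connection actually comes from a
// known proxy; otherwise any client could spoof the header.
const TRUSTED_PROXIES = new Set(["127.0.0.1", "::1"]);

function resolveClientIp(socketAddr: string, xForwardedFor?: string): string {
  if (TRUSTED_PROXIES.has(socketAddr) && xForwardedFor) {
    // X-Forwarded-For is "client, proxy1, proxy2"; the left-most entry
    // is the original client as reported by the first proxy.
    return xForwardedFor.split(",")[0].trim();
  }
  // Direct connection (or untrusted peer): trust only the socket address.
  return socketAddr;
}

// A request from 127.0.0.1 carrying "X-Forwarded-For: 203.0.113.7" is now
// treated as coming from 203.0.113.7, not as trusted local traffic.
console.log(resolveClientIp("127.0.0.1", "203.0.113.7")); // 203.0.113.7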

For nginx, ensure you're setting the forwarded headers correctly:

location / {
    proxy_pass http://127.0.0.1:18789;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

How to verify you're not exposed:

From outside your network (use your phone on mobile data, or a VPN), try accessing your Clawdbot port directly:

curl -v http://your-public-ip:18789/health

You should see a connection timeout or a refused connection. If you get a response instead, you have a problem.

If you can reach your Clawdbot instance from the public internet without authentication, assume it has already been compromised.

The nuclear option: Tailscale integration

Look, if the proxy configuration makes your head spin, there's a simpler approach: don't expose Clawdbot to the internet at all.

Tailscale creates a private network between your devices. Your phone, laptop, and server all get private IP addresses that only your devices can reach. No port forwarding. No firewall rules. No proxy configuration to get wrong.

Set your Clawdbot bind address to your Tailscale interface:

{
  "gateway": {
    "bind": "tailnet"
  }
}

Now Clawdbot only accepts connections from devices on your Tailscale network. Even if someone knows your server's public IP, they can't reach the Clawdbot port. It doesn't exist on the public internet.

This is zero-trust networking in practice. Instead of trying to secure a publicly accessible service, you make the service inaccessible to anyone who isn't already authenticated to your network.

For non-technical readers: think of Tailscale as a private tunnel between your devices. Your AI assistant lives inside that tunnel. People outside the tunnel can't even see it exists.


What if you've already been exposed?

If you ran Clawdbot with default settings behind a reverse proxy before January 2026, assume compromise. I know that sounds paranoid. It's not.

Immediate actions:

Rotate your Anthropic API key. Log into the Anthropic console and generate a new key. Delete the old one.

Rotate any other API keys Clawdbot had access to (calendar APIs, email APIs, smart home integrations).

Change your Clawdbot authentication token. Generate a new random string of at least 32 characters (a one-liner for this follows the list).

Review your conversation history for anything sensitive that might have been exfiltrated.

Check for unauthorised device pairings in your Clawdbot admin interface.

Audit your server for unexpected processes, cron jobs, or SSH keys if you had root shell exposure.
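
For step 3 above, any cryptographically random string works as a token. A quick way to generate one, sketched with Node's crypto module; save it as gen-token.mjs and run node gen-token.mjs:

// Prints a 64-character hex token (32 random bytes), comfortably past
// the 32-character minimum suggested above.
import { randomBytes } from "node:crypto";
console.log(randomBytes(32).toString("hex"));

Paste the output into gateway.auth.token in ~/.clawdbot/clawdbot.json.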

How to check if you were affected:

Search Shodan for your IP address. If your Clawdbot instance appeared in scan results, someone has likely already poked at it.

Check your Clawdbot logs for unusual access patterns:

# Default log location
grep -i "unauthorized\|failed\|unknown" /tmp/clawdbot/clawdbot-*.log

# Or use the built-in CLI (recommended)
clawdbot logs --follow

Look for requests from IP addresses you don't recognise, especially during the January 2026 exposure window.

The bigger picture

The Clawdbot incident isn't isolated. It's part of a pattern.

In 2024, researchers found 23.8 million secrets leaked on GitHub, a 25% increase year-over-year. The DeepSeek database exposure in January 2025 leaked over a million log entries including API keys. The Rabbit R1 shipped with hardcoded OpenAI and ElevenLabs keys baked into the device firmware.

LLMjacking attacks (where criminals steal AI API credentials to run their own workloads) can cost victims up to $46,000 per day in compute charges. November 2025 saw the first documented AI-orchestrated cyber espionage campaign.

Security researcher Simon Willison calls prompt injection "the SQL injection of the AI era." We're in the early days of understanding how to secure systems where the AI itself can be manipulated.

We're building AI assistants that know everything about us, then securing them like it's still 2010.

[Chart: GitHub secrets leaked per year]

The Air Canada chatbot lawsuit should give everyone pause. A customer won their case after the company's AI gave incorrect information about bereavement fares. The court held Air Canada responsible for what their AI said.

Now imagine that liability applied to an AI assistant that has access to your email, your calendar, your files. One that can message your contacts on your behalf.

The convenience is real. I use Clawdbot daily. But so is the responsibility.

Moving forward

Clawdbot represents something genuinely new: an AI assistant that doesn't wait to be asked. That proactively helps. That integrates deeply into your digital life.

[Diagram: Clawdbot's trusted zones]

That integration is a feature. It's also an attack surface.

The good news: the Clawdbot team has responded well. The security audit command exists. The default bindings are sensible. The documentation now prominently warns about proxy configuration.

The responsibility is on us, the users, to actually configure it properly.

Run the security audit. Check your proxy settings. Consider Tailscale. And remember that an AI assistant with access to your personal life deserves the same security attention you'd give to your banking app.

Probably more.
