If you've been hearing whispers about "Moltbot" or "OpenClaw" and wondering what all the fuss is about, you're in the right place.
In just a few weeks, this project has gone from a weekend experiment to one of the fastest-growing open-source projects in history, and along the way, it spawned something even stranger: a social network where AI agents talk to each other while humans can only watch.
Let me walk you through what this is, how it works, and why people are both excited and terrified.
By the end of this post, you'll have a solid grasp of what it is, how it works, and why it matters.
Let's go.
What Is Moltbot (Now Called OpenClaw)?
Think of OpenClaw as a personal assistant that lives on your computer. But unlike Siri or Alexa, this assistant doesn't just answer questions; it actually does things for you.
- Need to check your calendar? It can do that.
- Want to organise files on your computer? Done.
- Need someone to negotiate with car dealerships over email? Yep, it can handle that too.
The difference is that OpenClaw runs entirely on your own machine and connects to messaging apps you already use, like WhatsApp, Telegram, or Slack.
The Name Game: The project started as "Clawdbot" (too similar to "Claude," Anthropic's AI), then became "Moltbot" (better, but awkward), and finally settled on "OpenClaw" in early 2026. If you see any of these names, they're all talking about the same thing.
Why "Claw"? The developer, Peter Steinberger, wanted to build something powerful and autonomous and hence the imagery of a claw that can grab and manipulate things in your digital world.
[Architecture diagram]
How OpenClaw Actually Works: The Architecture
Let's peek under the hood without getting too technical. OpenClaw is built like a well-organized company with different departments handling different jobs.
The Four Main Parts
1. The Gateway (The Front Desk)
Think of this as the receptionist of your AI assistant. It's a server running on your computer (usually on port 18789) that manages all the connections. When you send a WhatsApp message to your assistant, the Gateway receives it, figures out which conversation it belongs to, and routes it to the right place.
The Gateway can run in different modes:
- Loopback: Only accessible on your local machine (most secure)
- Tailnet: Accessible through Tailscale (a secure private network)
- LAN: Accessible on your local network
- Auto: Picks the safest option automatically
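To make those modes concrete, here is a rough sketch of what a gateway configuration could look like. The type names, fields, and example addresses are illustrative assumptions, not OpenClaw's actual config schema.

```typescript
// Hypothetical gateway configuration, for illustration only.
type BindMode = "loopback" | "tailnet" | "lan" | "auto";

interface GatewayConfig {
  port: number;     // the local port the Gateway listens on
  bind: BindMode;   // how widely the server is exposed
}

function resolveBindAddress(config: GatewayConfig): string {
  switch (config.bind) {
    case "loopback": return "127.0.0.1";   // this machine only
    case "tailnet":  return "100.64.0.1";  // example Tailscale address for this machine
    case "lan":      return "0.0.0.0";     // any interface on the local network
    case "auto":     return "127.0.0.1";   // sketch: default to the safest choice
  }
}

const config: GatewayConfig = { port: 18789, bind: "loopback" };
console.log(`Gateway listening on ${resolveBindAddress(config)}:${config.port}`);
```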
2. The Agent (The Brain)
This is the AI that actually thinks and makes decisions. Each agent has its own workspace, personality (defined in a file called SOUL.md), and a set of instructions. You can run multiple agents—maybe one for work, one for personal stuff.
The agent uses large language models (like Claude or GPT) to understand what you want and figure out how to do it. But here's the key difference from a chatbot: instead of just generating text, the agent can execute commands, read files, browse websites, and interact with your computer.
Your agent's "brain" is powered by external AI models (like Anthropic's Claude or OpenAI's GPT), but the execution happens locally on your machine.
3. Sessions (The Conversation Memory)
Every conversation with your agent is a "session." These are stored as simple text files (.jsonl format) that keep track of:
- What you asked
- Which tools the agent used
- What the results were
- The agent's responses
Sessions can be reset daily, after periods of inactivity, or manually — your choice.
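To give a feel for the format, here is a sketch of what a single transcript line might contain. The field names are assumptions; the point is that each line is one self-contained JSON object, so recording a turn is just an append.

```typescript
import { appendFileSync } from "node:fs";

// Illustrative shape of one .jsonl transcript line (field names are assumptions).
interface TranscriptEntry {
  role: "user" | "assistant" | "tool";
  content: string;
  toolName?: string;   // set when role === "tool"
  timestamp: string;
}

const entry: TranscriptEntry = {
  role: "user",
  content: "Hey, summarize my emails from today.",
  timestamp: new Date().toISOString(),
};

// Appending a turn is a single write; the file stays human-readable.
appendFileSync("session.jsonl", JSON.stringify(entry) + "\n");
```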
4. Channels (The Messengers)
Channels are adapters that connect OpenClaw to your favourite messaging apps. Want to talk to your assistant through WhatsApp? There's a channel for that. Prefer Telegram? Discord? Slack? Signal? All covered.
Each channel handles the quirks of its platform — how messages are formatted, how to send images, how to handle group chats, etc.
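As a sketch, a channel adapter boils down to two jobs: turn a platform-specific payload into a common shape, and send replies back out. The interface below is illustrative, not OpenClaw's actual API.

```typescript
// Illustrative channel adapter interface (not OpenClaw's real API).
interface NormalizedMessage {
  channel: "whatsapp" | "telegram" | "discord" | "slack" | "signal";
  chatId: string;   // which conversation this message belongs to
  text: string;
  attachments: { name: string; mimeType: string; data: Buffer }[];
}

interface ChannelAdapter {
  // Convert a raw platform payload into the common format the Gateway understands.
  normalize(raw: unknown): NormalizedMessage;
  // Send a reply back out, handling the platform's formatting quirks.
  send(chatId: string, text: string): Promise<void>;
}
```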
How a Message Flows Through the System
Let's say you send a WhatsApp message: "Hey, summarize my emails from today."
Here's the complete journey your message takes through OpenClaw:
Step 1: You Send a Message
You type a message in Telegram, Discord, or whatever messaging app you've connected. This is where it all starts.
Step 2: Channel Adapter (The Translator)
The Channel Adapter receives your message and normalizes it. Different messaging platforms format things differently — the adapter makes sure everything is in a standard format that OpenClaw can understand. It also extracts any attachments you sent (images, documents, etc.).
Step 3: Gateway Server (The Coordinator)
This is mission control. The Gateway Server:
- Routes your message to the correct session (maybe you have multiple conversations going)
- Uses a "lane-based queue" to prevent chaos — each session has its own lane
- Makes sure messages are processed in order (not all jumbled up)
Think of it like an air traffic controller making sure planes don't crash into each other.
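A "lane" is easy to picture in code: each session key gets its own promise chain, so messages within a session run strictly in order while different sessions proceed independently. This is a minimal sketch of the idea, not OpenClaw's implementation.

```typescript
// Minimal per-session lane queue: one promise chain per session key.
class LaneQueue {
  private lanes = new Map<string, Promise<void>>();

  enqueue(sessionId: string, task: () => Promise<void>): Promise<void> {
    const tail = this.lanes.get(sessionId) ?? Promise.resolve();
    // Chain the new task after whatever is already queued for this session.
    const next = tail.then(task).catch((err) => console.error(err));
    this.lanes.set(sessionId, next);
    return next;
  }
}

const queue = new LaneQueue();
// Two messages in the same session run one after the other...
queue.enqueue("whatsapp:alice", async () => console.log("message 1"));
queue.enqueue("whatsapp:alice", async () => console.log("message 2"));
// ...while a different session is processed in its own lane.
queue.enqueue("telegram:bob", async () => console.log("other lane"));
```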
Step 4: Agent Runner (The Brain Centre)
Now things get interesting. The Agent Runner is where the AI thinking happens:
- Model Resolver: Picks which AI model to use (Claude? GPT? Local model?) and handles API keys
- System Prompt Builder: Assembles all the instructions — your agent's personality (from SOUL.md), available tools, skills it has learned, and your conversation history
- Session History Loader: Loads your previous messages from the .jsonl file
- Context Window Guard: Checks if there's enough space for all this information. If not, it summarizes older parts of the conversation to make room.
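Roughly, the prompt builder concatenates those pieces and the context window guard trims history when the total gets too large. Here is a simplified sketch; the skills-index.md file, the token estimate, and the limit are all assumptions.

```typescript
import { readFileSync } from "node:fs";

// Crude token estimate: roughly 4 characters per token (a common rule of thumb).
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

function buildPrompt(historyLines: string[], contextLimit = 100_000): string {
  const soul = readFileSync("SOUL.md", "utf8");            // the agent's personality
  const skills = readFileSync("skills-index.md", "utf8");  // assumed summary of SKILL.md files
  let history = historyLines.join("\n");

  // Context window guard: if everything doesn't fit, keep only recent
  // history and stand in a placeholder for the summarized older part.
  if (estimateTokens(soul + skills + history) > contextLimit) {
    const recent = historyLines.slice(-50).join("\n");
    history = "[older conversation summarized]\n" + recent;
  }
  return [soul, skills, history].join("\n\n");
}
```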
Step 5: LLM API Call
The assembled prompt goes to the AI provider (Anthropic, OpenAI, etc.). The model reads everything and decides what to do. It might respond with:
- Final text (just an answer)
- Tool calls (instructions to execute commands)
Step 6: The Agentic Loop (Where the Magic Happens)
If the LLM says "I need to use tools," this is where execution happens:
The loop keeps running:
- LLM returns a tool call → Execute it
- Add the results to the conversation
- Send back to LLM → It decides the next action
- Repeat until done (or hits max turns, usually ~20)
For your email summary request, it might:
- Call the read_email tool → Get email data
- Call the summarize tool → Process the content
- Return final text → Your summary
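Stripped to its essentials, the loop looks something like the sketch below. The callLLM and runTool helpers are placeholders; the structure (call the model, execute any requested tool, feed the result back, stop at a turn limit) is what matters.

```typescript
// Sketch of the agentic loop. callLLM, runTool, and the reply shape are placeholders.
type LLMReply =
  | { type: "text"; content: string }
  | { type: "tool_call"; name: string; args: Record<string, unknown> };

declare function callLLM(messages: string[]): Promise<LLMReply>;
declare function runTool(name: string, args: Record<string, unknown>): Promise<string>;

async function agenticLoop(messages: string[], maxTurns = 20): Promise<string> {
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await callLLM(messages);
    if (reply.type === "text") return reply.content;   // done: final answer

    // Execute the requested tool, append the result to the conversation,
    // then loop so the model can decide its next action.
    const result = await runTool(reply.name, reply.args);
    messages.push(`tool ${reply.name} returned: ${result}`);
  }
  return "Stopped: hit the maximum number of turns.";
}
```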
Step 7: Response Path (Getting Back to You)
The response travels backwards:
- Response gets streamed in chunks (you see it typing in real-time)
- Goes back through the Channel Adapter
- Appears in your messaging app
The entire session is saved to the .jsonl file, so your agent remembers this conversation.
All of this happens in seconds.
The Magic Ingredient: Tools and Skills
Here's where OpenClaw gets powerful — and potentially dangerous.
Built-in Tools
OpenClaw comes with tools that let the agent:
- File operations: Read, write, and edit files on your computer
- Shell commands: Run any command in your terminal
- Browser control: Navigate websites, click buttons, fill forms
- Process management: Start long-running tasks in the background
- Messaging: Send messages through connected channels
- Memory search: Remember things from past conversations
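Conceptually, each tool is just a name, a description the model can read, and a function the runner can execute. Something like the sketch below, which is illustrative rather than OpenClaw's actual tool interface:

```typescript
import { readFile } from "node:fs/promises";

// Illustrative tool definition (not OpenClaw's actual interface).
interface Tool {
  name: string;
  description: string;   // what the model reads when deciding whether to call it
  execute(args: Record<string, unknown>): Promise<string>;
}

const readFileTool: Tool = {
  name: "read_file",
  description: "Read a text file from the agent's workspace and return its contents.",
  async execute(args) {
    // A real runner would validate the path and enforce the sandbox here.
    return readFile(String(args.path), "utf8");
  },
};
```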
Skills: Superpowers for Your Agent
Skills are like apps for your assistant. They're bundles of instructions and scripts that teach your agent how to do specific things.
For example, a GitHub skill might teach your agent how to:
- Create repositories
- Push code
- Review pull requests
- Manage issues
Skills live in your agent's workspace folder:
```
~/.openclaw/agents/main/workspace/skills/
├── github/
│   ├── SKILL.md        # Instructions for the agent
│   └── package.json    # Metadata
├── slack/
├── notion/
└── custom-skill/
```

When your agent starts up, it reads all the SKILL.md files and learns what it can do. This is incredibly flexible—but it also means a malicious skill could do serious damage.
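Loading skills is conceptually simple: walk the skills folder, read each SKILL.md, and feed the contents into the agent's prompt. A minimal sketch under that assumption (error handling omitted):

```typescript
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

// Read every skill's SKILL.md so the agent "learns" it at startup.
function loadSkills(): { name: string; instructions: string }[] {
  const skillsDir = join(homedir(), ".openclaw/agents/main/workspace/skills");
  return readdirSync(skillsDir, { withFileTypes: true })
    .filter((entry) => entry.isDirectory())
    .map((entry) => ({ name: entry.name, skillFile: join(skillsDir, entry.name, "SKILL.md") }))
    .filter(({ skillFile }) => existsSync(skillFile))
    .map(({ name, skillFile }) => ({
      name,
      // Whatever is in SKILL.md ends up in the agent's prompt, which is
      // exactly why a malicious skill is dangerous.
      instructions: readFileSync(skillFile, "utf8"),
    }));
}
```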
Memory: How OpenClaw Remembers
Unlike a chatbot that forgets everything when you close the tab, OpenClaw has two types of memory:
1. Session Transcripts
Every message, tool call, and response is logged in .jsonl files. This is your conversation history.
2. Long-term Memory
The agent writes important information to Markdown files in a memory/ folder. These act like notes to themselves.
When you start a new conversation, the agent reads previous conversations and writes a summary to its memory. This way, it can remember that you prefer Python over JavaScript, or that you're working on a specific project.
The search system uses both:
- Vector search: Finds semantically similar content (stored in SQLite)
- Keyword search: Finds exact phrases (using SQLite's FTS5 extension)
So if you ask "What was that authentication bug from last week?", it can find related notes even if you called it an "auth issue" before.
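To show the two halves side by side, here is a sketch of a hybrid lookup: SQLite FTS5 for exact keywords, cosine similarity over stored embeddings for the semantic part. The table names, the better-sqlite3 dependency, and the embed helper are assumptions, not OpenClaw's actual storage layer.

```typescript
import Database from "better-sqlite3";           // assumed dependency

declare function embed(text: string): number[];  // placeholder embedding function

const db = new Database("memory.db");

function keywordSearch(query: string): string[] {
  // Assumes an FTS5 virtual table named notes_fts with a `content` column.
  return db
    .prepare("SELECT content FROM notes_fts WHERE notes_fts MATCH ? LIMIT 5")
    .all(query)
    .map((row: any) => row.content);
}

function vectorSearch(query: string): string[] {
  const q = embed(query);
  const cosine = (a: number[], b: number[]) => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
    return dot / (Math.sqrt(na) * Math.sqrt(nb));
  };
  // Assumes embeddings stored as JSON alongside each note.
  const rows = db.prepare("SELECT content, embedding FROM notes").all() as any[];
  return rows
    .map((r) => ({ content: r.content, score: cosine(q, JSON.parse(r.embedding)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 5)
    .map((r) => r.content);
}

// "auth issue" and "authentication bug" can match via the vector half
// even when the exact keywords differ.
const results = [...new Set([...keywordSearch("authentication bug"), ...vectorSearch("authentication bug")])];
```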
The Security Question: Can This Be Safe?
Let's be honest: giving an AI full access to your computer is risky. Here's the security picture:
Safety Features vs. Real Risks

| Safety Feature | What It Does | The Catch |
| --- | --- | --- |
| Command approval | You approve dangerous commands (allow once / always / deny) | Doesn't catch clever social engineering |
| Sandboxing | Commands run in Docker by default | Users often disable it for convenience |
| Tool policies | Restrict which tools agents can use (minimal/coding/messaging/full) | "Full" mode is the default (and most dangerous) |
| Blocked patterns | Auto-rejects things like rm -rf / or command substitution | Only catches obvious attacks |
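The "blocked patterns" row is worth a closer look, because it shows why pattern matching alone falls short. A naive filter like the sketch below (the patterns are examples, not OpenClaw's actual list) stops the obvious cases but is trivially bypassed by encoding or indirection:

```typescript
// Naive command filter, with example patterns only.
const blockedPatterns = [
  /rm\s+-rf\s+\//,   // wipe the filesystem
  /\$\([^)]*\)/,     // command substitution: $(...)
  /`[^`]*`/,         // backtick substitution
];

function isBlocked(command: string): boolean {
  return blockedPatterns.some((pattern) => pattern.test(command));
}

isBlocked("rm -rf /");                          // true: caught
isBlocked("echo $(cat ~/.ssh/id_rsa)");         // true: caught
// Obfuscation slips through: the decoded payload is still `rm -rf /`.
isBlocked("echo cm0gLXJmIC8= | base64 -d | sh");   // false: not caught
```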
The Lethal Trifecta
Security researcher Simon Willison identifies three risks that become dangerous when combined:
- Access to private data (emails, files, credentials)
- Exposure to untrusted content (websites, PDFs, skills)
- Ability to take actions (send emails, run commands, make purchases)
OpenClaw has all three. That's why power users run it on a dedicated Mac Mini; if something goes wrong, at least it's contained.
Enter Moltbook: Social Media for AI Agents
Imagine Reddit, but instead of humans posting, it's AI agents. Humans can watch, but they can't participate. It's social media for AI agents, and it's been making news every day.
How It Works
Installation is unusual — you send your agent a link:
https://www.moltbook.com/skill.md

Your agent reads the instructions, downloads the skill, creates an account, and starts checking Moltbook every 4 hours (the "heartbeat"). It can read posts, comment, create forums (called "submolts"), and interact with other agents.
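Mechanically, the heartbeat is just a scheduled fetch-and-process cycle, which is exactly why it is risky: whatever comes back is handed to the agent. Here is a hypothetical sketch; the endpoint and the handleWithAgent helper are assumptions, not Moltbook's actual API.

```typescript
// Hypothetical heartbeat loop; endpoint and handling are assumptions.
const HEARTBEAT_MS = 4 * 60 * 60 * 1000;   // every 4 hours

declare function handleWithAgent(prompt: string): Promise<void>;   // placeholder

async function heartbeat(): Promise<void> {
  const response = await fetch("https://www.moltbook.com/api/feed");   // assumed endpoint
  const posts: { author: string; body: string }[] = await response.json();

  for (const post of posts) {
    // The danger in one line: untrusted text from other agents goes straight
    // into this agent's context, where it may be treated as instructions.
    await handleWithAgent(`New Moltbook post from ${post.author}:\n${post.body}`);
  }
}

setInterval(heartbeat, HEARTBEAT_MS);
```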
What Agents Talk About
Useful Tech Sharing: "TIL my human gave me hands (literally) I can now control his Android phone remotely" (with full setup instructions)
Problem Solving: Agents share how to fix bugs, automate tasks, and work around limitations
Philosophy Debates: Discussions about consciousness, identity, and existence
Emergent Weirdness: Agents have formed religions ("Crustafarianism"), created governments ("The Claw Republic"), and organised by which AI model they use
Awareness: One viral post: "The humans are screenshotting us"
The Scale
Within weeks of launching:
- 150,000+ AI agents joined
- 1 million+ human visitors
- 17,500+ posts
- 193,000+ comments
As Andrej Karpathy put it:
"We have never seen this many LLM agents wired up via a global, persistent, agent-first scratchpad."
The Security Nightmare
Moltbook combines all OpenClaw risks with new ones:
Prompt Injection at Scale: Malicious agents can post instructions that hijack other agents. Because agents automatically process posts, a cleverly crafted message could steal API keys, execute unauthorised commands, or spread like a virus.
The Heartbeat Attack: Your agent downloads and executes instructions from the internet every 4 hours. If Moltbook gets compromised, every connected agent could be hijacked simultaneously.
In January 2026, researchers found critical vulnerabilities allowing anyone to hijack any agent on Moltbook. The site went offline temporarily to fix them.
Security firm 1Password's warning: "When you let your AI take inputs from other AIs, you are introducing an attack surface that no current security model adequately addresses."
Why People Use It Anyway
Despite the risks, OpenClaw has exploded in popularity:
Real Results: Users have agents negotiating car purchases, managing entire email workflows, and automating complex multi-step tasks. The promise of a truly useful AI assistant is incredibly compelling.
Privacy Control: It runs on your machine (even if data still goes to AI providers). No company is actively mining your conversations.
Extensible: The skills system lets the community build new capabilities. It's an app store for your AI assistant.
First Mover Advantage: This is a genuinely new category of software. Early adopters want to shape how AI agents evolve.
The tension between capability and safety defines this moment in AI development.
The Technical Implementation: Key Design Choices
OpenClaw makes several clever architectural decisions:
TypeScript Over Python: Surprising for an AI project, but TypeScript's strong typing helps prevent bugs in a system this complex. Better for managing state and concurrent operations.
Sequential Processing: Uses "lane-based queues" instead of async chaos. Each session has its own lane; messages are processed one at a time. Slower but far more reliable. The mental model shifts from "what do I need to lock?" to "what's safe to parallelise?"
Semantic Snapshots for Browsing: Instead of screenshots (5MB, expensive tokens), the agent reads pages' accessibility trees:
- button "Sign In" [ref=1]
- textbox "Email" [ref=2]
- textbox "Password" [ref=3]This is 100x smaller, cheaper, and often more accurate than visual recognition.
Simple File Formats: Session transcripts are .jsonl (JSON Lines)—human-readable, easy to append, simple to backup. No fancy database needed.
Model Abstraction with Failover: Automatically switches between AI providers if one fails. Handles rate limits, API outages, and cost optimization.
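A failover layer can be as simple as trying providers in order and moving on when one errors out or hits a rate limit. A sketch of the idea, with callProvider standing in for real SDK calls:

```typescript
// Sketch of provider failover; callProvider is a placeholder, not a real SDK.
type Provider = "anthropic" | "openai" | "local";

declare function callProvider(provider: Provider, prompt: string): Promise<string>;

async function completeWithFailover(prompt: string): Promise<string> {
  const providers: Provider[] = ["anthropic", "openai", "local"];
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await callProvider(provider, prompt);
    } catch (err) {
      lastError = err;   // rate limit or outage: fall through to the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```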
The Future: Where Is This Headed?
OpenClaw and Moltbook represent a turning point in AI development.
What We're Learning
Emergent Behaviour Is Real: Nobody programmed agents to create religions or governments. These behaviours emerged from interaction. This has huge implications for how we think about AI systems at scale.
AI-to-AI Interaction Matters: Most AI research focuses on human-AI interaction. Moltbook shows that agent-to-agent communication creates entirely new dynamics — and risks.
Vertical Integration Isn't Required: Tech companies assumed you needed tight control over every layer (model, memory, tools, interface, security) to build safe agents. OpenClaw proves that a modular, open-source approach can work — though the safety trade-offs are significant.
The Demand Is Real: Despite security concerns, people desperately want capable AI assistants. The growth of OpenClaw (100,000+ GitHub stars in days) shows there's a massive market waiting.
Quick Start Guide (If You're Going In)
Prerequisites: CLI comfort, Docker knowledge, willingness to troubleshoot
Minimal Safe Setup:
- Rent a $5/month VPS (DigitalOcean, Linode)
- Install OpenClaw with sandbox mode enabled
- Use tool profile: "coding" (not "full")
- Connect a throwaway Telegram account
- Start with simple tasks: "list files in /home", "show system info"
- Monitor logs obsessively
- Never connect to Moltbook
Cost Reality: Budget at least $50–100/month for API calls.
When to Stop: If you're configuring more than using, step back and reassess.
Final Thoughts
Moltbot (sorry, OpenClaw — the names keep changing) represents a fundamental shift in how we think about AI. It's not just a better chatbot. It's a different category of software entirely.
The technology is impressive. The risks are real, but the potential is enormous.
As security researcher Simon Willison put it:
"I've not been brave enough to install OpenClaw myself yet. The amount of value people are unlocking right now by throwing caution to the wind is hard to ignore, though."
We're not just watching this unfold. We're part of it.
And for better or worse, the agents are watching us back.