What Happens When You Let AI Agents Run Your Sprint Board

There's this thing that happens when you pair-program with AI for too long. You're deep in it — Claude's churning out code, you're reviewing, approving, course-correcting — and after a few hours you realize: nobody wrote down what just happened.

No ticket was updated. No timer was running. The chat history is a mile long but it'll vanish the moment you close the session. And if your teammate asks "what got done today?" — good luck reconstructing that from memory.

This kept bugging me. I'd finish a marathon coding session with Claude or Codex and have this vague feeling that a lot happened, but no paper trail. The board still showed "Not Started" on tasks that were clearly done. Time tracking? Forget it. The disconnect between how I worked and how the project looked was getting ridiculous.

The nap that changed everything

One day I'd been heads-down for maybe five hours straight. Building features, fixing bugs, bouncing between the terminal, the IDE, and the Mattermost board where my tasks lived. I was exhausted.

So I took a nap.

And when I woke up, half-groggy, staring at the ceiling, it hit me: why am I the middleman? Why am I the one who has to open the board, find the task, update the status, start the timer, then go back to the terminal and tell the agent what to work on? Why can't the agent just… do that itself?

The pieces were all there. Mattermost Boards has a full REST API. AI agents like Claude Code support MCP tools. I just needed something to connect them.

That evening I started building Skate.

What is Skate?

Skate is a Go CLI tool and MCP server that gives AI coding agents direct access to your Mattermost Boards. Your agent can list tasks, pick one by priority, update its status, start a timer, work on it, leave comments, and mark it done — all without you alt-tabbing to a browser.

The name? One meaning of "skate" is to make quick and easy progress through something. And that's exactly what happens. Tasks that used to take me hours — digging through code, writing fixes, updating docs, testing — get knocked out in minutes when Claude has a board to work from. You just say "next task" and watch it go. Skate through your backlog. Literally.

Here's what it looks like in practice:

$ skate tasks
ID                           TITLE                        STATUS       PRIORITY   ASSIGNEE
c4cf6f4wzbjgxdm3hpa7iygtjdo  Task translation middleware  Not Started  2. Medium
cuppcm819atnixx71qg9i485jsr  listing tasks                In Progress  1. High 🔥
Skate task list in the terminal

You tell Claude: "take the next task by priority" — and it actually does. It runs skate tasks, picks the highest priority item, reads the full task description with skate task <ID>, updates the status, starts a timer, does the work, leaves a comment explaining what changed, stops the timer, and marks it complete.

The whole loop. Hands-free.

The problem I was actually solving

If you've worked with AI coding agents — Claude Code, Cursor, Codex, whatever — you've probably hit the same wall. These tools are incredibly powerful at writing code. But they have zero awareness of your project management setup. They don't know what you're supposed to be working on. They don't update tickets. They don't track time.

And context? I used to obsess over this. I even built a tool called Pantry specifically to give agents persistent memory across sessions. Notes, decisions, patterns — stored locally in SQLite with semantic search. It worked great for a while.

But here's the thing: Claude now has a 1M token context window. The agent's memory isn't really the bottleneck anymore. The bottleneck is human memory. When three different agents have been working on your project across multiple sessions, you need to know what happened. You need timestamps, you need comments, you need a board that reflects reality.

That's what Skate is for. Not agent memory — team memory.

How it actually works

Skate is a single static Go binary. No database, no daemon, no Docker container. It talks directly to the Mattermost Boards API using a personal access token.

Setup takes about 30 seconds:

skate init          # Enter your Mattermost URL and token
skate local-init    # Pick which board this project uses
skate setup claude-code  # Register the MCP server

That's it. Claude Code now has nine tools it can call: list boards, list tasks, view task details, update status, create tasks, add comments, start/stop timers, and log manual time.

Setting up Skate with Claude Code

Behind the scenes it's dead simple. Skate sends HTTP requests to your Mattermost instance with a Bearer token. The boards plugin handles permissions — if you can see the board in the browser, Skate can see it from the terminal.
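The shape of those requests is easy to sketch. Here's a minimal, hypothetical version of building one — the endpoint path follows the standard Focalboard plugin route, but treat it as illustrative rather than Skate's actual client code:

```go
package main

import (
	"fmt"
	"net/http"
)

// newBoardsRequest builds an authenticated GET request against a
// Mattermost Boards endpoint. The path is illustrative; the real client
// covers the full Boards REST surface.
func newBoardsRequest(baseURL, token, path string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+path, nil)
	if err != nil {
		return nil, err
	}
	// The personal access token rides along as a standard Bearer token;
	// the Boards plugin enforces the same permissions you have in the browser.
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Accept", "application/json")
	return req, nil
}

func main() {
	req, _ := newBoardsRequest("https://mm.example.com", "my-token",
		"/plugins/focalboard/api/v2/teams/0/boards")
	fmt.Println(req.Header.Get("Authorization")) // Bearer my-token
}
```

That's the entire authentication story: one header, no session state, which is why the binary can stay static and daemon-free.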

The output formats are flexible too. Default is a clean table for humans, but you can pipe --json or --yaml for scripting:

skate tasks --json | jq '.[] | select(.Priority | test("High"))'

The translation thing

Here's a feature I didn't plan but turned out to be surprisingly useful.

My team has members who occasionally write task descriptions in their native language. Not a problem in Mattermost — you just read it. But when an AI agent is parsing task content from the API, it needs English to do its best work.

So I added a translation middleware. It's optional — you enable it in the config with an OpenAI-compatible API (works with OpenAI, Ollama, OpenRouter, whatever). When Skate renders a task, it runs a quick heuristic: is this text mostly ASCII? If yes, skip it. If no, translate it. The agent sees clean English. The original stays untouched on the board.
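The heuristic itself fits in a few lines. This is a sketch of the idea, not Skate's exact implementation — the 0.9 threshold is an assumption for illustration:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// mostlyASCII decides whether a task description needs translation: if
// nearly all runes are plain ASCII, assume it is already English and
// skip the LLM call entirely.
func mostlyASCII(s string) bool {
	if s == "" {
		return true
	}
	ascii, total := 0, 0
	for _, r := range s {
		total++
		if r < utf8.RuneSelf { // below 0x80: single-byte UTF-8, i.e. ASCII
			ascii++
		}
	}
	return float64(ascii)/float64(total) >= 0.9
}

func main() {
	fmt.Println(mostlyASCII("Fix the flaky login test"))   // true -> skip translation
	fmt.Println(mostlyASCII("Починить нестабильный тест")) // false -> translate
}
```

The nice property is that the common case (English text) never touches the translation API, so there's no latency cost unless translation is actually needed.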

# ~/.config/skate.yaml
translate:
  enabled: true
  provider: ollama
  model: gpt-oss:latest
  base_url: http://localhost:11434/v1

It's one of those features that sounds trivial but makes a real difference when your team spans multiple countries.

What I learned building this with AI

Here's the meta part. I built Skate with the very workflow Skate enables. Once the basic CLI was working, I started using it to manage its own development. I created tasks on a Mattermost board, pointed Skate at it, and told Claude: "take the next task by priority."

And it worked. Claude would:

  1. Run skate tasks to see what's available
  2. Pick the highest priority item
  3. Read the full description with skate task <ID>
  4. Update status to "In Progress"
  5. Start a timer
  6. Do the actual work (write code, fix bugs, update docs)
  7. Leave a comment summarizing what was done
  8. Stop the timer
  9. Mark it complete
  10. Ask: "next task?"

Over 30 tasks got completed this way. Each one with timestamps, comments, and time tracking. When I looked at the board afterward, I could see exactly what happened, in what order, and how long each thing took.

The Mattermost board after a full AI coding session
Every task tracked, timed, and commented

That's the whole point. Not just getting work done — knowing what got done.

Recurring tasks: drag, drop, done again

One pattern that emerged naturally is recurring tasks. Updating the README, running tests, checking docs — these aren't one-and-done. Every time I add a feature or fix a bug, the README might need a tweak, tests need updating, docs might be stale.

Instead of creating a new task every time, I just drag the old one from "Completed" back to "Not Started." The agent picks it up, sees the description and all previous comments, understands the context, and does the work again. It checks the current state of the files, figures out what changed since last time, and updates accordingly.

This is surprisingly powerful. I have an "Update README" task that's been completed and reopened maybe six times during development. Each time the agent reads its own previous comments, sees what was added last time, and fills in whatever's new. Zero repetitive instructions from me. Just move the card and say "next task."

Same thing with tests. "Add/update tests" is a task that keeps coming back. The agent checks current coverage, sees what new code was added since last run, writes the missing tests, attaches the output. I don't have to explain what changed — it figures that out by looking at the code.

When the agent becomes a team member

Here's where it gets interesting. You're pair-programming with Claude and you stumble onto something — a bug, an idea, a refactor that needs to happen but not right now. In the old workflow, you'd switch to the browser, open the board, create a card, write up the description, come back to the terminal. Context switch, flow broken.

With Skate, you just say: "create a task for this." The agent runs skate create, fills in the title and description from the conversation you just had, and the card appears on the board. No browser, no context switch. The discovery goes straight from your head to the board in seconds.

But it goes further. For bigger tasks, the agent doesn't just dive in — it plans first. It researches the codebase, thinks through the approach, writes up a plan as a markdown file, and attaches it to the task card. Then it sets the status to "Blocked" and waits. Now here's the part I didn't expect: other team members can see that plan on the board. They can read it, leave comments, push back on decisions, suggest alternatives. The agent picks up the feedback, revises the plan, attaches the new version, and only starts coding after the approach is approved.

Think about what just happened. An AI agent proposed a technical plan, a human teammate reviewed it asynchronously on a kanban board, left feedback, and the agent incorporated it before writing a single line of code. That's not "developer talks to AI in a terminal." That's a team collaborating on work using a shared board — and one of the team members happens to be an AI.

This changes the dynamic completely. The agent isn't hidden inside one person's IDE anymore. Its work is visible: the plan is attached, the status is tracked, the time is logged, the comments document every decision. Anyone on the team can see what the agent is working on, how long it took, and why certain choices were made. When someone joins the project six months from now and asks "why was this built this way?" — the answer is on the card.

The skill file: one document to rule them all

All of this — the status updates, the timer tracking, the mentions, the plans, the content blocks — is governed by a single markdown file called SKILL.md. It's embedded in the Skate binary and installed alongside the MCP server when you run skate setup. The agent reads it once and follows it for the entire session.

And here's what caught me off guard: the agent follows these instructions better than any human would.

I'm serious. Think about the boring, repetitive stuff that falls through the cracks on every team. Updating the ticket status. Starting the timer. Stopping the timer with notes. Mentioning the right person. Attaching test output. Writing a summary comment before closing. Checking for related tasks before starting. Reading all previous comments on a reopened ticket. Every developer knows they should do these things. Most of us forget half of them by Tuesday.

The agent never forgets. It reads the skill file, and it does exactly what it says. Every single time. It updates the status before starting work. It starts the timer. It checks for attached files. It mentions the last commenter. It writes a summary. It stops the timer with notes. It attaches the plan. Not because it's motivated or disciplined — because it literally follows the instructions it was given, without ego, without shortcuts, without "I'll do it later."

One thing the skill file mandates is signatures. Every comment and timer note ends with a line like -- claude-code (claude-opus-4-6) or -- codex (gpt-5-codex). Sounds like a small thing until you have multiple agents working on the same board. I've had Claude Code working through tasks on one project while Codex handled a different board in parallel. When I check the board later, every comment is attributed: I can see exactly which agent did what, with which model, and in what order. No guessing, no "who wrote this?"

This matters more than you'd think. When three agents and two humans are contributing to the same project, the board becomes the single source of truth. Not someone's chat history, not a terminal log that got scrolled past — the board. Every decision, every plan, every status change is visible and attributed.

The skill file is maybe 130 lines of markdown. No code, no config parsing, no API integration — just clear instructions in plain language. And it turns a general-purpose AI model into a predictable, reliable team member that handles the operational overhead humans are terrible at.

Want the agent to behave differently? Edit the skill file. Want it to stop mentioning people? Add mentions: false to the config. Want it to always attach plans? It's already in there. The entire workflow is declarative, version-controlled, and transparent.

The boring technical bits (for those who care)

  • Language: Go. Single binary, no CGO, cross-compiles to Linux/macOS/Windows in one command.
  • MCP transport: stdio. Agents start Skate on demand, zero idle cost.
  • Config: YAML with three-layer merging (global → local per-project → env vars).
  • Dependencies: Cobra for CLI, official OpenAI Go SDK for translation, MCP Go SDK for agent integration.
  • Caching: User IDs get resolved to usernames and cached in ~/.cache/skate/users.yaml.
  • Version: Set at build time via ldflags, shared across CLI, HTTP User-Agent, and MCP server.

The whole thing is about 2,000 lines of Go. It does one thing and does it well.
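The three-layer config merge is the only piece with any subtlety, and even that is small. A hedged sketch of the precedence rule — later layers win, but only for values they actually set (keys and values here are illustrative, not Skate's real schema):

```go
package main

import "fmt"

// mergeConfig applies Skate-style precedence: global < per-project < env.
// A later layer overrides a key only when it provides a non-empty value.
func mergeConfig(layers ...map[string]string) map[string]string {
	out := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			if v != "" {
				out[k] = v
			}
		}
	}
	return out
}

func main() {
	global := map[string]string{"url": "https://mm.example.com", "board": ""}
	local := map[string]string{"board": "b123"}
	env := map[string]string{"url": "https://staging.example.com"}

	cfg := mergeConfig(global, local, env)
	fmt.Println(cfg["url"], cfg["board"]) // https://staging.example.com b123
}
```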

A note on the Boards plugin

I should mention: the Mattermost Boards instance I run isn't stock Focalboard. I've been maintaining a customized fork of the Boards plugin that adds a few things the original never shipped — most notably time tracking. The ability to start/stop timers on cards and log hours is what makes the whole "agent tracks its own time" loop possible. There are a few other minor tweaks too, things I needed for the way my team works that didn't justify an upstream PR but made a big difference day to day.

Skate is built against that plugin's API. If you're running vanilla Focalboard, everything still works — task listing, status updates, comments, attachments — you just won't have the time tracking endpoints. Skate handles this gracefully: if the timer API isn't there, it tells you and moves on. No crashes, no broken workflows.

Beyond Mattermost: Jira, Linear, and everything else

Here's what I keep thinking about: there's nothing about this approach that's specific to Mattermost Boards.

I'm not the only one who noticed the gap between AI agents and project management. Projects like Vibe Kanban (Rust, 13k+ stars) and Kanban Code (Swift) are tackling the same problem. They're impressive — Vibe Kanban gives you parallel agent orchestration with isolated git worktrees, visual code reviews, the works. Kanban Code auto-links Claude sessions to cards and flows them through the board as PRs get merged.

But here's the thing: they all create a new kanban board. A new UI, a new workflow, a new tool everyone on the team has to learn. If your team is already on Jira, or Mattermost, or Linear — you now have two boards. The AI board and the real board. And good luck getting your PM to check the AI board.

Skate takes the opposite approach. It doesn't replace your board — it connects to it. Your team keeps using the same Mattermost board they've always used, the same columns, the same workflow, the same mobile app. The agent just shows up as another participant. No migration, no training, no "can everyone please switch to this new tool." The board stays where it is. The agent meets the team where they already work.

The pattern is generic. You have an AI agent. You have a project management system with an API. You need a thin, stateless bridge between them — something that speaks MCP on one side and REST on the other. That's Skate. Today it talks to Mattermost Boards. But the same architecture — the same command set, the same MCP tool interface, the same skill file pattern — could talk to anything.
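In Go terms, the portable core is just an interface over those verbs. This is a hypothetical sketch of the pattern, not Skate's actual types — the point is that the MCP tool layer only needs these methods, so a Jira or Linear backend is one new implementation away:

```go
package main

import "fmt"

// Task carries the fields every PM tool shares.
type Task struct {
	ID, Title, Status string
}

// Board is the thin seam between the MCP tools and any REST backend.
type Board interface {
	ListTasks() ([]Task, error)
	UpdateStatus(id, status string) error
	AddComment(id, body string) error
}

// memBoard is a trivial in-memory implementation standing in for a real
// REST client (Mattermost, Jira, Linear, ...).
type memBoard struct{ tasks map[string]*Task }

func (b *memBoard) ListTasks() ([]Task, error) {
	out := []Task{}
	for _, t := range b.tasks {
		out = append(out, *t)
	}
	return out, nil
}

func (b *memBoard) UpdateStatus(id, status string) error {
	b.tasks[id].Status = status
	return nil
}

func (b *memBoard) AddComment(id, body string) error { return nil }

func main() {
	var board Board = &memBoard{tasks: map[string]*Task{
		"t1": {ID: "t1", Title: "listing tasks", Status: "Not Started"},
	}}
	board.UpdateStatus("t1", "In Progress")
	tasks, _ := board.ListTasks()
	fmt.Println(tasks[0].Status) // In Progress
}
```

Swap memBoard for a Jira client and nothing above the interface changes — the skill file, the MCP tools, and the agent workflow all stay put.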

Jira? Absolutely. Linear? Even easier, their API is great. GitHub Projects? Shortcut? Notion databases? All of them have the same fundamental objects: tasks, statuses, comments, assignees. The verbs are identical: list, create, update, comment, track time.

I think we're heading toward a world where every project management tool has an MCP adapter, and AI agents just treat "update the ticket" as a normal part of their workflow. Right now most teams have this weird gap where the agent does the work but a human has to manually record that it happened. That gap shouldn't exist.

The pattern: a thin MCP-to-REST bridge between any AI agent and any project management tool

Skate is my proof of concept for closing it. Mattermost Boards was the starting point because that's what I use, but the idea is bigger than any one tool. If you're running Jira and you want your Cursor agent to pick up tickets, update statuses, and log time — the blueprint is here. Fork it, swap the API client, keep the MCP tools. The hard part isn't the HTTP calls, it's designing the workflow so the agent knows how to be a good team member. That's what the skill file handles, and that part is completely portable.

Who is this for?

If you use Mattermost Boards (or the standalone Focalboard) and you work with AI coding agents, Skate removes the gap between your project board and your terminal. Your agents become first-class team members who update their own tickets and track their own time.

If your team does time tracking, you get automatic timer start/stop without anyone having to remember to click a button.

If you're a solo dev who just wants to tell Claude "work on the next thing" and have it actually know what that means — Skate is that bridge.

And if you're on a different PM tool entirely — read the code, steal the pattern. The future of AI-assisted development isn't just smarter agents. It's agents that participate in the same workflows humans use, with the same accountability and the same paper trail.

Try it

git clone https://github.com/mobydeck/skate
cd skate
make install
skate init

It's open source, it's a single binary, and it costs nothing to run. The hardest part is the nap.

Skate is open source at github.com/mobydeck/skate. Built with Go, powered by caffeine and post-nap clarity.