Some attendees at an AI Tinkerers meetup in early Feb were asking me what it's like to be the maintainer of a big OSS project where the community PRs are all AI slop. They thought it would make for a good blog post. I thought so too, at the time!
It turned out to be very, very, very hard to write down. It's in many ways the opposite of conventional wisdom for software maintainers, OSS or otherwise. So this post has given me over 2 months of writer's block. (I had to update that duration many times while writing this.)
Why is it so important for me to tell you how I deal with a storm of AI-generated PRs? Because I'm beginning to believe that my "vibe maintainer" workflow, crazy as it might sound, will be what a lot of you are doing before long. Everyone who works on successful OSS will soon have to deal with PR storms.

The Rising Tide
To give you a sense of the scale I work at, I'm cruising towards 50 contributor PRs a day, combined between Beads (20k stars, 5 months old) and Gas Town (13k stars, 3 months old). That's seven days a week; if I take a day off, they pile up and I may have to deal with 100 or more in a single day.
It's an enthusiastic community. We've had over 1000 unique contributors between the two repos, with over 4k PRs (2300+ merged), and over 15k commits, all in just a few months. And we have a great community with almost two thousand Gas Town users hanging out chatting on the Gas Town Hall Discord.
Through all that, my median time to resolution is about 15 hours, and few PRs wait more than a few days. This is high velocity. But I still manage to keep my quality bar high enough that both projects continue to exhibit strong growth. From the raw metrics it may look like I'm only merging 60% of PRs, but that's an artifact of fix-merging. One way or another, I actually merge about 88% of all incoming contributor PRs, and both projects are flourishing from it.
Beads in particular is well-integrated with the broader ecosystem — for instance, it now has solid integration with GitHub, GitLab, Linear, JIRA, Azure DevOps, Notion, and five self-hosted storage options. This is all encapsulated behind a rich plugin interface for backend engines, which each of those integrations implements. Nearly all of that came from community contributions.
It's safe to say people love Beads and Gas Town, even though mostly only agents have ever seen their source code. I certainly haven't. Why the fuck would I look at their code, I'm already pretty busy. No time for field trips.
I'm a very lazy person, and maintaining a popular OSS repo, let alone two of them, would simply not have been possible for me, like ever, up until maybe a year ago. I'm getting by with AI, that's the only way. As my PR volume has increased, I've been able to keep afloat through model and tool improvements, automating as much of the decision tree as I can.
Even with AI help, keeping up with community PRs takes me 15–20 hours a week, usually 2 to 3 hours a day. Sometimes much, much more.
I wish I could tell you it's easy work. I have managed to automate all the easy stuff, which is about half the PRs. I was recently inspired by Dane Poyzer's gt-toolkit package — a series of Gas Town formulas he published, which now has its own little user subcommunity. Dane's formulas help you run his ambitious, long-running idea-to-delivery workflow, geared toward comprehensive feature development and moving through mountains of work with his Gas Towns. My own PR workflow is now a formula as well.
Before we dive into the vibe maintainer workflow, let's revisit why it's needed at all. Am I not just bringing this on myself by allowing AI-assisted PRs in the first place?

Saying No to AI: The "Fork You" Problem
Since 99% of my incoming PR submissions are AI-generated, it stands to reason that I could reduce my workload by 99% by saying "No AI PRs." In that world, instead of doing all this crazy vibe maintainer stuff, I'd just wake up every morning, brew some coffee, shake my fist at the sky, browse HN, and maybe take the mule to town. The easy life.
Most OSS maintainers go this route. They straight-up forbid AI-generated or even AI-assisted pull requests. And I can understand why. The crap you see in AI-generated PRs these days can turn you into the Clint Eastwood angry-porch meme, practically overnight. Rather than deal with it, they outlaw it.
That of course triggers an arms race. You can get an AI-assisted PR accepted, but only if you sneak it in. So we hear comical stories of once-rejected PRs suddenly being accepted after they're resubmitted with all the AI DNA scrubbed from the crime scene, hee hee haw haw.
But that's the official party line for most OSS projects today: "No AI." It all has to be done by sneaking. The whole status quo in open-source is characterized by historic levels of silliness.
Here's the problem with that old-school approach. We are headed toward a world in which if you refuse enough PRs, the community will consider you a dead-end street and begin routing around you. They will copy your software, either by forking it or rewriting it from the ground up, and use their own mutated version from then on. They might even develop a community around their version.
Software survival is now about velocity: specifically, keeping up with what your users want. Even a permissively-licensed OSS project can be forked and lose a ton of users and mindshare if the project owners don't listen to a sufficiently large subcommunity of their happy users.
As just one of many easy examples, Roo Code is a big community that forked off Cline (itself a VS Code extension). And now people are forking Roo; I've seen some sizable ones taking shape. That's a sad list of at least three communities effectively fighting over a code base. They could have united, but none of them could keep up with what their own users wanted, so the forking continues.
That particular fork family all happened before coding agents came into their own in late 2025. It was hard back then. Now that everyone on earth has access to powerful coding agents, we will see way more forks. Forking used to be a declaration of war. Now it's simply a declaration that someone liked your software enough to want to change it, but you said No.
It's always been trivial to create a fork. The hard part has always been maintaining the fork. As the fork's community grows, so does the maintenance burden. It used to take a good-sized team to support a fork, which was almost like a rival gang. Nobody liked a fork. There was always bad blood. The XEmacs/Emacs rivalry was absolutely the stuff of legend. You wouldn't believe some of it.
It used to basically require corporate backing to have a credible fork of a large OSS project. Today, things are astoundingly different. With the coding agents of 2026, everyone who loves your software is a credible threat to forking you. Any grandma who wants to use your software for gardening could build a massive grandma subcommunity with your shit if you don't take her PRs. She might not even know she's done it. It's just there for the taking, yo, and you wouldn't flex.
There's nothing wrong with forks per se; that's how evolution will happen. But it's also a ton of duplication that might not have been necessary. A lot of people just want to add a small feature or fix to an existing software product, not clone the whole damn thing and maintain it for the rest of their lives. They want to share tokens and energy by pooling features and fixes together, as a community. So there are good reasons to avoid forking.
I do actively encourage forks where people are trying to take the code in a direction I just can't follow. The earliest big fork was beads_rust from Jeffrey Emanuel, who told me he was very embarrassed to be forking my code, until we chatted about it and I gave it my heartfelt blessing.
This was a situation where he wanted only the streamlined original code's behavior. Nothing else. And that's one thing I couldn't offer with my kitchen-sink approach to making my tool community- and AI-friendly. So I was happy he made that fork for those who need that streamlining.
If your software is popular, and you want to avoid forking (or rewriting), you need to build and foster your community. You need to let your system expand to accommodate the needs of as many users as practical.
People will ask more and more of your software, so you also have to decide where to draw your lines. You choose what belongs in your code and what to exclude, and any of those exclusions may go off to live in a fork somewhere to "compete" with you. Choose wisely!
The Gravitational-Well Finish Line
My own approach is radically different from how most OSS maintainers work: I say Yes to AI. Instead of rejecting AI submissions, I encourage everyone to use AI to submit their PRs (subject to a growing list of hygiene rules). Indeed, I both observe and expect 99% of incoming PRs to be AI-assisted.
Why? Because this empowers my users to turn their wishes into code and get it into the system. It keeps them from thinking about off-ramps. It keeps them from forking me, and unites them into a larger community, which means they all benefit from sharing rather than reimplementing. It's a net token savings, so software with strong communities will tend to survive.
Do some contributor PRs belong in a fork? Absolutely! I maintain high standards for what goes into the Beads and Gas Town core. I'll reject PRs for many reasons: too opinionated, too niche, or not pulling their tech-debt weight.
But if there's a germ of a good idea in there, I try hard to find it and cultivate it.
Instead of requiring perfect PRs from everyone, I aim to find a quick resolution that is satisfying to all parties. I accept most PRs, but still maintain hard lines on architecture, what goes in core, code quality, and many other AI-era design principles (e.g. ZFC). If I sent every PR back to the contributor for fixes, the rest of the community could lose out on an important fix or feature for days to weeks. And there it is, sitting in the PR; it just has issues.
In this situation, if you want to maximize throughput, then you may need to fix the contributor's code yourself before it can be merged. Most OSS maintainers say, "Go fix your code." I try my best to fix it myself and get it merged. There's an art to this that I'll discuss below.
My core philosophy is, help contributors get to the finish line. I optimize for community throughput. I review every PR and try to find the value in it, and have my worker agents do something appropriate for each one.
The PR Sheriff
My sheriff workflow consists of runs; in each run I try to resolve every open PR, though I don't always manage it. A run kicks off automatically every time I restart my designated sheriff crew members in my Gas Town, because I place a "sheriff bead" on their hook, so they notice it when they wake up. I can also start runs manually.
I discovered today, after many months of using this workflow with my Gas Town crew members, that the Mayor is actually way better at it. Like, far, far better, it's crazy how much better it is. The Mayor, acting as PR Sheriff, takes a more holistic view, makes better decisions, and makes better use of Gas Town resources to get the PRs reviewed, fix-merged, and escalated in parallel.

This was big news. It means I can shrink my Gas Town crew down to maybe 3 agents per rig (from 8 apiece), which will be huge for memory pressure. Huge for running it in parallel with Gas City while I cut over. More on that later.
Anyway, the workflow begins with the sheriff pulling descriptions of all open PRs, and categorizing them into easy-wins, fix-merge, and needs-review.
Easy wins are things like targeted bug fixes, doc updates, dependency bot auto-upgrades, and automatically closing drafts, PRs from banned contributors, etc. These are handled automatically every 2 hours with a patrol, which contributes to my sub-day median turnaround time. And they're handled automatically during a PR sheriff run.
The first fix-merge candidates are easy wins that are broken for some reason — they fail CI, they need a rebase, or they have a simple error in them, but aside from that, they fit all the easy-win criteria. The sheriff may decide to auto-fix-merge those. My Mayor decided to sling them to polecats, which was nice.
Needs-review is any PR that looks suspicious for some reason, so an agent will have to suss it out, do a deep dive, and produce a report. These can be farmed out to crew members or polecats, as the instructions are usually pretty simple. The reports can be handled however you like, e.g. having the sheriff summarize them for you. Sometimes I go directly to the agents' tmux sessions and read their reports.
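To make the first-pass bucketing concrete, here's a minimal sketch of the kind of decision tree involved. Everything here is my own invention for illustration — the field names, thresholds, and `triage` function are hypothetical, and the real sheriff works from full PR descriptions and diffs rather than a fixed schema:

```python
from dataclasses import dataclass

# Hypothetical per-PR signals; a real sheriff agent reads the whole PR.
@dataclass
class PRSignals:
    is_draft: bool
    author_banned: bool
    is_dependency_bump: bool
    docs_only: bool
    ci_passing: bool
    needs_rebase: bool
    lines_changed: int

def triage(pr: PRSignals) -> str:
    """First-pass bucketing into easy-win / fix-merge / needs-review."""
    # Drafts and banned contributors are closed automatically.
    if pr.is_draft or pr.author_banned:
        return "auto-close"
    # Easy-win "shape": small, targeted, or mechanical changes.
    easy_shape = pr.is_dependency_bump or pr.docs_only or pr.lines_changed < 50
    if easy_shape and pr.ci_passing and not pr.needs_rebase:
        return "easy-win"
    # Easy-win shape, but broken CI or a stale branch: fix it, then merge.
    if easy_shape:
        return "fix-merge"
    # Anything substantial or suspicious gets a deep-dive report.
    return "needs-review"
```

The point of the sketch is the ordering: cheap mechanical rejections first, then the fast-path merges, with the expensive agent deep dives reserved for whatever's left.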
From needs-review, we get a set of possible recommendations:
- Easy win. Oops, it turned out to fit the easy-win criteria after all. Happens sometimes.
- Merge. Agent recommends to human that we merge this PR. It may be small or big, but it is well-tested, broadly useful, well-documented, and good to go.
- Merge-fix. It's mergeable but we need to fix some issues afterward. But it's OK to do it in a follow-up commit. We merge the PR as-is, then push a follow-up fix to main.
- Fix-merge. It's pretty busted, so we're going to pull it locally, make a bunch of changes, and then we'll push it; you will see the contributor attribution in the CHANGELOG.
- Cherry-pick. The PR contains M items (features and/or fixes) and we only want N < M of them. We cherry-pick the N things locally, fix them as needed, and commit them with attribution. We close the PR, effectively throwing the rest away, with an explanation.
- Split-merge. The PR contains M items, but they're separate concerns, and really should all have been in separate PRs. We pull it locally, split them into separate commits, and push them all with attribution to the original contributor.
- Reimplement. The PR is essentially rejected, perhaps because I don't like its design. But it was trying to solve some fundamental problem. So we see if we can find a better design, and if so, we implement it that way. We then close the PR thanking them and letting them know how we solved it.
- Retire. This PR is obsolete; it may have been superseded by another PR (often from the same author, interestingly), or fixed by some other mechanism. Close it with a thank-you.
- Reject. This may be a feature that does not pay its weight in tech debt, or one that is too niche to include in the core. Or it might be a design that does not fit my standards. Close the PR with a polite note to the sender.
- Request changes. Last resort. This can lead to contributor starvation, so there's almost never a good reason to do this, but I do use it occasionally.
There are a few other possible outcomes, such as re-routing the PR to the right project, banning the contributor, etc. But this is a pretty good starter list.
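Several of these outcomes (fix-merge, cherry-pick, split-merge) land the contributor's work via commits the maintainer pushes, so attribution has to travel with the commit itself. One standard mechanism for that is Git's Co-authored-by commit trailer, which GitHub recognizes and displays. A minimal sketch — the helper function, PR number, and names are all hypothetical:

```python
def fix_merge_message(summary: str, pr_number: int,
                      author_name: str, author_email: str) -> str:
    """Build a commit message that credits the original contributor.

    GitHub recognizes the Co-authored-by trailer and shows the
    contributor on the commit, so attribution survives even when the
    maintainer reworks the patch locally instead of merging the PR as-is.
    """
    return (
        f"{summary} (#{pr_number})\n"
        "\n"
        "Reworked from the original PR; see CHANGELOG for details.\n"
        "\n"
        f"Co-authored-by: {author_name} <{author_email}>\n"
    )

msg = fix_merge_message("Fix race in sync backend", 1234,
                        "Jane Contributor", "jane@example.com")
```

The trailer goes at the very end of the message, one line per co-author, which is why it's built last here.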

Notice that the first resort for almost every OSS maintainer, which is to send your PR back requesting changes, is the last resort in my vibe maintainer workflow. It ranks even lower than rejection, which itself is very serious, because rejection can lead to forking. And it's cumulative. The more PRs you reject, the higher the chance of someone getting fed up with you.
Requesting changes is the last resort because it quickly leads to contributor starvation; if you keep making them rebase, the sheer velocity of the project can keep their PR from landing for weeks, until you take steps to help it along. So you might as well help it right from the start. Don't send it back for changes.
If there is good in the PR, then you should absorb that good into your code base, right there and then, rejecting anything you don't like, and transforming the parts you're absorbing. You can make bug fixes, architectural fixes, change the naming, make it a plugin, mix and match.
Most of the time, it's Claude telling you, hey, this PR is mostly healthy but it's missing a kidney, and you say, please add the kidney.
But sometimes, Claude looks at a PR that adds a face-hugging alien to each worker, sizes it up, and says, "This PR is well-constructed, and the alien is robustly hugged to the agent's face, with good test coverage and updates to all relevant work formulas."
And you say, Claude, it's a fucking face-hugging alien. And Claude says, oh right, that's a very good point, we probably don't want that, shall I close it with a polite note?
And that's why the last 25% or so of pull requests need human review. At least, so far. It's because there's still a thing called taste that current models can't be trusted with. Not yet.

PR Hygiene
I've instituted some lightweight hygiene rules for contributors. I don't enforce them yet, and I've only announced them in a few places. I'm working to get them baked into the CONTRIBUTOR.md files and other important locations.
Here are some examples of hygiene rules for my repos:
- Cross-project pollution: Beads must not know about Gas Town. Do not put Gas Town concepts into Beads. Gas Town doesn't know about Gas City, and the Wasteland doesn't know about any of them.
- Zero Framework Cognition: Read it, learn it, live it. I wrote a blog post about it. I mean it.
- Use plugins whenever possible. Do not put stuff in core if there is a way to do it with integrations/extensions/plugins.
- Don't submit drafts. I'll just close them.
- One concern per PR. Split up large PRs.
- Remove all unnecessary files. Be minimalist; make the fewest changes possible.
- Rebase. Don't use an old fork; rebase right before you submit the PR.
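A few of these rules are cheap to detect mechanically, even before the models handle the rest. Here's an illustrative sketch — the `hygiene_violations` helper and its heuristics are my own invention, not anything the repos actually run, and real enforcement is done by agents reading the PR rather than by string matching:

```python
def hygiene_violations(changed_files: list[str], diff_text: str,
                       repo: str) -> list[str]:
    """Flag hygiene-rule violations that are cheap to detect mechanically."""
    problems = []
    # Cross-project pollution: Beads must not mention Gas Town concepts.
    if repo == "beads" and "gastown" in diff_text.lower():
        problems.append("cross-project pollution: Gas Town concept in Beads")
    # Minimalism: flag obviously unnecessary files.
    junk = (".DS_Store", "node_modules/", "__pycache__/")
    for f in changed_files:
        if any(j in f for j in junk):
            problems.append(f"unnecessary file: {f}")
    # One concern per PR: a crude proxy is how many top-level areas it touches.
    areas = {f.split("/")[0] for f in changed_files}
    if len(areas) > 3:
        problems.append("possibly multiple concerns: touches "
                        + ", ".join(sorted(areas)))
    return problems
```

A clean docs-only PR produces an empty list; a PR that sneaks Gas Town concepts plus editor droppings into Beads gets flagged twice. Checks like these make good patrol fodder precisely because they need no judgment.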
Right now, my policy is to fix all these things myself and just complain about them when I close the PR. As the models get smarter, more of this can be handled automatically. But in general, I think people who abuse this might start getting banned, since they're basically pushing the QA for their PRs (and the associated costs of fixing them) onto me.
Summing Up
That's a pretty good overview of the PR workflow. I know I said it was hard to write down. It was, dammit.
When you get down to the last 5–10% of the PRs, they're usually hefty features, and you may need to spend a lot of time digging into them and figuring out whether you really want to pull them in. You may ask the agent a ton of questions about each PR, ask it to consider alternatives, or just tell it you don't see the point. If the agent can't justify the PR, then it's probably not worth taking yet.
But that last 5% to 10% is the part that takes hours a day. There's always a list of PRs that are just right on the edge and need my judgment call, though I wish it weren't so. It is definitely getting easier now that Gas Town is becoming essentially feature-complete, due to the launch of Gas City. So I auto-close any large feature PRs and redirect the contributors to Gas City.
Summing it all up, being a vibe maintainer means trying to absorb the good parts from every PR, and there are various techniques for approaching it, depending on what's wrong with the PR. It means only rejecting PRs with no alternative as a last resort, since each rejection increases the chances that someone's going to fork you.
Most of all, it means helping contributors get to the finish line, with attribution, in whatever way you deem best for your projects and repos. Maximize community throughput, and you'll have a happy and thriving community. In fact, I'm pleased to report that Beads crossed from 19.9k to 20k stars on GitHub as I was finishing this draft tonight.
My main success metric is whether my users are happy, and so far it's looking pretty good!
Speaking of Gas City
Gas City went to alpha last week, and aims to be generally available later in April. What's Gas City, you ask?
Gas City is a ground-up rewrite of Gas Town from first principles, using the MEOW stack (i.e., Beads and Dolt, the fundamental substrate of Gas Town). It is very nearly a proper superset of Gas Town's features, and can be used as a drop-in replacement for Gas Town — except Gas City is also an orchestrator-builder.
Gas Town is a "pack" within Gas City, a fully declarative bundle of prompts and skills, no code at all — it still has the iconic original Mayor, Deacon, Dogs, Polecats, Witness, Refinery, and Crew, with their various hooks, inboxes, skills, prompts, and sandboxes. But there's no dedicated code — unlike in the OG Gas Town, which is a large kitchen-sink binary.
Gas City was built by my buddies Julian Knutsen and Chris Sells, and as far as I can tell, it is exactly what I envisioned and outlined to them when they first suggested tackling it. They did a bang-up job. So good that Gas Town itself, the binary, is I think not long for this earth. Only its shape, the original characters, and all the individual features we loved about Gas Town remain — all available as LEGO-like pieces for creating your own agent orchestration shapes.
Beads, the original, powered fully by Dolt (Git for structured data), will live on as version 1.0. The MEOW stack from Gas Town, including formulas, molecules, hooks, the GUPP engine, and Nondeterministic Idempotence (NDI) are all implemented in Beads and Dolt. That means that all work is decomposed into durable, version-controlled, SQL-queryable orchestration steps.
That's all stuff that your orchestrator doesn't have, especially if you're using something Claw-based…unless you're also using Beads, or else you reimplemented the whole damn MEOW stack, including Dolt, which would just be incredibly foolish of you.
I think the MEOW stack and Dolt give both Gas Town and Gas City a gargantuan advantage over anyone using postgres and snapshots, or git-lfs, or really anything other than Dolt or Datomic for their agentic memory system. If you're using Datomic, bravo. You've got a nice system with everything agents need for world-class forensics and mistake-recovery in production. I just prefer Dolt because it uses Git as its protocol. Both are great.
If you think of Gas Town as a Dark Factory (it is!), then Gas City is a Dark Factory Factory. I am putting my money where my mouth is, and diving in to start orchestrating my own game's production systems using Gas City as my new SREs. I want to live this a bit before I evangelize it further.
Stay tuned for several upcoming announcements and new blog posts from me. I've been sitting on quite a backlog while I was trying to squeeze this one out. Unnnggghhh. I've got a post about dark factories coming up, including some insights into how to make a coding agent into the perfect dark factory worker — something I think coding agents will all need soon just to survive. But as a dark factory user, I'm biased, so who knows.
I've mentioned the Wasteland, and there's more coming there. There has been a bunch of work behind the scenes, and it's an important building block. Our two thousand Discord users are basically an army. We're sitting on that army and it's restless. I'm headed to Portland in the morning for a meeting of our generals, called by Chris Sells, and one of the topics for discussion is how best to put that army to use. Fun times.
I've also got upcoming announcements about Gas Town and Beads, both of which are headed to v1.0 very soon. And a blog post in the form of a totally untrue made-up fantasy-horror campfire story about a fake monster called the SaaS-Eater, whose wings shall beat a mighty SaaS Hurricane, whose wind will de-SaaS companies and save them millions of storytale coins, which is all of course utter hogwash, merely an amusing fiction for scaring small children and investors and the general public. But I'm a horror fan, so we'll see. Maybe an army could beat it. Or create it. Now that my Vibe Maintainer post is out, I can finally get to that blog backlog, and try my hand at some proper fiction.
See you next time. And to all of you who, one way or another, manage to read everything I write — thank you! When I hear those stories it really does help with the writer's block. I hope you're all having fun with agents and orchestrators. I'm absolutely having the time of my life. More posts coming soon!
Don't forget to come visit our Discord at gastownhall.ai!
