Three years ago, we ran 21 microservices. Each with its own repo, pipeline, database, and "owner." Each supposedly enabling "team autonomy" and "rapid iteration."
Now we have one application. Well-structured, modular, and shipping features faster than we ever did with microservices.
The experiment is over. We have the data. Microservices failed for most of us. But something better emerged from the wreckage.

The Promise vs. The Reality
They promised:
- Independent deployments
- Team autonomy
- Technology flexibility
- Better scalability
We got:
- Deployment coordination nightmares
- Teams blocked by other teams
- A zoo of technologies nobody could maintain
- Complexity that killed velocity
The promise wasn't a lie. It just had a prerequisite nobody mentioned: you need to be Google.
What Actually Failed
The "Independent Deployment" Myth
# What we were promised:
Team A deploys → users happy
Team B deploys → users happy
Teams work independently!
# What actually happened:
Team A deploys → breaks Team B's service
Team B deploys → incompatible with Team C
Team C deploys → waiting for Team A's migration
Teams coordinate deployments for 3 days
Services weren't independent. They were tightly coupled, just across network boundaries instead of function calls. We traded compile-time errors for runtime failures.
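You can see the trade in a few lines. A minimal sketch, assuming the module layout described later in this post (orders.api, create_order, and payload are illustrative names):
# Monolith: breakage surfaces before anything ships.
from orders.api import create_order  # ImportError the moment it's renamed
order = create_order(user_id=42, items=[])  # TypeError if the signature changed
# Microservices: the same breakage surfaces in production,
# as a 404 or 500 from a service somebody else deployed.
resp = await http_client.post("orders-service/api/orders", json=payload)
resp.raise_for_status()  # your first hint that something changed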
The Cost Nobody Calculated
We tracked it for six months:
Infrastructure:
- Monolith: $1,200/month (3 servers)
- Microservices: $9,800/month (47 services, monitoring, service mesh)
Engineering time:
- Monolith: 10% on deployment/ops
- Microservices: 45% on service coordination, deployment orchestration, debugging distributed systems
Time to ship features:
- Monolith: 2 weeks average
- Microservices: 6 weeks average (3 services need updates, coordination overhead)
We paid 8x more to ship 3x slower.
The Debugging Hell
Debugging a monolith:
- User reports error
- Check logs
- Find the bug
- Fix it
- Deploy
Total time: 2 hours
Debugging microservices:
- User reports error
- Which service failed? (Check 12 different log systems)
- Find the request trace (if correlation IDs were propagated correctly)
- Discover service A called service B, which called service C, which timed out
- Service C was restarting because of a deployment
- Who deployed service C? (Check Slack, nobody responds)
- Rollback service C? (Need approval from team that owns it)
- Schedule meeting for tomorrow
Total time: 3 days
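Step 3 is where distributed debugging usually dies: the trace exists only if every service faithfully forwards the correlation ID on every outbound call. A hedged sketch of what each hop must do (the header name and http_client are illustrative):
# Every service, every outbound call. One forgotten hop and the trace goes dark.
import logging
import uuid
log = logging.getLogger(__name__)
def handle_request(request):
    cid = request.headers.get("X-Correlation-ID", str(uuid.uuid4()))
    log.info("handling request", extra={"correlation_id": cid})
    # The ID must be re-attached by hand on every call to the next service:
    return http_client.post(
        "http://service-b/api/step",
        headers={"X-Correlation-ID": cid},
        json=request.payload,
    )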
What Emerged From the Wreckage
After three years of pain, patterns emerged. Teams that succeeded didn't use pure microservices. They used something else.
The Modular Monolith
/src
  /orders
    /domain
    /api
    /repository
  /users
    /domain
    /api
    /repository
  /payments
    /domain
    /api
    /repository
# One deployment
# Clear boundaries
# Enforced dependencies
# No network calls
All the organization of microservices. None of the distribution penalty.
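Boundaries only hold if something enforces them. One approach, as a minimal sketch: a CI script that fails when one module imports another module's internals instead of its api package (module names match the tree above; the script is illustrative, and tools like import-linter cover the same ground):
# check_boundaries.py -- fail CI when modules reach into each other's internals.
import ast
import pathlib
import sys
MODULES = {"orders", "users", "payments"}
def violations(src_root="src"):
    for path in pathlib.Path(src_root).rglob("*.py"):
        owner = path.relative_to(src_root).parts[0]
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                parts = node.module.split(".")
                # Cross-module imports are allowed only via the target's api package.
                if parts[0] in MODULES and parts[0] != owner and parts[1:2] != ["api"]:
                    yield f"{path}:{node.lineno}: {owner} imports {node.module}"
if __name__ == "__main__":
    found = list(violations())
    print("\n".join(found))
    sys.exit(1 if found else 0)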
The Service Per Team (Not Per Domain)
Companies that succeeded with microservices had one pattern: services matched teams, not domains.
Failed approach:
- User Service (3 teams contribute)
- Order Service (2 teams contribute)
- Payment Service (4 teams contribute)
Result: Coordination nightmare
Working approach:
- Checkout Service (owned by Checkout Team, does users + orders + payments for checkout)
- Admin Service (owned by Admin Team, complete admin experience)
- Mobile API Service (owned by Mobile Team, exactly what mobile needs)
Result: Teams actually independent
The Database-First Architecture
The biggest lie: "Each service needs its own database."
What actually works:
- Shared database for tightly coupled data
- Separate database only when truly independent
- Database boundaries match deployment boundaries
-- One database, one schema per service
orders_service.orders
orders_service.order_items
users_service.users
users_service.addresses
-- All in one database
-- But clear ownership boundaries
Network calls are expensive. Database queries are cheap. Stop fighting this reality.
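Concretely, what used to be two HTTP calls and a client-side merge becomes a single query. A sketch using psycopg2 against the schema layout above (the column names and connection string are assumptions):
# One in-database join replaces an orders-service -> users-service round trip.
import psycopg2
conn = psycopg2.connect("dbname=app")  # illustrative connection string
with conn.cursor() as cur:
    cur.execute("""
        SELECT o.id, o.total, u.email
        FROM orders_service.orders AS o
        JOIN users_service.users AS u ON u.id = o.user_id
    """)
    for order_id, total, email in cur.fetchall():
        print(order_id, total, email)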
The Teams That Never Left
Stack Overflow: Monolith. Billions of requests. Works fine.
Shopify: Started microservices, moved to modular monolith. Handles Black Friday.
Basecamp: Monolith for 20+ years. Still shipping fast.
GitHub: Mostly monolith. Billions in revenue.
These companies aren't small. They're not "too simple" for microservices. They made a choice: optimize for developer velocity, not architecture trends.
The New Pattern Language
The successful teams converged on similar patterns:
1. Modules, Not Services
Strong module boundaries in code. Extract to a service only when you have evidence you need to.
# Good: Modules in monolith
from orders.domain import OrderService
from payments.domain import PaymentService
# Bad: Microservices by default
await http_client.post("orders-service/api/orders")
await http_client.post("payments-service/api/payments")2. Vertical Slices Over Horizontal Layers
Each team owns a complete slice of functionality, not a layer.
Failed: Frontend team, Backend team, Data team (need all 3 for any feature)
Works: Checkout team (owns frontend, backend, data for checkout)
3. Shared Database, Clear Ownership
Tables have owners. Other teams can read, but only the owner can write. Enforced by code review, not network boundaries.
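In code, the convention can be as simple as what a module chooses to export. A hedged sketch, reusing the module layout from earlier (all names illustrative):
# users/api.py -- the only thing other modules may import from users/.
class UserReader:
    """Read-only view over users_service.users; safe for any module to use."""
    def __init__(self, conn):
        self._conn = conn
    def get_email(self, user_id):
        with self._conn.cursor() as cur:
            cur.execute("SELECT email FROM users_service.users WHERE id = %s", (user_id,))
            row = cur.fetchone()
            return row[0] if row else None
# Writes live in users/repository.py, imported only inside users/ --
# a rule that code review (or the boundary check above) enforces.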
4. Monorepo Wins
The successful teams: monorepo. Easy refactoring. Atomic changes. Shared tooling.
The struggling teams: 47 repos. Versioning nightmares. Deployment coordination. Nobody knows where anything is.
5. Extract When You Have Data
Don't start with microservices. Start with a monolith. Extract services when you have evidence:
- This service needs different scaling characteristics
- This team can truly work independently
- The domain is genuinely bounded
Not before. Premature distribution is expensive.
The Migration Path
We didn't flip a switch. We migrated gradually:
Month 1–2: Merge highly coupled services
- Orders + OrderItems + OrderHistory → Orders module
Month 3–4: Consolidate databases
- 12 databases → 3 logical databases (but actually just schemas)
Month 5–6: Simplify deployment
- 47 pipelines → 1 pipeline with component testing
Month 7–8: Remove service mesh
- Istio, service discovery, circuit breakers → gone
- Function calls replace HTTP calls
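That last step is where most of the accidental complexity disappeared. A before/after sketch (the payments names and retry helper are hypothetical):
# Before: every cross-service call carried timeout and retry ceremony.
try:
    resp = await http_client.post(
        "http://payments-service/api/charge", json=payload, timeout=2.0
    )
    resp.raise_for_status()
    result = resp.json()
except (TimeoutError, ConnectionError):
    result = await retry_with_backoff(payload)  # hypothetical helper
# After: an in-process call. Failures are plain exceptions with stack traces.
from payments.api import charge
result = charge(payload)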
Results:
- Deployment time: 45 min → 5 min
- Incident response: 4 hours average → 30 minutes
- Feature velocity: +180%
- Infrastructure cost: -75%
- Developer happiness: +300%
Nobody misses the microservices. Not one person.
What We Learned
Start Boring
Monolith. PostgreSQL. Simple deployment. Add complexity only when you have evidence you need it.
Team Size Matters
- 5–20 engineers? Monolith
- 20–100 engineers? Modular monolith
- 100–500 engineers? Maybe a few services (3–5)
- 500+ engineers? Netflix-style microservices might make sense
Distribution Is Expensive
Every network call is a potential failure point. Every service boundary is coordination overhead. Only pay this cost when you get value from it.
Developer Experience Beats Architecture
Fast feedback loops, easy debugging, quick deployments — these matter more than architectural purity.
The Bottom Line
Microservices weren't wrong for everyone. They were wrong for most teams at most stages.
The experiment taught us what actually works:
- Strong module boundaries without network overhead
- Teams that own vertical slices
- Shared databases with clear ownership
- Simple deployments with fast feedback
- Extract to services only with evidence
This isn't "going back" to monoliths. This is moving forward with the lessons learned from the microservices experiment.
Microservices taught us what we needed: boundaries, ownership, and team autonomy. We just don't need distribution to get those things.
The future isn't microservices. It's well-structured monoliths with the option to extract when needed.
Simple. Fast. Effective.
The experiment is over. We know what works now.