All the important metrics spiked. It felt like a betrayal and a lesson all at once. If you want fewer race conditions, safer deployments, and faster user interfaces, keep reading.
Introduction
The build failed in production and it did so in a way that looked like normal traffic. One hundred users made the same request at the same second. The UI stalled. The server logged duplicate writes. The notification bot sent the same email twice.
That was the exact moment the team stopped celebrating the new architecture and started asking the one honest question: what did we miss? This essay is that answer. Read it like a postmortem done by a teammate who wants the same mess avoided.
Why go fully reactive
Reactive architecture brings three real promises:
- Predictable asynchronous flows. Streams replace scattered callbacks and nested promises.
- Fewer race conditions. Operators like switchMap and exhaustMap express intent.
- Cleaner component templates. The async pipe keeps components declarative and side-effect free.
This project needed those promises. The codebase had tangled service calls, manual subscription cleanup, and UI flicker under load. The theoretical gains were real. The path was sound. The execution required discipline.
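The race-condition promise is easiest to see in miniature. Below is a plain-TypeScript sketch, with no RxJS involved, of the "latest wins" behavior that switchMap expresses declaratively; the names (search, main) and the token mechanism are illustrative only, not code from this project.

```typescript
// A "latest wins" race, modeled with a request token — the intent that
// switchMap captures in one operator. All names here are illustrative.
let latest = 0;
const results: string[] = [];

async function search(query: string) {
  const token = ++latest;       // this call is now the newest
  await Promise.resolve();      // stands in for a network round trip
  if (token !== latest) return; // a newer call superseded us: drop result
  results.push(query);
}

async function main() {
  // three rapid keystrokes; only the last response should win
  await Promise.all([search('a'), search('ab'), search('abc')]);
  console.log(results); // [ 'abc' ]
}
main();
```

Written imperatively, this bookkeeping spreads across every call site; with switchMap it is a single operator in the pipe.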
What broke — the concrete failures
These are the real defects that appeared after the rewrite, ranked by severity.
1) Duplicate network calls and duplicate side effects
Symptom: Two identical POST requests fired for a single user action.
Root cause: Multiple subscriptions to a cold observable in a shared service. The service returned http.get() directly instead of sharing the observable. Each component subscribed separately, and operators caused repeated side effects.
2) Stale UI due to premature completion
Symptom: UI showed old data when navigation happened fast.
Root cause: Use of take(1) and first() in places where a persistent stream was required. The stream completed and the UI never received updates.
3) Memory leak from forgotten subscriptions
Symptom: Heap grew over time on long sessions.
Root cause: Direct subscribe() calls in components without proper teardown or untilDestroyed pattern. Some Observables were hot and emitted forever.
4) Change detection thrash
Symptom: UI felt janky under moderate load.
Root cause: Heavy BehaviorSubject writes from frequent events and use of ChangeDetectorRef.detectChanges() sprinkled liberally. No throttling or microtask bundling.
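To make the thrash concrete, here is a back-of-the-envelope model of what a throttling window buys. auditWindowCount is a hypothetical helper, not an RxJS API; it approximates how an auditTime(100)-style window collapses a burst of events into a handful of change-detection passes.

```typescript
// Approximate how many UI updates survive a fixed audit window.
// auditWindowCount is illustrative only — it models the effect of
// auditTime(windowMs) on a burst of event timestamps (in ms).
function auditWindowCount(timestamps: number[], windowMs: number): number {
  let emissions = 0;
  let windowEnd = -Infinity;
  for (const t of timestamps) {
    if (t >= windowEnd) {
      // this event opens a new audit window; one emission per window
      emissions++;
      windowEnd = t + windowMs;
    }
  }
  return emissions;
}

// 50 events arriving 10 ms apart collapse to 5 change-detection passes
const burst = Array.from({ length: 50 }, (_, i) => i * 10);
console.log(auditWindowCount(burst, 100)); // 5
```

A 10x reduction in detectChanges() calls under burst load is exactly the kind of win that removed the jank described above.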
5) Bad error handling and silent retries
Symptom: Silent failures with retries hammered backend.
Root cause: retryWhen configured globally with poor backoff. Retries retriggered side-effectful streams, causing duplicated work.
Three short code fixes that removed the worst bugs
For each: problem, change, result.
Fix A — share the HTTP observable to avoid duplicate calls
Problem: Service returned cold http.get directly causing multiple network calls.
Change — service code
```typescript
// user.service.ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { shareReplay } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class UserService {
  constructor(private http: HttpClient) {}

  getProfile(): Observable<User> {
    return this.http.get<User>('/api/profile').pipe(
      shareReplay({ bufferSize: 1, refCount: true })
    );
  }
}
```
Result: a single network call per profile load across multiple subscribers. Side effects are no longer duplicated.
Why it works: shareReplay converts a cold observable into a shared one and replays the last value to new subscribers. refCount: true lets it tear down when nobody is listening.
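If the RxJS machinery feels opaque, the same sharing idea can be sketched with plain Promises. fetchProfile, cached, and callCount below are illustrative stand-ins, not part of the real service; the point is that all subscribers reuse one in-flight request.

```typescript
// A plain-Promise analogue of the shareReplay fix (a sketch).
let callCount = 0;
function fetchProfile(): Promise<string> {
  callCount++; // stands in for the real HTTP request
  return Promise.resolve('profile-data');
}

let cached: Promise<string> | null = null;
function getProfile(): Promise<string> {
  if (!cached) cached = fetchProfile(); // share one in-flight request
  return cached;
}

// Three "components" subscribe; only one underlying call fires.
Promise.all([getProfile(), getProfile(), getProfile()]).then(() => {
  console.log(callCount); // 1
});
```

The bug in the original service was the missing cache line: every subscriber got a fresh cold observable, so every subscriber triggered its own request.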
Fix B — avoid premature completion and keep the stream alive
Problem: Using take(1) for a stream that needed to push updates later.
Change — component code
```typescript
// profile.component.ts
profile$ = this.user.getProfile(); // do not use take(1)

ngOnInit() {
  // template uses async pipe: <div *ngIf="profile$ | async as p">
}
```
Alternative when caching is required:
```typescript
// in service: cache the profile but keep the stream hot
import { BehaviorSubject } from 'rxjs';
import { filter } from 'rxjs/operators';

private profile$ = new BehaviorSubject<User | null>(null);

loadProfile() {
  this.http.get<User>('/api/profile').subscribe(u => this.profile$.next(u));
}

getProfile() {
  return this.profile$.asObservable().pipe(
    filter((u): u is User => u !== null)
  );
}
```
Result: the UI stays in sync when the profile changes. No accidental completion.
Why it works: Streams that complete remove downstream listeners. Keep a long-lived stream for live data.
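The difference between a completing stream and a hot one can be shown without RxJS at all. MiniSubject below is a deliberately tiny illustrative class, not the real BehaviorSubject; it only demonstrates why a never-completing stream keeps late updates flowing.

```typescript
// A minimal hot-stream sketch (illustrative, not RxJS): subscribers get
// the current value immediately and keep receiving updates, because the
// stream never completes and never drops its listeners.
class MiniSubject<T> {
  private listeners: Array<(v: T) => void> = [];
  constructor(private value: T) {}
  next(v: T) { this.value = v; this.listeners.forEach(l => l(v)); }
  subscribe(l: (v: T) => void) { l(this.value); this.listeners.push(l); }
}

const profile = new MiniSubject<string>('anon');
const seen: string[] = [];
profile.subscribe(v => seen.push(v)); // receives current value at once
profile.next('alice');                // live update is still delivered
console.log(seen); // [ 'anon', 'alice' ]
```

A take(1) on top of this would have delivered 'anon' and then detached the listener, which is exactly the stale-UI defect described above.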
Fix C — centralized error handling and safe retry
Problem: Global retryWhen retried side effects and leaked load.
Change — operator-level retry with backoff and idempotency check
```typescript
// retry-with-backoff.ts
import { Observable, throwError, timer } from 'rxjs';
import { mergeMap } from 'rxjs/operators';

export function retryWithBackoff(max = 3, delayMs = 500) {
  return (errors$: Observable<any>) =>
    errors$.pipe(
      mergeMap((err, i) => {
        // give up after max attempts, or immediately for permanent errors
        if (i >= max || !err.isTransient) {
          return throwError(() => err);
        }
        // exponential backoff: delayMs, 2×delayMs, 4×delayMs, ...
        return timer(delayMs * Math.pow(2, i));
      })
    );
}
```
Usage:
```typescript
this.http.post('/api/charge', body).pipe(
  retryWhen(retryWithBackoff(4, 200))
)
```
Using exponential backoff and retrying only transient errors reduces backend pressure.
Why it works: Limit operator-level retries and confirm error idempotency.
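Retries are only safe when the retried operation cannot apply twice. Here is a sketch of the server-side half of that contract: an in-memory idempotency-key check. handleCharge, processed, and the key names are illustrative, not this project's actual endpoint.

```typescript
// Sketch of a server-side idempotency-key check (in-memory; names are
// illustrative). A retried request with the same key replays the stored
// result instead of performing the side effect again.
const processed = new Map<string, string>();

function handleCharge(idempotencyKey: string, body: string): string {
  const prior = processed.get(idempotencyKey);
  if (prior !== undefined) return prior; // duplicate retry: replay result
  const result = `charged:${body}`;      // stands in for the real charge
  processed.set(idempotencyKey, result);
  return result;
}

console.log(handleCharge('k1', '42')); // charged:42
console.log(handleCharge('k1', '42')); // charged:42 (no second charge)
console.log(processed.size); // 1
```

A production version would persist keys with a TTL rather than hold them in process memory, but the contract is the same: retries become reads.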
Benchmarks — before and after (with math)
Benchmarks gathered from a production-like load test in which 100 virtual users visited the feed and profile pages at once.
Every figure represents the median of three runs. CPU is average per-core usage on the app server.
Benchmark A — feed query latency (ms)
- Before (imperative): median 320 ms
- After (reactive, shared observables): median 90 ms
Calculation: reduction = 320 − 90 = 230 ms. Percentage decrease = (230 / 320) × 100 = 71.875 percent.
Explanation: converting redundant calls into a shared stream prevented identical queries from firing simultaneously, which cut median latency by 230 ms, or roughly 71.9 percent. The user-visible load time improved accordingly.
Benchmark B — 95th percentile latency (ms)
- Before: 950 ms
- After: 220 ms
Calculation: reduction = 950 − 220 = 730 ms. Percentage decrease = (730 / 950) × 100 ≈ 76.84 percent.
Explanation: Under peak load, shared caching and switchMap cancellation lowered tail latency by roughly 76.84 percent, because obsolete requests were cancelled and redundant calls were removed.
Benchmark C — CPU usage for periodic polling (single core %)
- Before: 65 percent
- After: 28 percent
Calculation: reduction = 65 − 28 = 37 points. Percentage reduction = (37 / 65) × 100 ≈ 56.92 percent.
Explanation: Replacing naive interval polling with a consolidated stream and shareReplay reduced CPU by consolidating duplicate timers.
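The arithmetic above is the same formula applied three times, so it is worth capturing once. percentReduction is a hypothetical helper, shown only to make the benchmark math reproducible.

```typescript
// Percent reduction used for all three benchmarks (a sketch):
// ((before - after) / before) * 100
function percentReduction(before: number, after: number): number {
  return ((before - after) / before) * 100;
}

console.log(percentReduction(320, 90).toFixed(2));  // 71.88 (Benchmark A)
console.log(percentReduction(950, 220).toFixed(2)); // 76.84 (Benchmark B)
console.log(percentReduction(65, 28).toFixed(2));   // 56.92 (Benchmark C)
```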
Hand-drawn-style architecture diagrams (text lines)
Below are three views: legacy, reactive transitional, final reactive. Use these to sketch ideas quickly.
Legacy (imperative spaghetti)
```
[Component A]    [Component B]    [Component C]
      |                |                |
      v                v                v
  callApi()        callApi()        callApi()
       \               |               /
        \              |              /
         v             v             v
      [HttpClient -> /api/resource]
                      |
                      v
                  Database
```
Problems: repeated HTTP calls, no sharing, duplicated side effects.
Transitional (shared service introduced)
```
[Component A] --->\
[Component B] ----> [SharedService.get() -- shareReplay] ---> HttpClient
[Component C] --->/                                              |
                                                                 v
                                                             Database
```
This removes duplicate network traffic by sharing one observable.
Final reactive with cancellations and central store
```
[UI] --- async pipe ---> [Component stream]
          \                     /
           \                   /
      [Store (BehaviorSubject/Signal)]
                   |
       +-----------+-----------+
       |                       |
   [Effects] --switchMap/concatMap--> HttpClient
       |                       |
  Retry/backoff           Cache layer
```
This central store coordinates intent, cancels obsolete requests, and prevents duplicate side effects.
Deployment and monitoring lessons
- Deploy feature flags. Ship reactive patterns behind flags per page, not all at once.
- Monitor request duplication. Track identical endpoint calls within a short window. Alert on duplicates.
- Track subscriptions. Add metrics for open subscriptions per component or per user session.
- Use rollout canaries. Deploy to 5 percent of users first. The rebuild broke because rollout was too broad.
- Log idempotency keys. For state-changing endpoints, add and check idempotency keys to prevent duplicate writes.
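The duplication monitor in the second bullet is small enough to sketch. isDuplicate and lastSeen below are illustrative names; the idea is simply to flag identical endpoint calls that arrive within a short window of each other.

```typescript
// Sketch of a duplicate-call detector: flag identical endpoint calls
// arriving within windowMs of each other (names are illustrative).
const lastSeen = new Map<string, number>();

function isDuplicate(endpoint: string, nowMs: number, windowMs = 1000): boolean {
  const prev = lastSeen.get(endpoint);
  lastSeen.set(endpoint, nowMs);
  return prev !== undefined && nowMs - prev < windowMs;
}

console.log(isDuplicate('/api/profile', 0));    // false
console.log(isDuplicate('/api/profile', 300));  // true — alert-worthy
console.log(isDuplicate('/api/profile', 2000)); // false
```

In production this would key on endpoint plus user or request payload hash, and emit a metric rather than a boolean, but the detection logic is the same.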
Practical checklist before a full reactive rewrite
- Audit places that perform side effects inside subscriptions
- Decide which streams must be hot and which can stay cold
- Add shareReplay({ bufferSize: 1, refCount: true }) where results should be shared
- Avoid take(1) when data must update over time
- Centralize retry logic with idempotency checks
- Replace direct subscribe() with the async pipe or takeUntil teardown patterns
- Run load tests that simulate real user concurrency before rollout
Mentor notes (direct, human-to-human)
If the team is nervous, that reaction is healthy. Rewrites are not free. Reactive patterns are powerful, and they are also an API for intent. Use them to express intent clearly. When in doubt, write one-line comments that explain why a stream must be hot or why it should cancel obsolete calls. That single sentence saved us hours in one incident.
Be patient with the team. Code that reads clean can still behave badly under load. Build tests that assert no duplicate side effects.
Final thoughts
Rebuilding the app reactively produced real gains. The wins were measurable and durable once the wrong assumptions were fixed. The hard lessons were not about RxJS itself. The hard lessons were about engineering discipline: sharing where appropriate, preventing duplicate side effects, and matching stream lifetimes to the data lifecycle.
If rebuilding is the way forward, do it with purpose. The reactive paradigm isn't a panacea. It is a powerful toolkit. Use it deliberately.
Appendix — extra small reusable snippets
safeSubscribe utility
```typescript
// safe.ts
import { Subscription } from 'rxjs';

export function safe(sub: Subscription | null) {
  if (sub) sub.unsubscribe();
}
```
A minimal helper to avoid forgotten teardown.
untilDestroyed pattern (example)
```typescript
// comp.ts (component fragment; store injection assumed)
import { Subject } from 'rxjs';
import { takeUntil } from 'rxjs/operators';

private destroy$ = new Subject<void>();

ngOnInit() {
  this.store.some$().pipe(takeUntil(this.destroy$)).subscribe();
}

ngOnDestroy() {
  this.destroy$.next();     // signal teardown to every takeUntil pipe
  this.destroy$.complete(); // release the subject itself
}
```