1. The Bug Was a Race Condition, Not a Logic Error
The production failure that forced me to finally internalize the event loop was not exotic. A UI state update occasionally ran before a dependent API response was processed. The code looked correct. The logs showed the right sequence of function calls. The user-visible behavior was wrong under load.
The simplified shape of the bug:
```javascript
let state = { ready: false };

function markReady() {
  state.ready = true;
}

function fetchData() {
  return fetch("/api/config").then(r => r.json());
}

fetchData().then(config => {
  state.config = config;
});

markReady();
```
This code looks obviously wrong when isolated, but the real system had multiple layers of async boundaries. The failure only appeared when promises resolved faster than expected due to caching. The core issue was assuming synchronous ordering in a system where scheduling is implicit. Fixing the bug required a mental model of how work is queued and when callbacks actually run.
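The eventual fix, previewed here as a minimal sketch, was to make readiness depend on the async work completing. The fetch is stubbed with a resolved promise so the ordering is observable on its own:

```javascript
// Minimal sketch of the fix. fetchData is stubbed here; in the real
// system it wrapped fetch("/api/config").
const state = { ready: false };

const fetchData = () => Promise.resolve({ theme: "dark" }); // stand-in

function markReady() {
  state.ready = true;
}

async function init() {
  // Awaiting makes "config before ready" part of the control flow.
  state.config = await fetchData();
  markReady();
}

init();
```

With init() owning the sequence, state.ready can no longer flip before state.config is populated.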
2. A Concrete Mental Model of the Event Loop
The mental model that finally stuck for me is to treat JavaScript as a single-threaded scheduler with two primary queues: the macrotask queue and the microtask queue. The engine executes one macrotask, then drains all microtasks, then moves to the next macrotask.
A minimal reproduction of the ordering:
```javascript
console.log("start");

setTimeout(() => {
  console.log("timeout");
}, 0);

Promise.resolve().then(() => {
  console.log("promise");
});

console.log("end");
```
Observed output:
```
start
end
promise
timeout
```
The key insight is that promises do not wait for the next "tick" in the same way timers do. Microtasks run immediately after the current call stack clears, before any pending timers or IO callbacks. Once you internalize that ordering, a lot of seemingly random async behavior becomes predictable.
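One detail worth verifying for yourself is that the drain also includes microtasks queued during the drain itself. A small sketch, collecting the order into an array instead of logging:

```javascript
// Microtasks queued while the queue is draining still run before
// any pending macrotask (the timer here).
const order = [];

setTimeout(() => order.push("timeout"), 0);

queueMicrotask(() => {
  order.push("microtask 1");
  // Queued mid-drain, yet still runs before the timer fires.
  queueMicrotask(() => order.push("microtask 2"));
});
```

Once the stack clears, order fills in as microtask 1, microtask 2, timeout: the freshly queued microtask jumps ahead of the already scheduled timer.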
3. Microtasks Are Priority Work, Not Deferred Work
The production bug came from treating promise callbacks as deferred work similar to timers. They are not equivalent. Microtasks are effectively priority callbacks that run before the system returns to the event loop.
A pattern that caused subtle reordering:
```javascript
function updateUI() {
  render();
}

function loadConfig() {
  return Promise.resolve({ featureFlag: true });
}

loadConfig().then(cfg => {
  applyConfig(cfg);
});

updateUI();
```
Under some conditions, applyConfig ran before updateUI finished rendering because rendering itself scheduled microtasks via framework internals. The fix was not adding more promises, but making ordering explicit:
```javascript
async function init() {
  const cfg = await loadConfig();
  applyConfig(cfg);
  updateUI();
}

init();
```
This change makes the sequencing part of the control flow rather than an emergent property of the scheduler. The lesson is that microtasks are not a safe place to hide ordering assumptions. They are part of the critical path.
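A related property of async functions makes this fix predictable: an async function runs synchronously up to its first await, and everything after the await resumes as a microtask. A sketch:

```javascript
const log = [];

async function init() {
  log.push("init start");   // runs synchronously during the init() call
  await Promise.resolve();  // suspends; the rest is queued as a microtask
  log.push("init resume");
}

init();
log.push("after init() call");
// Once the stack clears: ["init start", "after init() call", "init resume"]
```

The caller's code after init() runs before the body past the await, which is exactly why ordering that matters should live inside the async function, after the awaits it depends on.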
4. Macrotasks and the Illusion of "Next Tick"
Timers, message channels, and some IO callbacks enqueue macrotasks. The common assumption is that setTimeout(fn, 0) means run "as soon as possible." In practice, it means run after the current task and after all microtasks have drained.
This pattern caused UI starvation:
```javascript
function heavyWork() {
  return new Promise(resolve => {
    let i = 0;
    function loop() {
      while (i < 1e6) i++;
      resolve();
    }
    setTimeout(loop, 0);
  });
}

async function run() {
  await heavyWork();
  paint();
}

run();
```
The heavy work blocked the UI longer than expected: pending microtasks delayed when the timer callback even started, and once it ran, the synchronous loop monopolized that entire task. The practical fix was to yield control intentionally:
```javascript
function yieldToBrowser() {
  return new Promise(resolve => setTimeout(resolve, 0));
}

async function heavyWorkChunked() {
  let i = 0;
  while (i < 1e6) {
    for (let j = 0; j < 10_000; j++) i++;
    await yieldToBrowser();
  }
}
```
This makes responsiveness explicit. The event loop will not save you from starvation if you schedule work without considering queue priority.
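One caveat to the setTimeout-based yield: browsers clamp nested zero-delay timeouts (typically to around 4 ms after a few levels of nesting), which adds up across many chunks. A MessageChannel message is also a macrotask but is not clamped. A sketch, assuming an environment where MessageChannel is available:

```javascript
// Each call posts a message to ourselves; the handler runs as an
// unclamped macrotask, so rendering and timers still get a turn.
function yieldViaChannel() {
  return new Promise(resolve => {
    const channel = new MessageChannel();
    channel.port1.onmessage = () => {
      channel.port1.close(); // release the port so nothing keeps it alive
      resolve();
    };
    channel.port2.postMessage(null);
  });
}
```

Swapping this in for the timeout-based yield keeps the same chunking structure while cutting the per-chunk delay.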
5. Frameworks Hide the Event Loop, They Do Not Change It
React, Vue, and similar frameworks introduce abstractions that schedule microtasks and macrotasks under the hood. The production issue surfaced because a state update scheduled via a promise callback raced with a layout measurement scheduled via requestAnimationFrame.
A reduced example:
```javascript
Promise.resolve().then(() => {
  setState({ loaded: true });
});

requestAnimationFrame(() => {
  measureLayout();
});
```
The microtask runs before the next frame, which meant state mutated before layout measurement. The fix was to align scheduling domains:
```javascript
async function updateAfterLayout() {
  await new Promise(requestAnimationFrame);
  setState({ loaded: true });
  measureLayout();
}
```
This aligns state mutation with the rendering lifecycle instead of relying on implicit ordering. The practical lesson is that frameworks do not abstract away the event loop. They layer on top of it. When things go wrong, you debug the underlying scheduler, not the framework API.
6. Debugging Async Order With Instrumentation, Not Guessing
The way I stopped guessing about ordering was to instrument the scheduler boundaries. Logging inside promise callbacks, timers, and animation frames revealed actual execution order under load.
A minimal tracing helper:
```javascript
function trace(label) {
  console.log(label, performance.now().toFixed(2));
}

trace("sync start");
Promise.resolve().then(() => trace("microtask"));
setTimeout(() => trace("macrotask"), 0);
requestAnimationFrame(() => trace("raf"));
trace("sync end");
```
This produces timestamps that show relative ordering across different queues. In production, I wrapped critical scheduling points with similar logging and sampled traces under real traffic. The patterns were consistent. The mental model held. The bugs stopped feeling random.
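A step further, and purely illustrative: a hypothetical traced() wrapper (not part of the original instrumentation) that tags a callback with the queue it was scheduled on, so logs from different scheduling domains are distinguishable at a glance:

```javascript
// trace() as above; traced() wraps a callback so its entry and exit
// are logged under a queue label of your choosing.
function trace(label) {
  console.log(label, performance.now().toFixed(2));
}

function traced(queue, fn) {
  return (...args) => {
    trace(`[${queue}] enter`);
    const result = fn(...args);
    trace(`[${queue}] exit`);
    return result;
  };
}

setTimeout(traced("macrotask", () => {}), 0);
Promise.resolve().then(traced("microtask", () => {}));
```

Because the wrapper forwards arguments and return values, it can be dropped around existing callbacks without changing behavior.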
7. Making Ordering Explicit With Structured Async Control
The long-term fix was to stop relying on implicit scheduling order and move to explicit async control flow. Async functions and explicit awaits made dependencies visible.
Before:
```javascript
function init() {
  loadConfig().then(cfg => applyConfig(cfg));
  render();
  attachHandlers();
}
```
After:
```javascript
async function init() {
  const cfg = await loadConfig();
  applyConfig(cfg);
  render();
  attachHandlers();
}
```
This looks like a stylistic change. In practice, it eliminated a class of race conditions because ordering became part of the function contract. The event loop still schedules microtasks and macrotasks, but the code no longer assumes anything about their relative timing.
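Explicit control flow also makes independence visible. When two loads do not depend on each other, awaiting them together states that fact directly while keeping the application order fixed. A sketch with stubbed loaders (loadUser, applyUser, and the other helpers are placeholders, not functions from the real system):

```javascript
const applied = [];

// Stand-ins for real async loads and their apply steps.
const loadConfig = () => Promise.resolve({ featureFlag: true });
const loadUser = () => Promise.resolve({ id: 1 });
const applyConfig = cfg => applied.push("config");
const applyUser = user => applied.push("user");
const render = () => applied.push("render");

async function init() {
  // The fetches run concurrently; the apply order stays explicit.
  const [cfg, user] = await Promise.all([loadConfig(), loadUser()]);
  applyConfig(cfg);
  applyUser(user);
  render();
}
```

The await on Promise.all is the single point where scheduling uncertainty is allowed; everything after it is ordinary sequential code.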
8. When Microtasks Become a Footgun
It is easy to accidentally create microtask starvation by chaining promises in loops. This showed up in a background processing feature that ran on the client.
Problematic pattern:
```javascript
function process(items) {
  return items.reduce((p, item) => {
    return p.then(() => handle(item));
  }, Promise.resolve());
}
```
When handle resolves without touching real IO, this schedules an unbroken chain of microtasks that blocks timers and rendering until completion. The fix was to periodically yield to the macrotask queue:
```javascript
async function process(items) {
  for (let i = 0; i < items.length; i++) {
    await handle(items[i]);
    if (i % 50 === 0) {
      await new Promise(resolve => setTimeout(resolve, 0));
    }
  }
}
```
This reintroduces responsiveness. The tradeoff is slightly more total processing time. The benefit is that the application remains usable. In UI-heavy systems, responsiveness is a functional requirement, not an optimization.
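A variant worth considering: yielding on a time budget instead of a fixed item count, so the chunk size adapts when individual items vary in cost. A sketch; the 8 ms budget is an assumption, roughly half of a 60 fps frame:

```javascript
async function processWithBudget(items, handle, budgetMs = 8) {
  let deadline = Date.now() + budgetMs;
  for (const item of items) {
    await handle(item);
    if (Date.now() >= deadline) {
      // Budget spent: hand control back via the macrotask queue.
      await new Promise(resolve => setTimeout(resolve, 0));
      deadline = Date.now() + budgetMs;
    }
  }
}
```

Cheap items get processed in large batches; expensive items force earlier yields, which keeps worst-case frame delay bounded instead of item-count dependent.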
9. The Event Loop as an Architectural Constraint
The lasting lesson from breaking that production app is that the event loop is not an implementation detail. It is an architectural constraint. Any system that mixes IO, UI updates, and background processing on a single thread must make scheduling decisions explicit. When ordering is implicit, bugs emerge under load, caching, or unusual timing conditions.
Once I started designing async flows with the microtask and macrotask model in mind, async bugs stopped being mysterious. They became reproducible scheduling problems with concrete fixes. The code did not become more complex. The mental model did.