A bug shows up — nothing catastrophic, but weird. Intermittent. The kind that doesn't surface cleanly in logs. You start debugging, and somewhere around hour two, you realize: this used to take you twenty minutes.

Musicians call it "vacation hands." Two weeks away from the piano, and the Chopin sounds thicker. Not unplayable — just slower.

Aviation researchers have been studying the same phenomenon for decades. A 2011 FAA analysis found that 60% of accidents involved lack of pilot proficiency in manual operations (skills that had atrophied through autopilot reliance). They've given it a clinical name: automation-induced skill degradation.

Software engineering doesn't have a name for this yet. But the pattern is familiar.

Image by Zyanya Citlalli on Unsplash

The Boring Work Was Never Just Work

The tedious parts of software development were never just labor. They were training.

Writing tests wasn't about coverage. It was about forcing yourself to think like an adversary: What could go wrong here? What input would break this? That instinct didn't come from reading about edge cases. It came from the reps.

Documentation served a similar function, though nobody frames it that way. The act of explanation exposes the gaps: the places where your understanding is fuzzy, the decisions you made for reasons you can no longer articulate. Skip that process enough times, and you stop noticing the gaps.

Even boilerplate. After writing the same authentication flow for the tenth time, your fingers knew where the bugs would be before your brain did. That's not inefficiency. That's pattern recognition you can't build any other way.

You didn't hate writing tests because they were pointless. You hated them because they were hard in a way that didn't feel productive.

That friction was the training. And now it's gone.

The Atrophy Is Invisible

I noticed it three months ago. A race condition that should have been obvious, the kind I used to smell before I saw it. It took me two hours to find. Two years earlier, it would have taken twenty minutes.

The gap didn't announce itself. I didn't wake up one day feeling less capable. I just… was.

Researchers at Aalto University studied an accounting firm that had experienced similar erosion. Their 2023 paper, "The Vicious Circles of Skill Erosion," found something troubling: the degradation was invisible to both workers and managers. Automation fostered complacency. Skills eroded gradually, acknowledged by no one.

The software data tells the same story. In a 2025 study, experienced developers expected AI to speed them up by 24%.

The actual result: AI increased completion time by 19%.

The tools made experienced developers slower — and they didn't notice.

I explored what this means for the industry previously — here I'm focused on what it means for you:

You're still shipping code. Still closing tickets. The dashboards all point up and to the right.

But something's different.

The AI-generated tests pass, but the feature is still broken. Coverage report: 94%. Everything green. But the tests check that the code does what the code does — not that the code does what it should.
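
Here's a minimal sketch of that failure mode, using an invented discount function (the function, the names, and the numbers are illustrative, not from any real codebase). The first test asserts whatever the code currently returns, so it passes and pads the coverage number. The second asserts what the requirement actually says, and it fails.

```python
# Hypothetical discount function with an off-by-one bug: the spec says
# orders of 100 items or more get the bulk rate.
def discounted_total(unit_price: float, quantity: int) -> float:
    rate = 0.9 if quantity > 100 else 1.0  # bug: the spec requires >= 100
    return unit_price * quantity * rate


# The kind of test that often comes back when you ask for "tests for this
# function": it locks in current behavior, so it passes and counts toward
# coverage while the spec is still violated.
def test_matches_whatever_the_code_does():
    assert discounted_total(10.0, 100) == 1000.0  # green, but wrong per the spec


# The test you write from the requirement rather than from the code:
# it fails on the buggy implementation, which is exactly the signal the
# first test never gives you.
def test_bulk_rate_applies_at_exactly_100_items():
    assert discounted_total(10.0, 100) == 900.0
```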

Edge cases you would have caught three months ago didn't occur to you. The pattern-matching part of your brain that used to generate them has gone quiet.

Your velocity metrics are up. Your actual capability is eroding.

Dashboards don't measure that.

The Leverage Argument

You might argue that AI frees us to focus on the actually challenging work. Architecture decisions. Novel problem-solving. The interesting bugs.

That's the theory.

If AI handles the boilerplate and you spend the saved time on complex debugging, you might come out ahead. The boring work was training, but it wasn't the only training. Maybe the hard problems provide enough reps on their own.

Here's what the data shows instead.

GitClear's 2025 analysis of millions of lines of code found that refactoring (the deliberate improvement of existing code) dropped from 24% of changes in 2020 to less than 10% in 2024. Meanwhile, copy-pasted code rose from 8% to over 12%.

Developers aren't using the saved time to think more deeply. They're shipping faster.

The promise was leverage. The reality is acceleration.

And acceleration without practice is just moving faster toward the moment you can't do the thing you skipped.

The other problem is subtler. You don't always know which work is "boring" until you've done it. The CRUD endpoint that turns out to have a weird edge case. The documentation that forces you to realize your mental model was wrong.

AI doesn't know which boring tasks are secretly important. And increasingly, neither do you.

What's Actually at Risk

A 2024 paper in Cognitive Research: Principles and Implications laid out the mechanism: AI assistance may accelerate skill decay among experts and hinder skill acquisition among learners, while simultaneously preventing both groups from recognizing these effects.

Even Anthropic's own engineers have noticed. In an internal survey published in August 2025, some reported "skills atrophying as they delegate more." One put it simply: "When producing output is so easy and fast, it gets harder and harder to actually take the time to learn something."

This is the part that should unsettle you. The atrophy is invisible to the person experiencing it.

The specific skills at risk aren't abstract.

Tests aren't about coverage percentages. They're about paranoia — the productive kind. When you write tests yourself, you're forced to inhabit the mind of an adversary. What input would break this? What would a malicious user try?
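
To make that concrete, here's a rough sketch of what those questions turn into when you answer them yourself. The validator and the case list are both invented for illustration; the point is that none of these inputs live on the happy path.

```python
import unicodedata

import pytest


def validate_username(raw) -> bool:
    """Toy validator standing in for whatever function is actually under test."""
    if not isinstance(raw, str):
        return False
    name = unicodedata.normalize("NFKC", raw).strip()
    return 1 <= len(name) <= 32 and name.isalnum()


# The case list an adversarial mindset produces.
HOSTILE_INPUTS = [
    "",                               # empty string
    " " * 64,                         # whitespace only
    "a" * 10_000,                     # absurdly long input
    "Robert'); DROP TABLE users;--",  # injection-shaped input
    "admin\u200b",                    # zero-width character: looks identical to "admin"
    None,                             # caller passed the wrong type entirely
]


@pytest.mark.parametrize("raw", HOSTILE_INPUTS)
def test_hostile_inputs_are_rejected(raw):
    assert validate_username(raw) is False
```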

AI-generated tests are better than they were eighteen months ago. But they still struggle with failure modes that require domain knowledge, historical context, or adversarial creativity: the kinds of edge cases that come from having been burned before.

That experience doesn't transfer to the model. And if you stop exercising it yourself, it fades.

Debugging intuition comes from pain. You learn to read stack traces by reading hundreds of them. You learn to form hypotheses about system behavior by having your hypotheses proven wrong, repeatedly, until your instincts calibrate.

When you ask the AI to debug for you, it often works. But you skipped the part where your brain built the pattern. Next time, you'll ask again.

The intuition that would have formed never forms at all.

What This Means

The obvious answer, "stop using AI," isn't realistic. The leverage is real. You're not going back to writing boilerplate by hand.

The engineers I know who seem to be maintaining their edge aren't using AI less. They're using it differently. Treating its output the way they'd treat a junior's PR — not rubber-stamping, actually reviewing. Asking themselves a question that's harder than it sounds: could I have gotten here without it?

Some have started keeping what one called a "manual reps" practice. Once a week, picking something the AI usually handles and doing it themselves. Not because it's efficient. Because the slowness is the point.

The FAA figured this out decades ago. They didn't ban autopilot — they mandated periodic hand-flying. The skill has to be exercised to be retained.

The difference between using a tool and depending on one is whether you could do the work without it.

That gap is worth measuring, before the moment you need to close it and can't.

The concert pianist doesn't forget how to play. They forget how to play well. And they don't notice until it matters.

You're not going to lose your job to AI.

But you might lose the thing that made you good at it, so gradually that you don't notice until you're staring at a bug you used to be able to diagnose, an architecture you can't explain, a system you no longer understand.

The question isn't whether to use AI. You will.

The question is whether, five years from now, you'll still be the kind of engineer who can work without it — or the kind who wouldn't know where to start.