If compliance adds engineering work, it will be ignored.
I've spent years talking to engineering leaders, AppSec teams, and developers across every stage of company growth, from startups chasing their first SOC 2 report to large enterprises navigating FedRAMP and acquisition due diligence. The pattern is always the same. The moment compliance becomes manual engineering work, it heads straight for the backlog.
And once it's on the backlog, it's already lost.
This isn't a cultural problem. It's not that developers don't care about security or compliance. It's human nature. People gravitate toward the work they're trained to do, rewarded for doing, and measured on doing well. For developers, that's building product, shipping features, and keeping systems running. It's not interpreting compliance controls or chasing transitive dependencies across dozens of repos.
Friction Doesn't Create Discipline. It Creates Avoidance.
When I talk about "compliance" here, I'm not talking about policies sitting in a wiki or checklists someone dusts off once a year. I'm talking about the requirements that show up in frameworks like SOC 2, ISO 27001, FedRAMP, and customer security reviews — keeping dependencies up to date, reducing known vulnerabilities, understanding license exposure, and maintaining a defensible software supply chain.
And this is where friction creeps in.
There's a deeply rooted belief in compliance and security that friction equals rigor. That if a process is painful, it must be effective. That friction forces people to be disciplined and get things done. In reality, friction just teaches people how to route around it.
Engineers optimize for flow. Anything that interrupts that flow, whether that's manual remediation, one-off scripts, spreadsheets of vulnerabilities, or license audits that require weeks of investigation, gets deferred until something forces action. That "something" isn't usually fun. We're talking about a breach, a failed audit, an acquisition, or a stony-faced regulator.
By then, it's a bit late.
Update Inertia Is the Real Compliance Risk
Vulnerabilities aren't as scary as people would have you believe. The ecosystem does a decent job of finding and disclosing them. Patches are released quickly. Advisories are loud and frequent. Knowing about vulnerabilities is the easy part.
In contrast, integrating updates into real codebases is hard. APIs change. Transitive dependencies break things unexpectedly. Builds fail. Tests fail. Engineers spend hours untangling issues caused by libraries they didn't choose and don't own.
After enough experiences like that, hesitation becomes rational, and dependency updates stop being routine maintenance and start feeling like risky events. If there isn't a compelling reason to deal with them right now, they get deferred.
This is update inertia, and it's one of the biggest drivers of long-term compliance and security risk. The uncomfortable truth is that most compliance failures don't come from unknown zero-days. They come from known issues that sat untouched for months or years because fixing them required manual effort across already overcommitted teams.
Developers Aren't Security Experts… and That's Not a Failure
A lot of compliance strategies assume that if we just give developers more information, better scanners, or earlier warnings, everything will sort itself out. In practice, that just shifts responsibility without removing work.
Most developers aren't trained in interpreting security or compliance. That doesn't mean they're careless. It means they're specialists, just like security teams are. When faced with a choice, people naturally gravitate toward the work they understand best and are accountable for.
Security-related remediation often lives just far enough outside that comfort zone to feel expensive in terms of time and mental energy. Over time, those tasks get postponed. No one is being malicious, but there's always something more urgent and more familiar to work on.
The Compliance That Actually Works Is Quiet
I've seen this play out in very concrete ways. During an acquisition due diligence process I was involved in years ago, a huge amount of time was spent cataloging third-party dependencies and validating license compliance. There was no clean, modern way to do it. We were running brittle scripts, generating reports, and manually justifying why certain licenses were acceptable. It was painful, slow, and stressful, and it wasn't because anyone was careless. It was because the process depended on humans doing work that could and should have been automated long before.
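To make the contrast concrete, here is a minimal sketch of the kind of automation that would have replaced those brittle scripts: it walks the installed Python packages via the standard library and flags any declared license outside an allowlist. The allowlist itself is a hypothetical policy, not a recommendation.

```python
from importlib import metadata

# Hypothetical policy: licenses acceptable without a human review.
ALLOWLIST = {"MIT", "Apache-2.0", "BSD-3-Clause", "ISC"}

def license_report():
    """Return (package, declared-license) pairs for every installed distribution."""
    report = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        lic = dist.metadata.get("License", "") or "UNKNOWN"
        report.append((name, lic))
    return report

def flag_for_review(report):
    """Packages whose declared license is not on the allowlist."""
    return [(name, lic) for name, lic in report if lic not in ALLOWLIST]

if __name__ == "__main__":
    for name, lic in flag_for_review(license_report()):
        print(f"REVIEW: {name} ({lic})")
```

Run continuously in CI, a report like this turns a weeks-long due-diligence scramble into a diff you review as licenses change. Real tools go further by resolving classifiers and transitive dependencies, but even this much beats manual cataloging.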
That experience sticks with me, because it's not unique. Almost every compliance framework eventually comes back to the same foundational requirement: your codebase needs to be up to date, free of known vulnerabilities, and defensible from a licensing and supply-chain perspective. And yet that's exactly where most organizations struggle the most.
The answer isn't to ask engineers to care more, learn more, or work harder. The answer is to design compliance systems that don't rely on constant human attention.
The most effective compliance I've seen is continuous, automated, and enforced quietly in the background. It runs as part of normal development workflows. It reduces risk incrementally instead of letting it pile up until the next audit. Put simply, it keeps dependencies current.
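One common way to wire this into an existing workflow is a bot-driven update config checked into the repo, for example GitHub's Dependabot. The ecosystem and cadence below are illustrative; adjust them to your stack.

```yaml
# .github/dependabot.yml - automated dependency-update PRs, opened weekly
version: 2
updates:
  - package-ecosystem: "npm"     # illustrative; match your package manager
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5  # keep review load bounded
```

The point isn't this particular tool. It's that updates arrive as small, reviewable pull requests inside the workflow engineers already live in, instead of as a quarterly remediation project.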
Stop Thinking About Compliance. Let It Think About Itself
The best advice I can give is this: don't ask engineers to "care more" about compliance. Change the system instead. If compliance only works when everyone drops what they're doing and panics, it's not working.
Make it boring. Make it automatic. Tie it to the workflows that already exist and let it run in the background. When compliance removes work instead of creating it, adoption takes care of itself, and risk stops piling up unnoticed.
I'd love to hear how compliance is managed in your company — come find me on LinkedIn, or drop me an email at ali@alchemain.com.