The lights were green. Every indicator showed normal. And 14 million people were about to lose power.
This is how catastrophic infrastructure failures often begin — not with alarms blaring and red warnings flashing, but with systems reporting that everything is fine. The gap between what dashboards display and what's actually happening is where disasters live.
I've spent my career in that gap.
The Pattern Recognition Problem
Before I worked in cybersecurity, I spent a decade in air traffic control. In that world, your job is to see patterns in chaos — aircraft converging on the same airspace, storms developing over approach corridors, timing conflicts invisible to pilots flying at 500 miles per hour.
What I learned in the tower is that catastrophe rarely announces itself. It emerges from the accumulation of small anomalies that individually mean nothing but collectively signal something very wrong.
A flight slightly off its assigned altitude. Another aircraft requesting an unexpected delay. A controller's voice with a tone that suggests they've noticed something they can't quite articulate yet.
None of these are emergencies. All of them together might be.
The same principle applies to enterprise networks, power grids, water treatment facilities, and every piece of infrastructure we depend on. Attackers understand this. They don't trip alarms — they blend into the noise of normal operations until it's too late to respond.
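To make that concrete, here is a deliberately simplified sketch of the idea, written as a toy Python example of my own rather than anything pulled from a real SIEM or grid-monitoring product. The entity names, signal types, weights, and threshold are all invented for illustration; the point is only that signals too weak to matter alone can cross the line when correlated for the same host or account.

```python
# Toy illustration (invented names and weights): individually weak signals
# that only become an alert when combined for the same entity.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Signal:
    entity: str      # e.g. a host, account, or substation controller
    kind: str        # what was observed
    weight: float    # how suspicious it looks on its own (0.0 - 1.0)

ALERT_THRESHOLD = 1.0  # arbitrary; every single signal below stays quiet

def correlate(signals):
    """Sum weak signals per entity and surface only the combinations."""
    scores = defaultdict(float)
    for s in signals:
        scores[s.entity] += s.weight
    return {entity: score for entity, score in scores.items()
            if score >= ALERT_THRESHOLD}

# None of these would page anyone on its own...
observations = [
    Signal("ops-jump-01", "login at an unusual hour", 0.3),
    Signal("ops-jump-01", "rarely used admin tool executed", 0.4),
    Signal("ops-jump-01", "small outbound transfer to a new destination", 0.4),
    Signal("hmi-plant-02", "single failed login", 0.2),
]

# ...but together they cross the threshold for one host.
print(correlate(observations))  # {'ops-jump-01': 1.1}
```

Real correlation engines are far more sophisticated than a weighted sum, but the failure mode they guard against is exactly the one above: every individual event looks like noise.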
Three Systems, One Lesson
My career path looks unconventional on paper: classroom teacher, air traffic controller, cybersecurity professional. But each role taught me the same fundamental truth about complex systems.
In the classroom, I learned that the students who fail aren't the ones asking questions. They're the ones who nod along silently while confusion compounds. The system — grades, attendance, participation — reported them as "present" long after they'd mentally checked out.
In the control tower, I learned that the most dangerous moments weren't the obvious crises. When an aircraft declares an emergency, everyone mobilizes. The real danger was the slow degradation of situational awareness — traffic building, attention fragmenting, the picture on your scope becoming disconnected from the reality outside the window.
In the security operations center, I found the same pattern again. Breaches don't happen when monitoring tools flash warnings. They happen when adversaries move slowly enough to look like legitimate traffic, when alerts get dismissed as false positives, and when the attack surface expands one ignored vulnerability at a time.
The common thread: systems fail not despite their monitoring but because of their monitoring's blind spots.
The Trust Problem in Critical Infrastructure
Here's what concerns me most about our infrastructure: we've built systems that are extraordinarily good at reporting normal operations and remarkably poor at detecting sophisticated threats.
Power grids can identify a tripped breaker in milliseconds. They struggle to detect an adversary who has spent six months mapping the network, positioning for an attack that won't look like an attack until substations start failing in sequence.
Water treatment facilities can monitor chemical levels with precision. They have limited visibility into whether the control systems themselves have been compromised — whether the readings they trust are even real.
Air traffic control systems can track thousands of aircraft simultaneously. They assume the data feeds they receive are accurate, that the automation they depend on hasn't been subtly manipulated.
We trust our instruments. And that trust is exploitable.
What I Write About (and Why)
This is why I write techno-thrillers about critical infrastructure under attack. Not to frighten people, but because fiction lets me explore scenarios that security professionals discuss in classified briefings but rarely explain to the public.
In Vector Strike, the first book in my CRITICOM Files series, I explore a question that haunted me during my aviation years: What happens when someone weaponizes the gaps between agencies? When an attack is sophisticated enough to exploit the seams where FAA authority ends and FBI authority begins, where CISA's mandate runs into state utility commission regulations?
The threats are real. The vulnerabilities exist. The scenarios I write about aren't speculation — they're extrapolations from publicly documented attack methodologies and known infrastructure weaknesses.
Fiction makes these threats accessible. It lets readers experience what a grid-down scenario actually feels like without living through one. And maybe — hopefully — it creates the public awareness that drives the policy changes and security investments we need.
Vector Strike is available now on amazon.com, at calianapress.com/books/vector-strike, and wherever books are sold.
The Human Factor
For all the certifications I've earned (Security+, SSCP, CySA+, CCSP, PenTest+), the most important lesson is that technology is the smaller problem.
The biggest vulnerabilities in any system are human:
- The operator who dismisses an anomaly because it's probably nothing
- The administrator who delays a patch because the change window is inconvenient
- The executive who underfunds security because breaches happen to other companies
- The analyst who trusts the dashboard when the dashboard is lying
This isn't criticism — it's recognition. We're asking humans to maintain perfect vigilance over systems too complex for any individual to fully understand. The failure mode isn't laziness or incompetence. It's the inevitable consequence of cognitive limits meeting adversaries who specialize in exploiting them.
The solution isn't better technology. It's a better understanding of how systems actually fail — and the humility to design for human limitations rather than expecting humans to transcend them.
What I'm Building
Beyond writing, I'm building educational infrastructure through Caliana Academy and publishing through Caliana Press. The mission is the same across all of it: translate complexity into understanding.
Whether that's a 650-page SAT preparation guide (Digital SAT Mastery), a citizenship test textbook (The AMERICAN Promise), a white paper on electoral autocracy (Uganda's Masquerade Democracy), or a techno-thriller series (The CRITICOM Files) exploring critical infrastructure vulnerabilities — the goal is always to make complex systems legible.
Because systems we don't understand are systems we can't protect. And systems we can't protect are systems waiting to fail.
The Invitation
If you're interested in cybersecurity beyond the buzzwords, in infrastructure beyond the abstractions, in understanding how the systems we depend on actually work and fail — follow along.
I'll be writing here about:
- Critical infrastructure security and the vulnerabilities we don't discuss publicly
- The human factors that determine whether complex systems protect us or fail us
- Political and institutional analysis (when documentation becomes a form of accountability)
- Behind-the-scenes perspectives on writing, publishing, and translating expertise into accessible content
The lights on the dashboard are green. That doesn't mean everything is fine.
Sometimes it means the attack is working exactly as designed.
Martin Balome is the Founder and CEO of Caliana, LLC, comprising Caliana Press, Caliana Academy, and BalomeTech. A cybersecurity professional (CCSP, CySA+, PenTest+, SSCP, Security+) and former air traffic controller, he writes techno-thrillers including The CRITICOM Files, The Alaris Protocol, and The Blackwood Dossier series. His work explores critical infrastructure vulnerabilities, AI security, and the human factors that determine system success or failure.
Find him at calianapress.com
Tags: Cybersecurity, Critical Infrastructure, Technology, Writing, Systems Thinking, Air Traffic Control, Vector Strike, The CRITICOM Files, Caliana Press.