I build governed security systems for a living.
Not dashboards. Not compliance checklists. Systems where automated decisions have bounded authority, actions leave audit trails, and escalations follow policy defined before the pressure hits.
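That triad — bounded authority, audit trails, escalation policy fixed in advance — can be sketched in a few lines. This is a minimal illustration with hypothetical action names and no real enforcement backend, not a production design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernedResponder:
    """Sketch of an automated responder with bounded authority."""
    # The set of actions this agent may take on its own authority
    # (illustrative names, defined by policy before any incident).
    allowed_actions: set = field(
        default_factory=lambda: {"quarantine_file", "block_ip"}
    )
    audit_log: list = field(default_factory=list)

    def request(self, action: str, target: str) -> str:
        """Execute if within bounds, otherwise escalate; always log."""
        outcome = "executed" if action in self.allowed_actions else "escalated"
        # Every decision, taken or refused, leaves an audit trail.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "target": target,
            "outcome": outcome,
        })
        return outcome


responder = GovernedResponder()
responder.request("block_ip", "203.0.113.7")      # within authority: "executed"
responder.request("disable_account", "alice")     # out of bounds: "escalated"
```

The point of the sketch is the shape, not the code: authority is a data structure agreed on up front, and escalation is the default path, not an afterthought.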
I started writing here because LinkedIn has a 3,000-character ceiling, and some ideas need more room.
This is where I'll publish longer arguments about:
- AI governance in security operations — why "deploy AI and keep a human nearby" is not a governance architecture
- Security decision quality — why many security failures are really decision failures, not technology failures
- Governed autonomous systems — what it actually takes to let AI act with bounded authority in high-stakes environments
If you work in security, risk, or AI governance and you are tired of recycled frameworks dressed up as thought leadership, this is for you.
The first real piece drops this week.