When an AI agent deletes production data, leaks confidential code or executes risky trades, the immediate reaction is to blame the machine. Technology provides the perfect scapegoat. It cannot argue back, defend its decisions or take offence, so blaming the system is often far easier than confronting the human judgement and governance failures that allowed the mistake to happen. These incidents almost always trace back to weak governance.

If you would not allow a junior engineer or analyst to operate without supervision, giving that same autonomy to an AI agent is not innovation; it is poor management.

The recent deletion of production data at DataTalks.Club offers a useful case study. Founder Alexey Grigorev asked Anthropic's Claude Code to help modify the Terraform configuration managing an AWS environment. A missing Terraform state file led the AI to reconstruct the environment incorrectly, duplicate resources and eventually run a destroy command that wiped the production database.

Roughly 2.5 years of student data disappeared in seconds. Recovery required an emergency escalation through AWS Business Support to restore about 1.9 million rows.

The headlines framed the event as another example of artificial intelligence running out of control. Yet the technology did exactly what it was allowed to do. The bigger issue was that an AI agent had permission to perform destructive operations inside a live production system.

In most organisations, an engineer would never receive that level of authority.

Staff typically operate inside strict boundaries. Production access is limited, and infrastructure changes pass through version control, peer review and approval workflows. Destructive actions often require additional authorisation or staged deployment environments. These controls exist because experienced engineers know that mistakes happen.

Yet AI agents are often granted sweeping permissions the moment they are integrated into DevOps pipelines.

Another familiar pattern involves employees using generative AI assistants to help write internal code. In one widely discussed case, Samsung engineers pasted proprietary semiconductor source code into ChatGPT to troubleshoot a bug, and the information was then retained in the model's training environment. The resulting concern was framed as an AI data leak.

In reality, sensitive intellectual property had simply been entered into an external system without policy controls, a classic example of shadow IT that security teams have been dealing with for years, long before AI entered the picture.

A similar pattern appeared in financial markets during early experiments with AI-assisted trading bots. Some systems produced unexpected transactions or extreme positions. Commentators described the behaviour as "AI gone rogue." Post-incident analysis almost always revealed the same underlying problem. The bots were allowed to execute trades without sufficient guardrails, position limits or risk monitoring.
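The missing guardrails in those trading systems are not exotic. A pre-trade check layer can be sketched in a few lines; the limits and the `Order` shape below are illustrative assumptions, not any real trading API:

```python
from dataclasses import dataclass

# Illustrative limits -- real desks tune these per instrument and strategy.
MAX_ORDER_SIZE = 1_000
MAX_NET_POSITION = 5_000

@dataclass
class Order:
    symbol: str
    quantity: int  # positive = buy, negative = sell

def check_order(order: Order, current_position: int) -> bool:
    """Reject any order that breaches size or position limits
    before it ever reaches the execution venue."""
    if abs(order.quantity) > MAX_ORDER_SIZE:
        return False
    if abs(current_position + order.quantity) > MAX_NET_POSITION:
        return False
    return True

# A bot's order is vetted the same way a human trader's would be.
assert check_order(Order("XYZ", 500), current_position=0)
assert not check_order(Order("XYZ", 2_000), current_position=0)    # oversized order
assert not check_order(Order("XYZ", 800), current_position=4_500)  # breaches net limit
```

The point is not the specific thresholds but where the check sits: between the decision-maker, human or machine, and the market.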

The machine did not rebel. It executed the rules it was given.

Even outside finance and software engineering, the theme repeats. Customer service chatbots have generated offensive responses when deployed without adequate moderation layers. Automated hiring tools have produced biased candidate rankings when trained on historical datasets containing discrimination. Each case triggered headlines warning about dangerous AI systems.

Yet the pattern remains consistent. Weak governance allowed automation to operate without oversight.

The reason these events feel unsettling is psychological. AI communicates with fluency that mimics human reasoning. When a system explains its actions in clear language, people tend to assume it possesses judgement and situational awareness. It does not.

Large language models generate outputs by predicting patterns in data. They have no understanding of organisational risk, regulatory obligations or reputational consequences. They cannot recognise when a command should not be executed.

In operational environments, the most valuable skill is not speed. It is restraint.

Experienced engineers learn when to stop, escalate or question an instruction. They recognise ambiguous conditions and investigate before making irreversible changes. An AI agent lacks that instinct because it lacks context. Experienced engineers also know that legacy systems rarely behave exactly as documented, breaking the neat rules that new automation tools expect.

That is why the clearest way to think about AI governance is organisational rather than technical. Treat the AI as a new employee. No responsible company gives a new engineer unrestricted access to production infrastructure and tells them to "figure it out". When someone joins a team, they go through onboarding, access is restricted, their work is reviewed, risky changes require approval and their actions are logged. The same discipline should apply to AI agents.

In practice, that means operating them inside the same controls used for people. Permissions should be limited through role-based access control, infrastructure changes generated by AI should pass through normal version control and peer review processes, and destructive commands should require explicit human approval. Backup systems should remain independent of the infrastructure they protect, and recovery procedures should be tested regularly to demonstrate they work.
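As a minimal sketch of that approval gate (the command prefixes and the `approved` flag are hypothetical illustrations, not any specific tool's API), the idea is simply that destructive commands never execute without explicit human sign-off:

```python
# Hypothetical examples of commands that should never run unattended.
DESTRUCTIVE_PREFIXES = ("terraform destroy", "drop table", "rm -rf")

def requires_approval(command: str) -> bool:
    """Flag commands that must not run without explicit human sign-off."""
    normalised = command.strip().lower()
    return any(normalised.startswith(p) for p in DESTRUCTIVE_PREFIXES)

def execute(command: str, approved: bool = False) -> str:
    """Refuse destructive commands unless a human has approved them.
    Execution itself is stubbed out; only the gate is shown."""
    if requires_approval(command) and not approved:
        return "BLOCKED: human approval required"
    return f"RUNNING: {command}"

print(execute("terraform plan"))                   # → RUNNING: terraform plan
print(execute("terraform destroy -auto-approve"))  # → BLOCKED: human approval required
print(execute("terraform destroy", approved=True)) # → RUNNING: terraform destroy
```

In a real pipeline the gate would sit in the deployment tooling itself, with the approval recorded in an audit log rather than passed as a flag.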

Those controls are not new. They have been standard practice in enterprise IT for decades.

What is new is the speed at which AI can execute tasks. Automation that once took hours can now happen in seconds. That acceleration increases productivity, but it also increases the scale of potential failure when governance is weak.

Every mature organisation already understands how to manage human error through process, oversight and accountability. AI does not remove the need for those safeguards. If anything, it makes them more important.

The lesson from incidents like DataTalks.Club is not that artificial intelligence is inherently dangerous. The lesson is that operational discipline still matters.