If you think traditional cyber risk is hard to keep pace with, AI security is worse: the attacks are faster, more convincing, and increasingly aimed at your people, not your firewalls. Yet most executive teams still rely on governance models built for yesterday's threats, while AI phishing and AI malware quietly erode revenue, trust, and resilience.

Why AI security is now a board issue

Across 2024 and early 2025, security firms reported sharp increases in AI‑driven phishing, with some seeing phishing volumes more than double and credential theft attempts multiply several‑fold. Fraud and risk leaders now say AI‑fuelled scams are among the most material threats to their businesses, primarily because AI makes it trivial to personalise attacks at scale.

GenAI tools can already generate flawless emails in any language, mimic executive writing styles, and create deepfake audio and video convincing enough to move money or change customer records. This is shifting cyber risk from "occasional technical incident" to "continuous business risk" that directly affects revenue, margin, customer confidence, and regulatory exposure.

For boards and executive teams, AI security is no longer a technology conversation; it is a governance, accountability, and strategy conversation that belongs squarely on the regular agenda.

How AI phishing hits revenue and trust

Think of AI phishing as the new front‑door attack. Instead of clumsy, error‑filled emails, attackers now send context‑rich, idiomatically fluent messages that reference real projects, executives, and suppliers. They scrape LinkedIn and public filings, use AI to stitch together believable narratives, and then hit thousands of your staff at once.

One recent analysis found that email scams are now the top fraud vector for organisations, with volumes more than doubling year‑on‑year as AI tools improve the quality and quantity of phishing campaigns. Another dataset showed a large increase in blocked fraudulent content across digital channels, again linked to AI‑assisted scams.

Imagine a large Australian bank. An AI‑crafted phishing campaign targets relationship managers, using their real client names and recent transaction patterns pulled from a previous breach at a third party. Within hours, several staff are tricked into sharing multi‑factor authentication codes over the phone with attackers using AI‑cloned executive voices. The result: fraudulent payments, trading account manipulation, and a trading halt while systems are checked, all under close scrutiny from regulators and the media.

The direct loss might be in the millions, but the bigger hit is to customer confidence, the share price, and the perception that the board had not adequately governed AI security risk.

AI malware: from outage to operational resilience crisis

AI malware takes this a step further. Security researchers are already describing malware that uses AI techniques to change its behaviour, rewrite sections of code, and adapt to evade detection more quickly than traditional defences can respond. This lowers the cost for attackers and broadens the range of viable targets, including mid‑sized organisations that once flew under the radar.

Picture a major not‑for‑profit healthcare provider. An employee clicks on what looks like a routine supplier invoice but is in fact an AI‑generated, personalised phishing lure. AI‑enabled malware inside the attachment quietly maps the network, identifies backup systems, and times its encryption to coincide with a long weekend when key staff are away. By Tuesday morning, patient systems are locked, call centres are overwhelmed, and services are cancelled across multiple locations.

The financial cost is significant, but the real damage lies in cancelled appointments, delayed care, and front‑page headlines questioning the organisation's security posture and governance. In heavily regulated sectors, such as APRA‑regulated financial services, outages and control failures of this kind also trigger serious questions about compliance with information security standards like CPS 234 and associated prudential guidance.

Regulatory expectations and board accountability

Regulators in Australia and globally are making it clear: cyber risk, including AI‑enabled phishing and AI malware, is a board‑level responsibility, not a problem to delegate entirely to IT. APRA's CPS 234 framework, for example, emphasises that boards of regulated entities must ensure the information security capability is commensurate with threats, that incidents are detected quickly, and that vulnerabilities are managed proactively, including across third parties.

Regulators have also signalled expectations for timely incident notification and have highlighted recurring patterns in cyber events, such as inadequate threat intelligence, insufficient testing of controls, and weak oversight of service providers. In parallel, global guidance from large cloud and security providers stresses that boards need to integrate AI governance with cyber risk management, align security with business objectives, and measure outcomes in terms of resilience, uptime, and fraud reduction.

In practice, this means that if AI‑driven phishing or malware leads to a major breach or outage, regulators and shareholders will ask not just "What did the CISO do?" but "What questions did the board ask, what oversight did it exercise, and how did it assure itself that AI security risks were under control?"

Boardroom questions worth asking now

For time‑poor directors and executives, the governance lens is often the most useful. Instead of asking for detailed technical briefings on AI phishing and AI malware, focus on a few high‑impact questions:

  • How is AI changing our threat profile, and do our current risk scenarios and tabletop exercises reflect AI‑enabled attacks on people and processes, not just systems?
  • What specific controls are in place to detect and stop AI phishing, business email compromise, and deepfake‑based fraud, especially for high‑risk roles like finance, treasury, and senior executives?
  • How are we re‑training staff and leaders for AI‑enhanced social engineering, and are we using modern AI security courses and simulations that reflect current attack patterns?
  • Do we have clear thresholds and playbooks for escalating suspected AI‑driven incidents to the executive and the board, including timely regulatory notifications where required?
  • Are third parties — including outsourcers, SaaS providers, and data processors — being assessed against updated control expectations that reflect AI‑enabled threats?

These questions move the conversation from "What are the hackers doing?" to "How are we governing AI security as a core business risk?"

Investing in AI security capability, not just tools

Spending more on technology alone will not solve this problem. Evidence from board‑focused research shows that boards struggle not with the decision to fund security, but with understanding whether those investments improve business performance and resilience. AI security is no exception.

Leading organisations are starting to invest in three areas:

  • AI‑augmented defence. Security teams are deploying AI to detect anomalies, triage alerts, and respond faster, with some reporting substantial improvements in threat detection and time to resolution when AI is integrated thoughtfully (a minimal illustration follows this list).
  • People and culture. Given that many AI‑enabled attacks still succeed through human error, organisations are ramping up targeted awareness, scenario‑based exercises, and role‑specific training, often anchored in updated AI security courses.
  • Governance and measurement. Boards are asking for metrics that connect AI security controls to business outcomes such as reduced fraud losses, improved uptime, and faster recovery from incidents.
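
To make the first of these concrete, here is a minimal sketch of the kind of anomaly‑scoring logic that sits behind many AI‑augmented email defences. It is illustrative only: the feature names, values, and model choice are hypothetical assumptions, not a description of any specific product, and production tools draw on far richer signals such as sender reputation, authentication results, and content analysis.

    # Illustrative sketch only: a toy anomaly scorer for inbound email,
    # not a production control. Features and values are hypothetical.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-email features:
    # [link_count, is_new_sender, sent_off_hours, urgency_word_count]
    rng = np.random.default_rng(0)
    historical_mail = rng.poisson(lam=[2, 0, 0, 1], size=(500, 4))

    # Learn what "normal" inbound mail looks like for this organisation.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(historical_mail)

    # A link-heavy, urgent message from a new sender, sent out of hours.
    suspect = np.array([[9, 1, 1, 6]])
    score = model.decision_function(suspect)[0]  # lower = more anomalous

    if model.predict(suspect)[0] == -1:
        print(f"Escalate to analyst triage (anomaly score {score:.3f})")

The governance point is not the code itself but what it represents: boards should ask how such models are trained on the organisation's own traffic, how often they are retuned, and how their accuracy is tested and reported.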

From a CEO or director perspective, the key is to challenge management on how AI security spending links to risk reduction and resilience, not just on whether a particular product has been bought.

What executives should do in the next 7 days

AI security can feel overwhelming, but the next week is enough time to take visible, meaningful steps that set the tone for your organisation.

  1. Have a focused conversation with your CISO. Ask for a short briefing, no slides required, that explains how AI phishing and AI malware are already showing up in your sector, what has changed in the last 12 months, and where your current defences and people‑related controls are strongest and weakest.
  2. Request a rapid AI‑related risk review. Ask for a one‑page view of your top AI‑enabled attack scenarios: for example, AI‑driven business email compromise, AI‑assisted credential theft leading to a data breach, or AI malware causing a prolonged outage. Alongside each scenario, ask for a plain‑language summary of financial, customer, and regulatory impacts, and whether they align with your stated risk appetite.
  3. Check your incident playbooks and escalation paths. Confirm that playbooks explicitly cover AI phishing, deepfakes, and adaptive malware, including when and how incidents are escalated to you and, where relevant, to regulators such as APRA. Ask when these playbooks were last tested through simulation and what was learned.
  4. Prioritise role‑specific training for your top risk groups. Ensure that finance, treasury, executive assistants, and frontline staff with payment or data‑change authority are enrolled in up‑to‑date AI security courses or simulations that include AI‑crafted phishing and deepfake scenarios. Ask for completion rates and evidence that behaviours are actually changing.
  5. Set a clear board expectation. Agree that AI security will be a standing item on the risk or audit committee agenda, with at least one dedicated deep‑dive this year focused on AI phishing, AI malware, and the maturity of your response capability. Make it explicit that cyber risk, including AI‑enabled threats, is a shared accountability across the executive, not only the CISO.

The message to your organisation is simple: AI may be making attacks cheaper and more convincing, but with deliberate governance, focused investment, and modern training, it can also strengthen your defences and resilience. The next move is yours. What will you ask your CISO this week?