The FBI's Internet Crime Complaint Center recorded more than $16 billion in cybercrime losses in 2024, a 33% increase from the year prior. Phishing remains the single most reported crime type. The tools to execute it just got considerably cheaper and more convincing.
Here's what should concern you: the tactics haven't fundamentally changed. Fraudsters still want the same things they always have: your money, your credentials, your access. What's changed is the production quality.
Think through this scenario carefully. Your email gets compromised, or theirs. It could start with a direct phishing hit, a spoofed domain, or credentials purchased off an infostealer market. However the entry point is established, the attacker sits quietly in that inbox, reads the thread, learns the names, the cadence of your communication, the inside references. Then they step in. A few exchanges. Nothing alarming. Just enough to collect an email address, a phone number, maybe an account reference or two.
Then, the phone rings. They keep you talking. Could be a customer service pretext. A vendor follow-up. A compliance verification call. Doesn't matter. What matters is they now have a voice sample. Modern AI cloning tools need as little as 30 seconds of clean audio to replicate pitch, tone, cadence, and accent with convincing accuracy. You just gave them that.
Meanwhile, someone else is pulling your LinkedIn photos. Your headshots. Your conference panel appearances. They locate your other social media accounts and pull pictures and videos of you. Building a visual library. And that "someone else" may not even be the same actor. These attack stages are frequently bought and sold between threat groups. One harvests. One builds. One executes. The coordination is distributed, which makes it harder to detect and harder to attribute.
Now they have your name, your email, your phone number, reference data that makes them sound legitimate, your voice, and your face.
What do they do next? They impersonate you. To your bank. To your CFO. To your IT team. A synthetic video call. A voice call that sounds exactly like you, asking for a wire transfer, a password reset, an access grant. In February 2024, engineering firm Arup lost $25 million this way. A finance worker joined what appeared to be a routine video call with the CFO and several colleagues. Every person on that call except the victim was an AI-generated deepfake.
This is not a future scenario. The tools to execute this exist today, they are cheap, and they are accessible to anyone willing to use them.
So what do you actually do about it?
First, understand what you are defending against. This is a multi-stage attack built on aggregated information. No single piece of data breaks the chain. The accumulation does. Your defense has to match that logic.
- Treat verification as a process, not a formality. Any request involving money movement, credential changes, or system access requires an out-of-band confirmation. Not a reply to the same email thread. Not a callback to the number the caller just gave you. A separate, pre-established channel.
- Voice and video are no longer reliable authentication. Train your team on this explicitly. A caller who sounds exactly like the CFO is not automatically the CFO. A video that looks real is not proof of identity. If the request is sensitive, the authentication method needs to be stronger than the medium being used to make it.
- Limit your digital surface area. You cannot control what LinkedIn shows. You can control what you post. Think about the cumulative picture your public presence paints. Name, title, employer, email format, phone number, professional network, photo archive. That is a fraud starter kit sitting in plain view.
- Establish verbal codewords for high-risk transactions. Simple. Uncomfortable for some. Highly effective. A pre-arranged word or phrase that both parties know and that no email thread or LinkedIn scrape would ever surface.
- Build in mandatory cooling-off periods on unusual financial requests. Urgency is a core ingredient in every one of these attacks. A required 30-to-60-minute delay before executing any out-of-pattern transaction eliminates the pressure tactic entirely. Legitimate requests can wait. Fraudulent ones are designed so they cannot.
- Log and flag anomalous communication patterns. If someone who normally emails starts calling, or a vendor contact you have never spoken to by phone suddenly calls with an urgent request, that pattern deserves scrutiny before compliance.
- Run tabletop exercises on this specific scenario. Not generic phishing awareness. This scenario. Walk your finance team, your IT team, and your executive assistants through a synthetic impersonation attempt. See where the gaps are before an attacker does.
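The verification and cooling-off rules above can be made mechanical rather than left to judgment in the moment. Here is a minimal sketch of a payment-release gate, assuming a hypothetical `PaymentRequest` record; the 45-minute window, field names, and status strings are illustrative choices, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative: a delay inside the 30-to-60-minute window discussed above.
COOLING_OFF = timedelta(minutes=45)

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    received_at: datetime
    out_of_pattern: bool                 # e.g. new payee, changed bank details, unusual size
    verified_out_of_band: bool = False   # confirmed via a separate, pre-established channel

def may_execute(req: PaymentRequest, now: datetime) -> tuple[bool, str]:
    """Gate a payment: out-of-pattern requests need both out-of-band
    verification and a mandatory cooling-off delay before release."""
    if not req.out_of_pattern:
        return True, "routine: normal approval path"
    if not req.verified_out_of_band:
        return False, "hold: awaiting callback on a pre-established channel"
    if now - req.received_at < COOLING_OFF:
        remaining = COOLING_OFF - (now - req.received_at)
        return False, f"hold: cooling-off, {remaining.seconds // 60} min remaining"
    return True, "release: verified and cooled off"
```

The point of encoding it is that the gate cannot be talked out of the delay. An urgent-sounding caller changes nothing: the request sits in the hold state until both conditions clear.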
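The communication-pattern flag can also be kept as simple data rather than intuition. A sketch, assuming a hypothetical `ChannelBaseline` helper: record which channels each contact has actually used, then flag any urgent request arriving over a channel that contact has never used before.

```python
from collections import defaultdict

class ChannelBaseline:
    """Track observed contact channels; flag urgent requests on unseen ones."""

    def __init__(self) -> None:
        self.seen: defaultdict[str, set[str]] = defaultdict(set)

    def record(self, contact: str, channel: str) -> None:
        # Log every routine interaction: email, phone, ticket, etc.
        self.seen[contact].add(channel)

    def flag(self, contact: str, channel: str, urgent: bool) -> bool:
        # A vendor who has only ever emailed, suddenly calling with an
        # urgent request, is exactly the pattern worth scrutinizing.
        return urgent and channel not in self.seen[contact]
```

A flag is not a verdict; it is a trigger for the out-of-band verification step before anyone complies.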
None of these recommendations should surprise anyone who has been paying attention. Out-of-band verification, codewords, cooling-off periods: these are not novel concepts. They are basic operational hygiene that has existed in security literature for years.
And yet, I have sat across from companies worth hundreds of millions of dollars that had no formal procedures in place for exactly these scenarios. None. The conversation usually reveals one of two attitudes: "It could never happen to us" or "We are too small to be a target." Both are wrong. Both are dangerous. And in an environment where the cost of entry for a sophisticated impersonation attack is measured in a few dollars and minutes, neither excuse holds anymore.
Small businesses are not beneath notice. They are preferred targets precisely because the assumption of safety keeps defenses low. A $5 million company, heck a $500,000 company, with loose wire transfer procedures and no verification protocol is not too small to defraud. It is just easier to defraud.
The playbook above is not complicated. It does not require a large security team or an enterprise budget. It requires someone in leadership deciding that the downside risk is worth taking seriously before the call comes in, not after.
Now add another layer to that tabletop. Because the human impersonation problem is not the only one you need to solve.
Agentic AI systems, the ones now being deployed across enterprise functions to autonomously manage workflows, execute transactions, and interact with other systems, expand this attack surface dramatically. An agent with access to your financial systems, your email, your calendar, and your HR data is not just a productivity tool. It is a highly privileged identity with the ability to take real-world action. When a compromised email or an injected prompt can instruct an agent to move money, modify records, or exfiltrate data without a human ever approving the specific action, the impersonation scenario above doesn't just target your people. It targets your machines.
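One structural control for that risk is to make human approval non-bypassable at the dispatch layer, so no prompt-derived instruction can trigger a high-risk action on its own. A minimal sketch, where `gated_dispatch` and the `HIGH_RISK_ACTIONS` set are illustrative names, and real deployments would derive the risk tiers from the agent's actual tool inventory:

```python
from typing import Callable

# Illustrative risk tier: actions an agent must never take autonomously.
HIGH_RISK_ACTIONS = {"transfer_funds", "modify_records", "export_data", "reset_credentials"}

def gated_dispatch(action: str, params: dict,
                   execute: Callable[[str, dict], str],
                   human_approves: Callable[[str, dict], bool]) -> str:
    """Dispatch an agent tool call, refusing high-risk actions
    unless a human explicitly approves this specific invocation."""
    if action in HIGH_RISK_ACTIONS and not human_approves(action, params):
        return f"blocked: {action} requires human approval"
    return execute(action, params)
```

The design choice matters: the gate lives outside the model, in the code path that executes tool calls, so a prompt injection can change what the agent asks for but not what it is allowed to do unsupervised.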
Nearly half of security respondents in a recent Dark Reading poll believe agentic AI will represent the top attack vector for cybercriminals by the end of 2026. Only 29% of organizations deploying agentic systems report being prepared to secure them. In mid-2025, a documented exploit against Microsoft Copilot showed that a malicious prompt embedded in an ordinary email could trigger the agent to exfiltrate sensitive data automatically, with no user interaction required.
That gap between deployment and readiness is not theoretical. It is a standing invitation.
A scenario where a synthetic voice call convinces an employee to authorize a transaction is one problem. A scenario where a prompt-injected email convinces an AI agent to execute that same transaction autonomously, with no human in the loop, is a different and larger problem. Run both in your tabletop. Understand the difference. Then figure out what your actual controls are, because right now, most organizations don't have good answers to either question.
The deeper problem remains organizational culture. Most teams are trained to be helpful and responsive. That is exactly the instinct attackers exploit. Urgency plus familiarity plus a convincing presentation is a formula that has worked for decades. Gen AI just made the presentation cheaper to produce and harder to detect on sight. And agentic AI means that presentation no longer needs a human target at all.
Your employees are not the last line of defense because they are the weakest link. They are the last line of defense because every technical control you have built still terminates in a human decision, or increasingly, an autonomous one. Both need to be informed, skeptical, and procedurally supported.
The fraudsters have upgraded their tools. The question is whether you upgrade your playbook before they show up.