Remember when phishing emails were easy to spot? The terrible grammar, the generic "Dear Valued Customer," the slightly-off logo that looked like it was made in MS Paint. We'd chuckle, hit delete, and feel a smug sense of superiority. Well, those were the good old days. Now, thanks to the magic of generative AI, the same technology that writes your college essays and creates surrealist paintings of your cat, phishing has gotten a major upgrade. And by "upgrade," I mean a terrifying, 1,265% surge in malicious campaigns since the launch of ChatGPT. No, that's not a typo. It's a full-blown digital apocalypse, and your inbox is the battleground.
The Cambrian Explosion of Cybercrime
The timing of this phishing boom is, as one expert put it, "not a coincidence." The release of powerful, user-friendly AI models like ChatGPT in late 2022 was like giving every aspiring cybercriminal a master key to the internet. Suddenly, crafting a flawless, personalized, and utterly convincing phishing email became as easy as asking for one. The results have been staggering. According to the 2023 SlashNext State of Phishing Report, we're now facing an average of 31,000 phishing attacks per day.
This isn't just about volume; it's about sophistication. The classic red flags are gone. AI-powered attacks are grammatically perfect, contextually aware, and can be tailored to you with terrifying precision. They know your name, your job title, your company's lingo, and maybe even what you had for lunch. This is why Business Email Compromise (BEC) attacks, where scammers impersonate a trusted colleague or executive, now make up a whopping 68% of all phishing emails.
The Deepfake in the Virtual Boardroom
If perfectly crafted emails weren't scary enough, generative AI has brought a new horror to the table: deepfakes. And we're not just talking about funny videos of celebrities. In January 2024, a finance professional at the multinational engineering firm Arup was tricked into wiring $25.6 million to fraudsters. How? The scammers staged a video conference call with deepfake versions of the company's CFO and other senior employees. The employee, initially suspicious of an email request, was convinced by the lifelike video call. Every single person on that call, except for the victim, was a digital puppet.
This isn't an isolated incident. The FBI has issued stark warnings about criminals using AI-powered voice and video cloning to impersonate family members, co-workers, and business partners with "unprecedented realism." The technology has advanced so rapidly that a deepfake attempt is now estimated to occur every five minutes.
Meet the Evil Twins: WormGPT and FraudGPT
While mainstream AI models like ChatGPT have some ethical guardrails to prevent overtly malicious use, the cybercrime underworld has been busy creating its own, less scrupulous, versions. Enter WormGPT and FraudGPT, the evil twins of the AI world.
These malicious chatbots are sold on dark web forums and are specifically designed for criminal activity. They have no ethical qualms about writing malware, creating phishing pages, or finding security vulnerabilities. For a modest subscription fee, any would-be hacker can access a tool that automates the creation of sophisticated attacks. This has effectively democratized cybercrime, lowering the barrier to entry for even the least technically skilled individuals.
The FBI's Internet Crime Complaint Center has detailed how these tools are used to create everything from fake social media profiles and fraudulent ID documents to pornographic images for sextortion schemes and deepfake videos for investment fraud.
So, Are We All Doomed?
It's easy to feel a sense of despair. If even a video call with your boss can't be trusted, what can? While the threat is real and growing, we're not entirely defenseless. The good news, according to some experts, is that "AI can also be used to defend against sophisticated attacks". But until our AI-powered shields are as good as their AI-powered swords, the onus is on us to be more vigilant than ever.
The FBI recommends a multi-layered approach, including:

- Verify unexpected requests through a separate, known channel: hang up and call the organization or person back on a number you look up yourself.
- Establish a secret word or phrase with family members and close contacts to confirm identity.
- Look and listen for subtle imperfections in images, video, and audio, such as distorted details, unnatural movement, or odd tone and word choice.
- Limit the amount of your voice and image available publicly online.
- Never send money, gift cards, or cryptocurrency, or share sensitive information, in response to an unverified request.
- Report incidents to the FBI's Internet Crime Complaint Center at ic3.gov.
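Some of these layers can be automated. As one illustrative technical control, the Python sketch below flags emails whose Authentication-Results header does not show passing SPF, DKIM, and DMARC checks, a common first filter against spoofed sender domains. This is a simplified illustration, not a full RFC 8601 parser, and the header string shown is hypothetical.

```python
import re

def auth_failures(auth_results_header: str) -> list[str]:
    """Return the authentication mechanisms (spf, dkim, dmarc) that
    did not report 'pass' in an Authentication-Results header.

    Simplified: a real parser must handle multiple headers, comments,
    and result properties per RFC 8601.
    """
    failures = []
    for mech in ("spf", "dkim", "dmarc"):
        match = re.search(rf"\b{mech}=(\w+)", auth_results_header)
        # Missing or non-"pass" results are both treated as failures.
        if match is None or match.group(1).lower() != "pass":
            failures.append(mech)
    return failures

# Hypothetical header from a spoofed "CEO" email:
header = "mx.example.com; spf=pass smtp.mailfrom=ceo@corp.com; dkim=fail; dmarc=fail"
print(auth_failures(header))  # ['dkim', 'dmarc']
```

A message failing DKIM or DMARC isn't proof of phishing, but it is exactly the kind of signal a layered defense should surface before a human ever weighs the email's (now flawless) prose.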
The New Normal
The 1,265% surge isn't just a statistic; it's a paradigm shift. We've entered an era where our ability to distinguish between real and fake is being fundamentally challenged. The digital world is becoming a hall of mirrors, and generative AI is holding the flashlight. So, the next time you get an email from your "CEO" asking for an urgent wire transfer, take a deep breath, and maybe call them on their personal number. Just to be sure. After all, in the age of AI, a little paranoia might just be the healthiest response.
Sources
Security Magazine: Report shows 1265% increase in phishing emails since ChatGPT launched — https://www.securitymagazine.com/articles/100067-report-shows-1265-increase-in-phishing-emails-since-chatgpt-launched
Infosecurity Magazine: Report Links ChatGPT to 1,265% Rise in Phishing Emails — https://www.infosecurity-magazine.com/news/chatgpt-linked-rise-phishing/
CoverLink Insurance: Cyber Case Study: $25 Million Deepfake Scam — https://coverlink.com/case-study/case-study-25-million-deepfake-scam/
FBI: FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence — https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
BankInfoSecurity: Criminals Flocking to Malicious Generative AI — https://www.bankinfosecurity.com/criminals-flocking-to-malicious-generative-ai-a-22660
FBI IC3: Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud — https://www.ic3.gov/PSA/2024/PSA241203