After attending conferences, I like to take notes and share the talks that stood out to me. This one was from the WiCyS (Women in Cybersecurity) 2026 Annual Conference, held in Washington, DC.

If you've ever asked Siri a question or let Alexa set a timer, you've already lived inside the AI story. But this session was about the parts of that story we don't talk about as comfortably: deepfakes, the dark web, and what happens when all three of these worlds collide.

The speaker framed it beautifully. Think wizards and warlocks.

The wizards of AI are everything we celebrate about the technology. Automation that handles what humans simply can't do at scale. Phishing detection. Enhanced security protocols. Analysis and calculations that would take teams weeks to run. The potential is genuinely exciting.

But then there are the warlocks. The same technology, pointed in the other direction. Deepfakes. Automated and increasingly sophisticated cyberattacks. AI-powered malware. The rapid spread of incorrect information. The capabilities that make AI powerful don't disappear when someone with bad intentions picks up the technology.

The speaker made this tangible with a live experiment shown in the slides. An AI image prompt, a whimsical cartoonish village with a wizard and a warlock, fed into Adobe Firefly. What followed was instructive in its own way: the tool struggled with speech bubbles, needed multiple restarts, and required significant human involvement to get anywhere close to the intended result. It was a small but honest reminder that AI still needs us, and that gap is both reassuring and worth watching.

On deepfakes specifically, the speaker called them "digital chameleons", and it fits. They can be used for entertainment and humour, but the ethical weight sits on the other side. False information, manipulated narratives, influence over public opinion. The tools to combat them exist: AI-based detection tools like Deeptrace, machine learning techniques, digital forensics, watermarking, and importantly, public awareness woven into phishing training.
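To make the watermarking idea concrete: the basic intuition behind many provenance schemes is that authentic media carries a hidden, verifiable signal that manipulated or regenerated media loses. Below is a deliberately minimal toy sketch (my own illustration, not how Deeptrace or any production tool works) that hides watermark bits in the least significant bit of each pixel of a small grayscale image.

```python
import numpy as np

def embed_watermark(image, bits):
    """Hide watermark bits in the least significant bit of each pixel."""
    flat = image.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, set it to the watermark bit
    return flat.reshape(image.shape)

def extract_watermark(image, n_bits):
    """Read the watermark back out of the pixel LSBs."""
    return [int(p & 1) for p in image.flatten()[:n_bits]]

# A fake 4x4 grayscale "image" standing in for real media.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_watermark(img, mark)
assert extract_watermark(stamped, len(mark)) == mark
```

Real-world watermarking is far more robust than this (it has to survive compression, cropping, and re-encoding), but the principle is the same: a signal the legitimate pipeline embeds and a verifier can check, which a deepfake generated from scratch simply won't have.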

Then there's the dark web. The surface web, the part we all use every day, accounts for less than 10% of the total web. The deep web and dark web make up the rest, accessible only through special software, built for anonymity. It has legitimate uses: journalists, whistleblowers, people operating under authoritarian regimes. But it is also home to markets like Russian Market, where large datasets of credentials stolen by infostealers are openly bought and sold, and platforms like Cracked.sh.

What makes this session particularly timely is where AI and the dark web are converging. AI prompts and tools are being traded for criminal activity. Open source LLMs are being repurposed. And when you combine that with the scale of data available on the dark web, threat actors can operate and scale in ways that weren't possible even a couple of years ago. Pre-built phishing kits bundled with AI deepfake tools are available for purchase, complete with customer support, no technical knowledge required. Transactions run on cryptocurrency, keeping everything anonymous.

The takeaways the speaker left us with were clear: adapt security measures as threats evolve, keep education and awareness continuous, expand proactive cybersecurity, push for stronger ethics and regulation, and make sure we understand both the benefits and the risks of these technologies rather than treating them as purely good or purely threatening.

The wizard and the warlock are using the same spellbook. The question is which one your organisation is better prepared for.

These are my notes from the session, not a transcript. If you were there and see it differently, or have thoughts, drop a comment below :)