There's a growing tension in boardrooms right now. According to a 2025 study, business leaders are more worried about AI than they were in 2024, and CEO excitement has noticeably cooled in just a year. The initial euphoria is fading, and reality is starting to set in.

Yet despite this growing unease, most businesses still aren't having the right conversations. When companies do loop in their cybersecurity teams, it's usually only to tick compliance or regulatory boxes, not to shape strategy. And that's a problem, because security needs to be part of the AI conversation from the very beginning, not bolted on at the end.

We're living through an AI gold rush. Many organisations are rushing to adopt AI simply because they feel they have to, without fully understanding the consequences. BYOAI (Bring Your Own AI) is already happening across teams, whether leadership knows about it or not.

So what are the biggest risks? Three stand out.

The most obvious is data exposure: what happens to your data when it feeds into these tools? Then there's the increased attack surface that AI introduces. And sitting over both of these is Shadow AI: employees quietly using AI tools that the organisation hasn't vetted, sanctioned, or even noticed.

Before any AI adoption, there are a few questions every organisation should be asking.

Is it safe? Has it been tested? Can we monitor and control it?

Is it strategic? Does it actually align with our business goals?

Is it sensible? Are we solving a real problem, or just buying into the hype?

These questions aren't just for big internal decisions either. Think about your vendors. If the tools your vendors use include AI-powered note-taking, that puts your data at risk too. The same three questions apply.

The culture around AI shouldn't be one of fear. The goal isn't to scare people away from it; it's to help them use it correctly. There's a big difference between telling people about AI risks and actually training them to navigate those risks. When people are equipped to use AI well, you get the best out of it, and you avoid the trap of walking away with biased or unreliable information. This is why AI needs to be woven into security awareness training, not as an afterthought, but as a core part of it.

But here's a practical question worth sitting with: when your sales team is on calls with clients, are they checking whether the client is using AI note-taking tools on their end? And if they are, are your people actually trained to handle that situation? These are the kinds of conversations we should all be having.

If you were at BSides Bristol 2025 and caught this talk by Holly Foxcroft, I'd love to hear your takeaways in the comments. These are my notes, not a transcript, so if anything resonates or you see it differently, drop a comment below!