Years ago, RSA was about walking the floor, collecting talks, absorbing the market. Now, for most of the week, I'm in back-to-back meetings at Minna Gallery. Good problem to have. But it also means I experience RSA the way most people experience a city, through windows, in transit, in brief moments of emergence between obligations.

This year I made time for the floor. And what I saw confirmed something I've been thinking about for months.

The Floor Never Changes, Except When It Does

If you've never been to RSA, the show floor is difficult to describe without experiencing it. Thousands of vendors, or what feels like thousands, all competing for 15 seconds of your attention. Badge scanners. Branded swag cannons firing in every direction. Lights, noise, demos on loop, representatives trained to intercept your eye contact like heat-seeking missiles.

I've been enough times to have a system. I walk the perimeter to see what the small guys are doing. As for swag: socks, yes. Quality notebooks, yes. A rare decent pen, yes. Everything else, the bag stays empty. The swag is a proxy for the pitch. Cheap swag, cheap pitch. It's not a perfect heuristic, but it's held up.

Every year has a theme. Not an official theme, but a gravity. A word or concept that every booth bends toward. Some years it's Zero Trust. Others it's XDR. Cloud. Threat intelligence. Platform consolidation.

This year was AI. Not just prominently. Not just commonly. Everywhere. E-V-E-R-Y-W-H-E-R-E.

That's not the story, though. AI being everywhere at RSA 2026 is about as surprising as water being wet. The story is what flavor of AI was everywhere, and what that reveals about a much deeper problem.

The Mirror Problem

ChatGPT shipped in November 2022. Within weeks, it became the defining reference point for what AI looks, feels, and behaves like for the majority of people on earth. Type a question. Get an answer. Iterate. Converse. The chat interface was the product. The interface was the experience. The interface set the expectation.

That expectation didn't stay in the consumer lane.

It followed people into their jobs. Into their evaluation criteria. Into their RFPs. Into their boardrooms. And it followed vendors into their product roadmaps.

What I heard repeatedly in meetings, and what I saw on the RSA floor this year, was the result: an industry-wide mirroring of the ChatGPT interaction model applied to enterprise security products. Chatbots on dashboards. Conversational AI layered over SIEMs. Natural language query interfaces bolted onto existing tools. Ask it a question, get an answer. Repeat.

The pattern makes perfect sense from a product adoption standpoint. You reduce the learning curve by matching the mental model your buyer already has. That's just good UX thinking.

The question I keep coming back to: is the ChatGPT mental model wrong for security operations?

When the Interface Becomes the Product

ChatGPT's value proposition is breadth. Ask it anything. It reasons across domains, generates text, synthesizes information, and does it all through a familiar conversational surface. That surface works because the underlying task is open-ended. There's no wrong way to explore knowledge.

Security investigation is not open-ended exploration. It's constrained, time-critical, hypothesis-driven work. The analyst isn't looking for information broadly. They're chasing a specific adversary action through a specific environment with a specific set of available data. The questions aren't casual. They're forensic.

When you bolt a ChatGPT-style interface onto a security product, you're not just adding a feature. You're encoding an interaction model. And that interaction model carries assumptions about how the work should flow.

It assumes the analyst knows what to ask. It assumes the analyst can formulate good questions faster than the adversary can move. It assumes the bottleneck is query syntax, not question quality. It assumes conversation is the right abstraction for investigation.

None of those assumptions hold under operational pressure.

The analyst staring at a chatbot sidebar in an active incident isn't hampered by their inability to write a SQL query. They're hampered by not knowing which of 40 possible questions is the right next move. A better chat interface doesn't solve that. A smarter question hierarchy does.
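To make that contrast concrete, here's a minimal sketch of the two interaction models in Python. Everything in it is hypothetical: the questions, the priorities, and the `run_query` stand-in for whatever telemetry backend would actually answer them. The point is structural, not implementational: the chat model waits for the analyst to supply the next question, while the hypothesis-driven model proposes and orders it.

```python
# A sketch, not a product. All names and priorities here are invented
# for illustration; run_query stands in for a real telemetry backend.

from dataclasses import dataclass, field

@dataclass
class Step:
    question: str                      # the investigative question
    priority: float                    # how much answering it narrows the search
    followups: list["Step"] = field(default_factory=list)

def run_query(question: str) -> dict:
    # Placeholder: pretend persistence-related queries come back suspicious.
    return {"question": question, "suspicious": "persistence" in question}

def chat_loop(analyst_question: str) -> dict:
    # Chat model: the system answers, but the analyst must already know
    # which of 40 possible questions is the right next move.
    return run_query(analyst_question)

def investigate(alert: str) -> list[dict]:
    # Hypothesis-driven model: the system carries a prioritized frontier
    # of questions and expands it as evidence comes back.
    frontier = [
        Step("any new persistence mechanisms on the host?", 0.9,
             followups=[Step("did the persisting binary reach out externally?", 0.8)]),
        Step("any lateral movement from the host?", 0.7),
    ]
    findings = []
    while frontier:
        step = max(frontier, key=lambda s: s.priority)  # best next move, not last asked
        frontier.remove(step)
        result = run_query(step.question)
        findings.append(result)
        if result["suspicious"]:                        # evidence expands the plan
            frontier.extend(step.followups)
    return findings

print(investigate("EDR alert: suspicious scheduled task on HOST-42"))
```

Whether that frontier is a hand-written playbook or something learned, the effect is the same: the analyst stops carrying the burden of knowing which question comes next.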

The Expectation Layer Is Now Load-Bearing

What makes this particularly hard to unwind is that the expectation layer has become structural. Buyers now evaluate AI additions to security products partly based on how closely they approximate the ChatGPT experience. Does it feel conversational? Does it respond naturally? Can I ask it in plain English?

These are reasonable product questions. But they're evaluating the interface, not the intelligence. A well-dressed chatbot that generates plausible-sounding responses to analyst questions is not the same thing as a system that autonomously surfaces the right investigative path without being asked.

The difference matters operationally. A lot.

We've seen this failure mode before. Early SIEM deployments were evaluated on dashboard aesthetics. Early EDR products were evaluated on alert volume. Buyers asked for the wrong things because the industry hadn't yet developed the vocabulary to ask for the right things. The ChatGPT mirror is the 2026 version of that failure.

What buyers should be evaluating instead: does this product reduce the cognitive load of investigation, or does it just translate that load into a different interface? Does it know what to look for when the analyst doesn't? Does it sequence the investigative steps, or does it wait for instructions?

The chat interface is the least interesting part of an AI security product. What matters is the reasoning engine underneath. Most products on the floor were selling the chat.

Signal Found in Smaller Rooms

A brief counterpoint: BSidesSF ran the weekend prior to RSA, as it always does.

Small conference. Tight content. People who show up because they actually want to learn, not because the marketing budget sent them. The presentations are technical. The hallway conversations are operational. The signal-to-noise ratio is night and day compared to the main floor.

I've said it before and I'll keep saying it: BSidesSF is one of the highest-quality security conferences running. It punches well above its size. If you're an analyst, a researcher, or an engineer trying to get smarter, it belongs on your calendar.

The contrast between the two events is useful data. RSA reflects what the industry is selling. BSidesSF reflects what practitioners are actually solving. Keeping one foot in both rooms is worth the sleep deficit.

And speaking of sleep deficits.

My WHOOP Has Opinions

Seven consecutive days of RED recovery scores. Late nights. Cross-city scheduling. The kind of accumulated exhaustion that doesn't announce itself until you sit down on the flight home and realize you haven't stopped moving in a week.

I'm glad I went. I'm glad for the meetings, for the conversations, for the moments of genuine connection with people I don't see often enough. I missed some people I wanted to see. That's always how it goes. The conference is too large and too compressed for perfect coverage.

What I'm taking back isn't a badge scan or a branded water bottle. It's a sharper read on where the industry's mental models are stuck, and a clearer view of the work still required to move them.

The chatbot isn't the answer. It's a reflection of the question people already know how to ask.

Building the system that asks the questions they haven't thought of yet, that's the actual job.