I went to OpenAI's first AI Privacy Hackathon in Dublin expecting a typical hack day: a few talks, some coding, lots of coffee, a late finish. It was that, but it was also more thoughtful than I expected. I left with a clearer sense of what "building with privacy in mind" actually looks like once you get past the slogans.

The opening conversation

The first talk that really landed for me was the conversation with the Data Protection Commission. It was very direct about what the regulator actually does, and I liked that it didn't come across like a courtroom or a standoff. The message was simple: they engage with companies before products launch, and they look at new tech through the lens of GDPR. Same law, new tools.

Two things stuck with me. First, privacy depends on culture, not just compliance: if the whole team doesn't care, privacy won't survive contact with deadlines. Second, a lot of the regulator's recommendations over the last few years are still about transparency, just telling people what you are doing. That felt almost embarrassing for the industry, but it was also useful to hear plainly. It reminded me that trust is usually lost through small, avoidable decisions.

A practical privacy tool in action


One of the strongest demos for me was a tool OpenAI has been working on called Privacy Filter. This was one of those talks where I found myself nodding along because it was so practical.

The setup was familiar: a lot of useful data is messy and personal at the same time. Engineers want to learn from error logs, forum posts, user feedback and training data, but those sources often include emails, phone numbers, IDs, names and random bits of personal detail that don't help the task. They are just there.

Privacy Filter is meant to automatically find those identifiers and mask them. Not so the data becomes useless, but so you can still learn what matters without learning who said it.

What I liked was how the limits of older tools were explained. Rules and pattern matching are fine when the format is neat. But the real world isn't neat. A phone number might be written out in words. An address might be public in one sentence and private in another. A number can look like a phone number, but be part of a math example. Context changes everything. The point was that a model that understands context can make better calls about what should be masked. For me, it was a good reminder that privacy work doesn't always have to be grand or abstract. Sometimes it is just building a tool that does one job properly.
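To make the contrast concrete, here's a tiny sketch of the rule-based approach the talk was pushing back on. To be clear, this is my own after-the-fact illustration, not OpenAI's actual Privacy Filter; the patterns and the `mask_pii` function are made up for the example.

```python
import re

# Naive pattern-based masking, the kind of approach the talk said falls short.
# These patterns are illustrative only; they are not how Privacy Filter works.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s()-]{7,}\d\b"),
}

def mask_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or +353 1 234 5678."))
# -> Reach me at [EMAIL] or [PHONE].

# Where rules break down: context.
print(mask_pii("Multiply 123456789 by 2 for the worked example."))
# -> Multiply [PHONE] by 2 for the worked example.
# The regex can't tell arithmetic from a phone number.
```

The last line is the talk's whole argument in miniature: to a pattern, a number is just a number, while a model that reads the surrounding sentence can tell a worked example from a real identifier.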

The privacy agent talks

Later on, the focus shifted from specific tools to bigger problems. There were concrete examples of privacy agents acting as a layer inside a company. Agents that catch extra data during customer support calls. Agents that only keep what's needed. Agents that enforce retention or help privacy teams answer internal questions faster. It was grounded in day-to-day reality.

Then another talk hit a different note. The argument was that manual privacy doesn't scale anymore. People can't read hundreds of notices or manage settings across dozens of products. It's not laziness, it's overload. If rights are real, people need help exercising them.

I don't think anyone in the room disagreed with the problem. Whether agents are the full answer is another question, but I appreciated the honesty of it. We keep writing policies that nobody reads and then act surprised when trust breaks.

Group work

After the talks I joined Group 2. Our theme was necessity and proportionality. Those can sound like legal words you put in a document and move on. In the group, we turned them into actual questions you can't dodge:

What data do we really need for this feature? What data are we collecting just because it's easy? If we can do the same thing with less, why aren't we doing that? Who gets hurt if we get this wrong?

That part of the day felt like the real point of the hackathon. Not building something flashy, but forcing ourselves to make tradeoffs visible.

Leaving


By the end I was tired in the good way. I left with new people I'm glad I met, and also with my head a bit clearer.

If I had to sum up what I took from the day, it's this: Privacy isn't a checkbox you add at the end. It is a set of small decisions that start early. It is about minimising what you collect, being clear about what you're doing, and designing for the fact that normal people don't have time to manage privacy manually. I'm glad I went. It felt more honest than most events. People weren't pretending the answers are easy, but they also weren't acting helpless. They were trying to build useful things without treating users as collateral damage. If there's another one, I'd go again 😃
