I recently spoke at the Computer History Museum in Mountain View at AWS [Security] Community Day. That conference has expanded beyond security, but it was still heavily security-focused, which I love. Thanks so much to @john.varghese and AWS for helping me get back there again. I've added a link to my slides on How I Use AI For Penetration Testing here:
You probably heard all about Mythos from Anthropic if you are even remotely following AI or security topics. I wrote a bit about that here; my main point was that we cannot really know anything about it until we can actually try it. But the larger question I have is about the viability of relying on AI models for business given their transient and non-transparent nature. The inability to count on consistent value per token is a problem.
One of the issues I ran into during the recent Anthropic Opus 4.6 meltdown (which affected an inconsistent number of people and seems heavily skewed toward security research users) was the implementation of the Yubikey authentication architecture I've been working on and wrote about here:
I dove into some of the details of a Yubikey authentication system in this post:
I took some time off from working on that, hoping Anthropic would figure out whatever nerfed Opus 4.6. Now I'm back at it and have extended my Lambda Troubleshooter in an attempt to quickly resolve problems with this and all future Lambda architectures. The larger point is that combining non-deterministic AI solutions with deterministic ones can slow your token burn rate and help you achieve ROI on AI.
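To make that deterministic-plus-AI idea concrete, here is a minimal sketch (not my actual Troubleshooter; all rule patterns and function names are hypothetical) of trying cheap, deterministic checks against a Lambda error log first, and only falling back to a token-consuming model call when no known rule matches:

```python
import re

# Hypothetical deterministic rules: known Lambda error signatures
# mapped to known fixes. Matching these costs zero tokens.
KNOWN_ISSUES = [
    (re.compile(r"Task timed out after [\d.]+ seconds"),
     "Increase the function timeout or reduce work per invocation."),
    (re.compile(r"Runtime\.ImportModuleError"),
     "A dependency is missing from the deployment package or layer."),
    (re.compile(r"AccessDenied|not authorized to perform"),
     "The execution role lacks a required IAM permission."),
]

def troubleshoot(log_line: str, ask_model=None) -> str:
    """Try deterministic rules first; only burn tokens when they miss."""
    for pattern, fix in KNOWN_ISSUES:
        if pattern.search(log_line):
            return fix  # resolved without any model call
    # Fall back to the costly, non-deterministic model only when needed.
    if ask_model is not None:
        return ask_model(log_line)
    return "Unknown issue: escalate to the AI troubleshooter."

print(troubleshoot("Task timed out after 3.00 seconds"))
```

Every log line a deterministic rule catches is a model call you never make, and each resolved issue becomes a candidate for a new rule, so the token burn rate drops as the rule set grows.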
Subscribe on Substack for more posts and more timely updates.
— Teri Radichel