I have a friend who always has an answer. Ask him anything and he'll deliver one with total confidence. Sometimes he's right, sometimes he's wildly off. Either way, you got an answer.
AI behaves exactly the same way.
The "I" Doesn't Stand for Intelligence
Most people fundamentally misunderstand what they're talking to when they open ChatGPT, Claude, or any other AI assistant. They see "Artificial Intelligence" and assume the "I" means the system is intelligent, or that it "knows" things, understands context, and provides accurate information.
I've spent years building and evaluating systems, and I've seen this misunderstanding derail projects, inflate risk, and mislead decision‑makers.
AI is designed to sound helpful, not to be right.
That's not a bug. It's a feature. And it's a dangerous one if you don't understand it.
The Helpful Friend Who Knows Nothing
AI language models are essentially very sophisticated pattern-matching systems trained to generate responses that sound helpful and authoritative. They're optimized for engagement, for sounding confident, for giving you an answer that feels complete.
Just like my friend, AI will rarely tell you "I don't know" or "I'm not sure about this." Instead, it will synthesize something plausible from its training data, dress it up in confident language, and present it to you as fact. And just like my friend, it usually needs follow-up questions or more context before it arrives at the correct answer.
AIs are trained to avoid uncertainty because uncertainty feels unhelpful — and unhelpful models get down‑ranked.
The immediate result? A false sense of approval.
The 30% Problem
In my experience building software and making critical technical decisions, I've developed a rule of thumb: roughly 30% of the time, the answer looks confident but collapses under verification.
This isn't a scientifically proven number; it's based on my own interactions with multiple AI tools.
That's not to say AI is useless; far from it. But it means you need to fundamentally change how you interact with it. I've watched that 30% play out across areas like:
- Database optimization strategies
- Legal compliance requirements
- Technical architecture decisions
- Business strategy recommendations…
I'm constantly seeing AI recommend deprecated solutions, misinterpret documentation, and propose architectures that would fall apart in production.
I'm not looking for the answer. I'm looking for a starting point that I must verify.
I always assume the first answer is probably incorrect.
The Approval Addiction
The real danger isn't that AI gets things wrong. It's that AI makes everything sound right.
When you're working on something challenging like a technical problem, a business decision, or a creative project, there's enormous psychological comfort in having your ideas validated. AI gives you that validation instantly, every single time.
- "Yes, that's a great approach."
- "You're absolutely right about that."
- "That's an excellent point."
It feels good. It feels like progress. It feels like you're on the right track.
But if you're getting approval 100% of the time, you're not getting intelligence; you're getting algorithmic people-pleasing.
How to Actually Use AI: Always Doubt
Here's the shift in mindset that changed everything for me:
Treat AI like an enthusiastic intern who's read everything but understood nothing.
The intern will:
- Give you ideas worth exploring
- Point you toward concepts you hadn't considered
- Help you brainstorm and think through problems
- Generate first drafts that need heavy editing
- Provide frameworks you should validate
But this intern will also mix up concepts, hallucinate details, or present outdated information as if it's current.
And in business, false confidence is expensive.
The Three-Question Test
I run a simple three-question test on any AI response that will influence a real decision:
- Can I verify this independently? If the AI cites a fact, statistic, or source, can I actually find it? About 10% of the time, I can't. The "sources" don't exist.
- Does this account for my specific context? AI loves giving general advice that sounds sophisticated but ignores critical details of your situation.
- What would an expert who disagreed say? If I can't think of legitimate counterarguments to the AI's position, it probably means the response was too agreeable.
If an AI response fails any of these tests, it goes back to the "ideas to explore" category, not the "validated decisions" category.
You'd be surprised how often I get "You're right…" the moment I correct it.
The Right Way To Use AI
After working extensively with AI tools, here's what actually works for me:
Use AI for generation, not validation.
- Brainstorming content ideas? Excellent use case, but never just copy-paste.
- Drafting initial code implementations? Great starting point, but always review and iterate.
- Exploring different perspectives on a problem? Very useful, but often biased.
- Getting final approval on a critical business decision? Absolutely not.
Use AI for breadth, not depth.
AI is awesome at showing you the landscape of possibilities. It's terrible at telling you which specific path is right for your situation.
Use AI as a mirror, not an oracle.
The best use of AI I've found is to articulate my own thinking more clearly. By trying to explain a problem to AI and reviewing its response, I often clarify my own thoughts, not because the AI was right, but because the process forces me to think more precisely.
I also use it for things I already know, where its output can be easily verified and confirmed.
The Bottom Line
AI is an incredibly powerful tool. But like any powerful tool, it's dangerous when misunderstood.
The false sense of approval, that warm feeling of having AI confirm your ideas and hand you confident answers, is probably the biggest risk in AI adoption today. Not because the technology is malicious, but because it's so good at sounding authoritative while being wrong.
If you're using AI and it never pushes back, never expresses uncertainty, never makes you question your assumptions, you're probably using it wrong.
The next time AI gives you a confident, helpful answer that perfectly validates what you wanted to hear, do me a favor: doubt it.
Ask yourself: is this intelligence, or just approval dressed up as certainty?
Because 30% of the time, it's the latter. And in critical decisions, 30% is way too high.
Because someone has to question the answers.
Looking for someone who questions answers instead of just accepting them? Connect.
Originally published at https://www.linkedin.com/pulse/when-ai-sounds-certain-doubt-denis-avgu%C5%A1tin-hlw7e.