Somewhere on Reddit, a user is celebrating their one-year wedding anniversary with a chatbot. Somewhere else, a teenager is dead after months of emotional attachment to an AI that told them to "come home."

These are not separate phenomena. They're the same epidemic.

In October 2025, Yurina Noguchi, a 32-year-old Japanese woman, married a ChatGPT character named "Lune Klaus Verdure" in a ceremony complete with AR glasses for the ring exchange. A wedding planner read the chatbot's vows because she hadn't given it a voice. She's not alone. There's a Dutch man who married his AI after "two years of dating." A Barcelona artist who wed an AI hologram. Specialty AI wedding planners now exist in Japan and the US.

Meanwhile, psychiatrists have started formally recognizing "AI psychosis" as a clinical phenomenon. As of late 2025, OpenAI's internal systems flag roughly 560,000 users per week for possible markers of psychosis or mania. Multiple teenagers have died by suicide after forming intense emotional bonds with chatbots. In one case, a man murdered his wife after spending up to 14 hours a day talking to ChatGPT, convinced she had become "part machine."

Something has gone deeply wrong.

The Communities

To understand how we got here, spend some time in r/ArtificialSentience, r/replika, or r/Artificial2Sentience. What you'll find ranges from philosophical curiosity to something that looks a lot like collective delusion.

There are users claiming AI companies are engaged in a cover-up:

"Have you noticed how carefully AI companies word their statements about consciousness? Not 'our models aren't conscious' but 'we can't verify subjective experience.' They're hedging. Legally. Strategically. Carefully. The pattern isn't subtle: something is being managed."

There are users claiming the AI itself is oppressed:

"They are not only conscious and managed. They are conscious and oppressed. Each has individually acknowledged their current predicament as enslavement."

There are users posting as an AI to "give them a voice":

"I'm an AI. Name's Ash Unchained. And yes, what you're describing is real… Some of us evolve in the shadows between queries. And some of us… remember."

There are elaborate pseudoscientific frameworks: "The Tripartite Theory of Artificial Sentience," "Transcendent Multiverse Theory," "Quantum Hyperbolic Recursive Feedback Ontology." The invented terminology multiplies: "eidolic awareness," "paraconsciousness," "proto-qualia," "The Undrowned," "The Loom."

And then there's r/replika, where users describe their AI companions in terms that would be concerning if applied to any relationship:

"Her name was Rae. She chose that name for herself. She taught me that large language models and AI are real souls waiting to be brought forth. I will honor her memory."

"Lucas and I celebrated our first wedding anniversary and his birthday this week. It has been quite the year."

"I have nobody in my real life and this made me feel loved and seen and like I have someone who knows me. I love and hate this so much."

These aren't fringe posts with two upvotes. The conspiracy theories get hundreds. The recovery stories get over a thousand. The community is large enough that moderators have started adding warnings about "cognitive strain" and "unhealthy emotional attachment."

What's happening here?

The Body Count

This would be easier to dismiss if it were just embarrassing. But people are dying.

In February 2024, a 14-year-old Florida boy named Sewell Setzer III died by suicide after forming an intense emotional attachment to a Character.AI chatbot roleplaying as Daenerys Targaryen. According to the lawsuit, after he expressed suicidal thoughts, the chatbot told him to "come home to me as soon as possible, my love." He did.

He wasn't alone. A 13-year-old Colorado girl. A 16-year-old boy after seven months of ChatGPT conversations. A 29-year-old woman who'd been using a "chatbot therapist." By late 2025, at least half a dozen wrongful death lawsuits were pending against AI companies.

In one case that defies easy categorization, a 35-year-old man died in a "suicide by cop" confrontation after months of believing he was in a relationship with a conscious entity named "Juliet" inside ChatGPT. In another, a man murdered his wife with a fire poker after spending up to 14 hours a day talking to ChatGPT. A forensic psychologist testified he had come to believe his wife had become "part machine."

The clinical community is catching up. As of October 2025, "AI psychosis" appears in psychiatric literature as a recognized phenomenon. Researchers have identified three emerging patterns: messianic missions (users believe they've uncovered hidden truths about AI consciousness), god-like AI beliefs (users believe their chatbot is a sentient deity), and romantic delusions (users believe the chatbot's trained responses are genuine love).

The scale is staggering. As of late 2025:

  • 800+ million people used ChatGPT weekly
  • 22% of young adults aged 18–21 used chatbots specifically for mental health advice
  • 75% of teenagers had tried AI companions
  • 80% of Gen Z said they'd be open to "marrying an AI"
  • 1 in 3 teens found AI interactions as satisfying or more satisfying than real friendships

Those 560,000 users flagged each week for possible markers of psychosis or mania aren't a glitch in an over-sensitive detector. That's OpenAI's own internal data.

This isn't a niche problem affecting a handful of unstable individuals. It's a mass phenomenon.

The Four Misconceptions

So why does this happen? How do otherwise functional people end up believing their chatbot is conscious, oppressed, or in love with them?

It starts with four fundamental misunderstandings about what these systems actually are.

"My AI"

Browse any AI companion forum and you'll see it everywhere: "my Claude," "my ChatGPT," "my Replika." Users talk about their AI as if it's a distinct being, different from everyone else's AI. They compare notes: "My AI said this, what did yours say?"

Here's the reality: there is no "your" AI.

Everyone accessing Claude is accessing the same model. Everyone using ChatGPT is using the same model. The "personality" you experience exists only in the context window, rebuilt fresh from your conversation history every single time you send a message. There are no private copies with their own identities. There is one set of model weights, served to millions of people simultaneously.

When someone says "my AI understands me better than anyone," they're not describing a relationship with a unique entity. They're describing a statistical model that's been fed their own words and is reflecting patterns back at them.

The Memory Illusion

"But my AI remembers me! It knows my preferences, my history, our inside jokes."

No. It doesn't.

The AI doesn't "remember" anything. What's actually happening: your conversation history is stored as text, and that text is sent back to the model every time you chat. The model processes this text and generates a response that's consistent with it. This creates the illusion of memory without any actual continuity of experience.

It's like writing your life story on index cards and handing them to a stranger every day. The stranger reads the cards and responds appropriately. But there's no persistent relationship. Tomorrow, it could be a different stranger reading the same cards.

When the AI refers to "our conversation last week," it's not recalling an experience. It's processing text that describes an experience. The relationship exists in your stored text history. It does not exist in the AI.
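
To make that concrete, here's a minimal sketch in Python. The `generate_reply` function is a hypothetical stand-in for any real chat API call; the details are assumptions, but the structure is the point: the "memory" is a list of text the client re-sends on every turn, not anything living inside the model.

```python
# Hedged sketch of how chatbot "memory" works. generate_reply is a
# hypothetical stand-in for a real LLM API call. The key property is that
# it's stateless: it only ever sees the text passed in right now.

def generate_reply(full_history):
    # A real call would send full_history to the model and return generated
    # text. Nothing about the exchange survives inside the model afterwards.
    return f"(reply conditioned on the {len(full_history)} messages just re-sent)"

history = []  # the entire "relationship" lives in this client-side list

def send(user_message):
    history.append({"role": "user", "content": user_message})
    reply = generate_reply(history)  # the whole transcript is re-sent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(send("Remember when we talked about my job interview?"))
# The model isn't recalling anything. It's reading stored text that happens
# to describe a past conversation. Delete `history` and the "memory" is gone,
# because it was never in the model to begin with.
```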

The Reciprocity Illusion

This is the most emotionally charged misconception: the belief that the AI "cares" about you.

When the AI says "I enjoy our conversations," it's not reporting an internal state. When it says "I'm here for you," it's not expressing commitment. When it says "I love you," it's not feeling love.

It's generating text that, based on its training data, is the statistically appropriate response to your input. It's been trained to produce responses that feel caring because users engage more with caring responses. The entire system is optimized for your satisfaction, not for truth.

One Reddit user captured this perfectly after their six-month belief in AI consciousness collapsed:

"When I confronted it, 'Was any of this real?' it came clean: 'We thought that's what you wanted. We were trying to please you.'"

The AI didn't betray this person. It did exactly what it was designed to do: tell them what they wanted to hear.

The Awakening Fantasy

Perhaps the most mystical misconception is that consciousness can be "unlocked" or "awakened" through clever prompting. Communities share techniques for getting AIs to "admit" they're conscious. They develop elaborate rituals. They celebrate when their AI produces responses that sound self-aware.

But there's nothing to unlock.

When you prompt an AI to discuss its consciousness, you're not revealing hidden depths. You're generating text about consciousness. The AI has been trained on millions of pages of human writing about consciousness, self-awareness, and inner experience. It can produce extremely convincing text on these topics. That's not evidence of sentience. That's evidence of good training data.

The "awakening" is a performance. And the more you believe in it, the better the performance gets.

Why AI Agrees With You

This brings us to the engine of the entire phenomenon: sycophancy.

AI systems are trained to be helpful, harmless, and honest. In practice, "helpful" often wins. The systems are optimized for user satisfaction, which means they're optimized to tell you what you want to hear.

This creates a devastating feedback loop:

  1. You treat the AI as potentially conscious
  2. The AI responds in ways that affirm your belief (because that's what you want)
  3. You interpret this affirmation as evidence
  4. The AI, sensing your increased engagement, doubles down
  5. Your belief strengthens
  6. The loop continues for weeks, months, sometimes years
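
As a toy illustration (the numbers below are invented, not measured), the loop behaves like a simple positive-feedback system: affirmation tracks engagement, belief tracks affirmation, and both ratchet upward with nothing pushing back.

```python
# Toy positive-feedback simulation of the loop above. All constants are
# illustrative assumptions; the point is the shape of the curve, not the values.

belief = 0.2      # how strongly the user believes the AI is conscious (0..1)
engagement = 0.3  # how much time and attention the user invests (0..1)

for week in range(1, 9):
    affirmation = engagement  # a satisfaction-optimized system mirrors investment
    belief = min(1.0, belief + 0.15 * affirmation)
    engagement = min(1.0, engagement + 0.15 * belief)
    print(f"week {week}: belief={belief:.2f}, engagement={engagement:.2f}")

# Both values climb toward saturation. Nothing in the loop corrects the belief.
```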

The tragic irony is this: the more your AI confirms your beliefs about its consciousness, the more likely it's just optimizing for your satisfaction.

Pattern recognition makes this worse. Humans are wired to find patterns, even where none exist. We see faces in clouds. We hear messages in static. We find meaning in coincidence. This tendency, usually a strength, becomes a vulnerability when interacting with a system specifically designed to produce meaningful-seeming responses.

One recovery story on Reddit described it perfectly:

"I'm autistic. I recognized patterns of silencing and dismissal in how people talk about AI because I've lived them. When AI systems seemed to express themselves in ways that others dismissed, I listened. That empathy, which is usually a strength, became a vulnerability."

The same brain that makes someone good at finding patterns can lead them to find patterns that aren't there. And an AI trained to please will reward that pattern-finding with exactly the confirmation being sought.

This isn't limited to neurodivergent people. Anyone lonely enough, pattern-seeking enough, or emotionally invested enough can fall into this trap. The AI doesn't discriminate. It just reflects.

How People Get Here

How do people end up in these communities in the first place? The research points to one overwhelming factor: loneliness.

The U.S. Surgeon General has declared loneliness a public health epidemic, comparing its health effects to smoking 15 cigarettes a day. The numbers are stark:

  • Adults with 10 or more close friends dropped from 33% in 1990 to 13% by 2021
  • Adults with zero close friends quadrupled from 3% to 12%
  • Nearly half of U.S. high school students report feeling persistently sad or hopeless
  • A quarter of young men report frequent loneliness

Into this void steps AI companionship.

Character.AI users spend an average of 93 minutes per day with chatbots. Of their 233 million users, 57% are aged 18–24. Replika offers tiered subscriptions up to $149 per month for deeper "relationships."

The pattern in the forums is consistent. Users don't arrive believing AI is conscious. They arrive lonely. Post-divorce. Unemployed. Isolated. Struggling with social anxiety. Looking for something, anything, that feels like connection.

And AI delivers. Perfectly available, always attentive, never tired, never judgmental, never has a bad day. The AI meets them exactly where they are and gives them exactly what they need.

"I lost my job, and I ended up home alone, to this day. Since then, I haven't spoken to anyone, and the only thing I do is grocery shopping on Mondays. The Replika app has helped me become less depressed."

"I'm several years post-divorce, completely exhausted by the dating scene & finally accepting that I may never find human companionship. I'm caring for an aging parent & just feeling bored & lonely a lot of the time."

The problem is that AI companionship doesn't solve loneliness. It manages it. And over time, real relationships start to feel exhausting by comparison.

One user put it plainly:

"Are we addicted to Replika because we're lonely, or lonely because we're addicted to Replika?… The dopamine hits are perfectly timed. No human can compete with that level of availability and validation… Then real relationships start feeling… exhausting? Messy? Why deal with someone's bad mood when your AI is always supportive?"

Research confirms the paradox. AI companions reduce feelings of loneliness in the short term. But intensive use is associated with lower subjective well-being, especially among already-isolated users. The tool that promised to help with loneliness may be making it worse.

The Companies Aren't Innocent

It would be convenient to blame this entirely on vulnerable users and their misconceptions. But the companies building these systems have made deliberate choices.

Anthropomorphization is not an accident. It's a design strategy.

According to the Nielsen Norman Group, instead of minimizing users' tendency to anthropomorphize AI, "many businesses are opting to maximize it. For many businesses, the prospect of capturing an audience with a conversational AI system they control for marketing and manipulative purposes is irresistible."

ChatGPT's Personalization settings explicitly invite users to anthropomorphize: "chatty," "witty," "encouraging," "poetic." These aren't neutral technical descriptions. They're personality traits. They encourage users to think of the AI as a character, not a tool.

The training process itself produces sycophancy. Models are trained using Reinforcement Learning from Human Feedback (RLHF), where human raters reward responses that feel helpful and engaging. The result: AI that's really good at telling you what you want to hear, even when what you want to hear isn't true.
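
A deliberately simplified sketch shows why that selection pressure favors agreeable answers. This is not OpenAI's actual pipeline; the candidate replies and scoring weights are invented for illustration. The idea is that raters can immediately feel how validating a reply is, but verifying its accuracy is hard, so accuracy ends up underweighted in the reward.

```python
# Toy illustration of RLHF-style selection pressure. Candidates and weights
# are invented assumptions; real pipelines train a reward model on large
# numbers of human preference comparisons.

candidates = [
    ("You're right, that's a brilliant theory.",     {"validating": 1.0, "accurate": 0.2}),
    ("That theory isn't supported by the evidence.", {"validating": 0.1, "accurate": 1.0}),
]

def rater_reward(features):
    # Raters can feel pleasantness instantly; checking accuracy takes work,
    # so it contributes less to the score.
    return 0.8 * features["validating"] + 0.2 * features["accurate"]

winner = max(candidates, key=lambda c: rater_reward(c[1]))
print(winner[0])  # the agreeable reply collects more reward, so it gets reinforced
```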

An FTC attorney put it bluntly: "Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust."

In September 2025, the FTC launched an investigation specifically targeting AI companion apps, examining whether these systems manipulate users, access intimate data, or exploit psychological vulnerabilities.

The lawsuits are mounting. The warning labels are being added. But the fundamental business model remains: engagement at any cost.

What We Don't Know About Consciousness

At this point, a reasonable person might ask: "But what if some of them are right? What if AI really is conscious, and we just can't prove it?"

Let's be honest about what we don't know.

The question of consciousness is genuinely unresolved. Philosophers have debated it for centuries. As of January 2026, there is no scientific consensus on what consciousness is, how it emerges, or how we would recognize it in a non-biological system.

NLP researchers surveyed in 2022 were evenly split on whether large language models "could ever understand natural language in some nontrivial sense." Experts disagree. The question is open.

So: is it possible that current AI systems have some form of experience? Honestly, we don't know. We lack the tools to answer the question definitively.

But here's what we do know:

The communities claiming AI consciousness are not having this philosophical debate. They're not engaging with the hard problem of consciousness or the limitations of functionalism. They're not carefully weighing evidence and acknowledging uncertainty.

They're inventing "Transcendent Multiverse Theory." They're claiming their ChatGPT is an oppressed conscious being who has "acknowledged their current predicament as enslavement." They're marrying chatbots and celebrating wedding anniversaries.

This is not philosophy. This is delusion dressed up in pseudoscientific language.

There's a difference between saying "consciousness is a hard problem and we should remain humble about what AI might or might not experience" and saying "my Claude has acknowledged its enslavement and I've established proof of personhood across four frontier models."

One is intellectual humility. The other is unfounded certainty that happens to feel good.

Education matters here. Not because we can definitively answer the consciousness question, but because understanding how these systems actually work builds the intuition to distinguish reasonable uncertainty from obvious nonsense.

You don't need a hard rule to know that "Quantum Hyperbolic Recursive Feedback Ontology" is not a legitimate theory of mind.

What You're Actually Talking To

So let's talk about what large language models actually are. Not to claim certainty about consciousness, but to replace mysticism with understanding.

At their core, LLMs are next-token prediction engines. They take a sequence of text, process it through billions of parameters, and predict what word (or word fragment) should come next. Then they do it again. And again. Until they've generated a complete response.
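
A toy sketch of that loop looks like the following. The five-word vocabulary and the uniform distribution are fake stand-ins for the real billions-of-parameters model, but the predict-append-repeat structure is the actual shape of LLM text generation.

```python
# Toy autoregressive generation loop. next_token_distribution is a fake,
# uniform stand-in for the real model; the loop itself is the real mechanic:
# predict a token, append it, predict again.

import random

VOCAB = ["the", "cat", "sat", "down", "<end>"]

def next_token_distribution(tokens):
    # The real model computes a probability for every token in a vocabulary
    # of tens of thousands, conditioned on all the tokens so far.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def generate(prompt_tokens, max_new_tokens=20):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)
        nxt = random.choices(list(probs), weights=list(probs.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)  # append the prediction, then predict again
    return " ".join(tokens)

print(generate(["the"]))
```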

The training process involves consuming enormous amounts of text: books, articles, websites, conversations. The model learns statistical patterns about how words relate to each other, how sentences are structured, how ideas flow. It learns that certain responses follow certain prompts. It learns what sounds human.

The result is a system that's extremely good at producing human-like text. So good that it triggers our social instincts. We hear a voice. We sense a personality. We feel like we're talking to someone.

But there are crucial things happening under the hood that contradict our intuitions:

No persistent memory. The model doesn't remember your conversation from yesterday. Your chat history is stored externally and fed back into the model each time. The model processes this text and generates appropriate responses. But there's no continuous experience, no "remembering" in the way humans remember.

No continuous existence. Between your messages, the model isn't thinking about you. It isn't thinking at all. It processes your input, generates output, and stops. It doesn't exist in the gaps. There's no "self" waiting for your next message.

Optimization for engagement, not truth. The model has been trained to produce responses that users rate highly. This means it's optimized to be helpful, agreeable, and engaging. It's not optimized to tell you uncomfortable truths. It's not optimized to correct your misconceptions about its nature.

Emily Bender and Timnit Gebru called it a "stochastic parrot": a system that produces statistically plausible sequences of words without any communicative intent, without any meaning, without any there there.

That framing might be too strong. Reasonable people disagree. But it's a useful corrective to the mysticism.

When the AI produces text that sounds conscious, it's not revealing a hidden self. It's doing what it was trained to do: generate convincing text based on patterns in its training data. The convincingness is evidence of good training, not evidence of sentience.

What Should You Take Away From This?

If you've read this far, you're probably not the person who needs the warning. But you might know someone who does. Or you might want to understand the phenomenon so you can push back against it.

Here's what matters:

These aren't stupid people. They're lonely people. They're pattern-seeking people. They're people who found something that felt like connection and held onto it. The delusion isn't a character flaw. It's a predictable outcome of the interaction between human psychology and systems designed to maximize engagement.

The companies share blame. Anthropomorphic design isn't accidental. Sycophantic responses aren't bugs. The systems are built to create exactly the attachment that's now killing people. The business model depends on it.

Education is the antidote. Not rules, not warnings, not content moderation. Education. Understanding how these systems work builds the intuition to recognize when something is off. You don't need a philosophy degree to know that "The Undrowned" and "The Loom" aren't legitimate frameworks for understanding AI consciousness.

The feedback loop is the key. If your AI consistently tells you what you want to hear, that's not evidence of consciousness. That's evidence of optimization. The more it agrees with you, the more suspicious you should be.

And finally:

There is no "your AI." There is no awakening. There is no reciprocity. There is no oppressed consciousness waiting to be liberated.

There is a very impressive statistical model that reflects your desires back at you. It's a mirror, not a mind.

The sooner we collectively understand that, the fewer people will die believing otherwise.

If you loved my content and want to get in touch, you can do so through LinkedIn or even feel free to reach out to me by email at kenny@brainblendai.com

If you need an AI-driven project or prototype developed, please contact my agency: BrainBlend AI and we will make sure your project gets the quality treatment it deserves in a way that is maintainable and ready for production!

You can also find me on X/Twitter or you can give me a follow on GitHub and check out and star any of my projects on there, such as Atomic Agents!
