In Part One, we dissected the clinician's role in the "illusion of objectivity." We explored how the doctor projects a static model (The Map) onto a chaotic life (The Territory).

But there was another person in that room.

To understand the full mechanics of the diagnostic loop, we must turn the forensic lens onto the patient.

The Public Criteria


In traditional medicine, a patient often arrives with a hypothesis. You might walk in saying, "I think I've torn my meniscus," or "I'm worried about my liver."

The crucial difference is the verification.

You cannot study for an MRI. You cannot influence a blood test for pancreatic enzymes by learning the symptoms of pancreatitis. The diagnostic markers are technical, hidden, and resistant to your narrative. The doctor can verify the territory independently of your map.

Mental health criteria are different. The verification is the narrative.

If your knee hurts, you walk into the doctor's office and say, "My knee hurts when I bend it." You report the raw sensation. You rarely say, "I suspect I have an Anterior Cruciate Ligament tear"; you just know your knee hurts.

In the effort to reduce stigma around mental disorders, the diagnostic criteria have become increasingly public knowledge. Social media hashtags are saturated with discussions of mental health, ADHD, and Autism.

It starts with a raw, internal feeling: "I don't feel normal. I don't feel like everybody else. Why am I struggling?" But we do not stay in that uncertainty. We immediately look for a container to hold the distress.

It continues with a suggestion: "My friend with ADHD said I have it too." It is perpetuated by the algorithm: "I saw a video on social media, and that was so me."

We then say, "I think I have ADHD," or "I suspect I am Autistic." By the time we sit in the chair, we have already swapped the raw feeling for a specific technical label.

This is the downside of what we call "Better Awareness." We frame widespread information as education, as fact: "Now we know what to look for." However, from a forensic perspective, this awareness contaminates the data. How do we know the information we are receiving is accurate?

Because there is no biological test to act as a neutral arbiter, the assessment relies entirely on your self-report. The criteria are public knowledge, littered across social media and ingrained in the algorithm; you enter the room knowing exactly which answers trigger the diagnosis.

The Patient Hypothesis


In the mental health domain, patients do not arrive with a question; they arrive with a preformed hypothesis. They are motivated investigators who have already decided on the conclusion. They walk into the assessment having internalised the criteria.

They know that "Rejection Sensitivity" is a marker, and they have it. They know that "social justice" is a symptom, and they feel it. Because they are "aware" and have already hypothesised the answer, they are unable to provide an unbiased account of their life.

When the clinician asks a question, the patient does not search their memory for the objective truth; they access the specific examples they know fit the criteria. Examples that do not fit are explained away as "masking," "camouflaging," or clinician naivety about the "new" explanations of AuDHD. With no empirical test, the account is unfalsifiable.

The clinician hears the "correct" examples and validates the patient's feelings. The patient feels heard. But functionally, this is not a discovery of a hidden disease; it is the recitation of a cultural script.

See Part One for more on that.

The Bayesian Brain


To understand how a person can unintentionally curate their own pathology, we can look to some neuroscience, specifically Predictive Coding (often called the Bayesian Brain).

The brain is a prediction machine. Patternicity is fundamental to it; pattern-seeking is not a neurodivergent trait but a human one. The brain constantly projects a model of reality outwards and scans the environment for data that confirms its prediction.

When the brain encounters data that contradicts the prediction, it filters that data out unless it is actively processed and examined.

When a patient adopts the hypothesis "I have Executive Dysfunction," their brain creates a filter. It actively scans for evidence that fits and amplifies it. Simultaneously, it suppresses memories of evidence that contradicts the hypothesis.

The belief does not just describe the reality; it curates it.
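This filtering loop can be sketched as a toy Bayesian update (a minimal illustration with made-up likelihoods, not a clinical model): the world emits evidence that is, on average, uninformative, but the observer only "notices" contradicting evidence some of the time.

```python
import random

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One step of Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

def final_belief(trials, p_notice_contradiction, seed=0):
    """Toy predictive-coding loop. Evidence is 50/50 for and against
    the hypothesis, so an unbiased observer should end up near 0.5.
    Contradicting evidence, however, is only processed with
    probability p_notice_contradiction; the rest is filtered out."""
    rng = random.Random(seed)
    belief = 0.5  # start undecided
    for _ in range(trials):
        supports = rng.random() < 0.5
        if supports:
            belief = update(belief, 0.6, 0.4)  # confirming data: always processed
        elif rng.random() < p_notice_contradiction:
            belief = update(belief, 0.4, 0.6)  # contradicting data: often ignored
    return belief

# The same stream of "evidence", two observers:
print(final_belief(200, p_notice_contradiction=1.0))  # processes everything
print(final_belief(200, p_notice_contradiction=0.3))  # filters contradictions
```

With the filter in place, belief drifts toward certainty even though the evidence stream carries no net signal: the belief is curating the data, not describing it.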

Historical Revisionism

Couple this predictive filter with a preformed hypothesis, and it leads to a process of Historical Revisionism.

We established a metaphor in Part One: a human life can be like a Scatter Plot, a messy, chaotic distribution of traits (dots). Once the hypothesis is adopted, the patient engages in unconscious biographical editing.

They look back at the scatter plot of their childhood and begin cherry-picking. They circle the times where their behaviours align (The Dots that fit). They ignore the behaviours that do not (The Outlying Dots).

Complex, contextual human struggles are stripped of their nuance and re-packaged as "symptoms." The fallout of a chaotic home life is now "genetic neurodivergence." Relationship instability is now a "dopamine deficiency" chasing a "fix."

The patient is not lying. They are doing what the human brain is designed to do: creating a coherent narrative out of chaos. But in doing so, they are rewriting their history to ensure it forms a straight line pointing inevitably to the diagnosis.
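This biographical editing has a simple statistical analogue. As a sketch (the "life events" below are random synthetic numbers, nothing drawn from real cases), generate an uncorrelated scatter, keep only the dots near a preferred trend line, and compare the correlation before and after:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)

# A "life" of 500 uncorrelated (situation, struggle) dots: no real pattern.
events = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(500)]

# Biographical editing: keep only the dots near the line y = x,
# i.e. the memories that "fit" the hypothesised pattern.
kept = [(x, y) for x, y in events if abs(y - x) < 1.5]

full_r = pearson(*zip(*events))
kept_r = pearson(*zip(*kept))
print(f"all dots:  r = {full_r:+.2f}")   # near zero: no underlying pattern
print(f"kept dots: r = {kept_r:+.2f}")   # strong: the pattern was selected, not found
```

The full scatter has no structure; the retained subset looks like a clean linear story, because the selection rule, not the data, supplied the line.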

The Nocebo Effect

The placebo effect has an opposite, the nocebo effect: the worsening of symptoms due to negative expectation.

If you believe you have "Executive Dysfunction," the struggle to act is no longer a temporary state; it is a permanent biological defect. The belief validates the paralysis.

See Part Three for elaboration.