For most of the 20th century, humanity got demonstrably smarter. In a phenomenon known as the "Flynn Effect," average IQ scores climbed three to five points a decade across dozens of countries, progress fueled by better nutrition, healthcare, and education.
Then, around the turn of the millennium, it stopped.
In many Global North countries, IQ scores started moving in reverse.
A 2023 analysis of a large US adult sample documented a measurable decline in cognitive performance, with average IQ scores falling since the late 1990s. Similar reversals are documented across Europe, from a four-point drop in France to sustained declines in Norway.
What I found most interesting is that it's not our DNA. A Norwegian study of over 730,000 individuals concluded that the cause of our cognitive decline is environmental. Which means it's something we are doing.
So what is it?
While a range of factors are likely at play, one of the most compelling suspects in this cognitive mystery is a deceptively simple question that has become our collective reflex: "Why remember what I can just Google or ask ChatGPT?"
While the question seems logical (and I've blindly followed it for most of my adult life), it's based on a fundamental misunderstanding of what memory is for, and this seemingly harmless habit is quietly eroding the very neural architecture that expert thinking is built upon.
A Mind Without Knowledge Is a Body Without Muscle
"Why remember what I can just Google or ask ChatGPT?" reframes memory as a burden. An obsolete filing cabinet for facts. But, as I've shared in my 2022 TED talk, this fundamentally misunderstands how you build a rich, interconnected library of knowledge in your own mind.
Consider an expert basketball player. When she first learned to dribble, she had to consciously think through every single step. It was clumsy, slow, and took up all her mental energy. In neuro-speak, she was using her working memory to handle every piece of the task.
But after hours of deliberate, sometimes frustrating repetition, her brain bundled all those steps into one smooth, efficient package, a so-called "chunk." Now, the act of dribbling is a single, ingrained neural pattern. It runs on autopilot, taking up just one slot in her limited working memory. This is what frees up her mind to do the real work: to scan the court, read the defense, and make the play.
Thinking works in exactly the same way. When you truly learn a concept, whether it's a math equation or a historical argument, you are bundling it into a compact chunk. This is the goal of all effective learning: to create a vast mental library of these chunks that you can access instantly. This is what frees your working memory for higher-order thought: synthesis, creativity, and strategic insight.
Constant cognitive offloading, through Googling, a "second digital brain," and now ChatGPT, short-circuits this process. We get stuck at "knowing about" a topic, never reaching the automaticity of "knowing how."
If you can't recall it without a device, you haven't truly learned it. You've rented the information. A mind full of facts you can look up but not truly know is like a beautiful brick house with no mortar. It looks impressive from a distance, but the moment you lean on it, the moment you need to solve a novel problem or come up with a creative insight, the entire structure collapses.
Three Ways We Short-Circuit Our Own Brains
Outsourcing our thinking to devices undermines the three core processes of deep learning:
- Automaticity: The ability to perform a skill without conscious thought — like reading this sentence — only comes from repetition. It frees up your mental bandwidth. When you outsource basic calculations or recall, you never build that automaticity, leaving your mind constantly bogged down by the basics.
- Schema Construction: A schema is a mental framework that organizes knowledge. Neurologically, these are built from efficient patterns of neural activity, like a well-organized filing system in the brain. They are what separate an expert from a novice. Looking things up gives you a single file; deep learning builds the entire cabinet.
- Prediction Error: The brain learns best when it's surprised — when it detects a mismatch between an expectation and a result. This only works if you have an internal prediction to begin with. If you rely on a calculator for 5 x 10 and a typo gives you "500," your brain feels no error because it never made a prediction. You bypass a fundamental learning mechanism.
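The prediction-error principle above is the same one formalized in classic delta-rule learning models. Here's a minimal sketch (the function name and learning rate are illustrative, not from any specific model in the literature): the update is proportional to the gap between expectation and outcome, so a learner who never makes a prediction generates no error signal and nothing to update.

```python
def delta_rule_update(prediction: float, outcome: float, learning_rate: float = 0.1) -> float:
    """Delta rule: learning is driven by prediction error (outcome minus expectation)."""
    error = outcome - prediction            # the "surprise" signal
    return prediction + learning_rate * error

# An engaged learner first commits to an internal estimate of 5 x 10...
estimate = 40.0                             # a wrong but real prediction
estimate = delta_rule_update(estimate, 50.0)  # the error nudges the estimate toward 50

# A calculator user supplies no prediction at all: there is no error term,
# so this update never fires and no learning takes place.
```

The design point is that the error term is the entire engine: delete the prediction and the whole mechanism goes silent, which is exactly what outsourcing the answer does.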
This cognitive decline is now colliding with mass AI automation. Some argue this isn't a decline, but a shift to a new kind of intelligence. But I find this view naive.
Creativity and insight arise from a rich, internalized network of knowledge, not in spite of it. Without that foundation, AI collaboration becomes simple order-taking. When the system needs an adaptable, creative thinker to handle a novel crisis, it will find… no one.
A Playbook for a Stronger Mind
The fix isn't nostalgia for pre-Internet memory drills. It's a simple strategic framework:
- Forge Knowledge, Don't Just Find It. The goal isn't memorization instead of thinking; it's memorization for thinking. Reintroduce active recall into your life. Use simple tools like flashcards and low-stakes quizzes to forge a mental library you can draw on instantly. Actively retrieving information is one of the most powerful ways to build lasting knowledge.
- Follow a Stepwise Progression. Learn like an apprentice. Before you use AI to help write a report, write the first draft yourself. Before you use an AI code generator, manually write and debug a small algorithm. Build your cognitive muscles in a safe environment first. Then bring in the power tools to accelerate your growth.
- Wield Technology as a Complement, Not a Crutch. In your work and learning, make it a rule to attempt problems with your mind first. Use AI as a coach to check your work or a collaborator to challenge your thinking — not as a first-pass solution. This turns technology from a tool that weakens you into one that makes you stronger. In the workplace, this means investing in juniors even when AI is faster. You're not just building output; you're building resilience.
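The active-recall habit in the first point can be sketched as a tiny Leitner-style flashcard scheduler. This is a simplified illustration, not a prescribed system: the box count and review intervals are assumptions I've chosen for the example. Cards you recall correctly move to boxes reviewed less often; cards you miss drop back to daily review.

```python
from dataclasses import dataclass

# Days until next review for each Leitner box; these intervals are illustrative.
INTERVALS = [1, 3, 7, 14, 30]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0  # new cards start in the most frequently reviewed box

def review(card: Card, recalled_correctly: bool) -> int:
    """Promote a card one box on success, demote it to box 0 on failure.
    Returns the number of days until the card's next review."""
    if recalled_correctly:
        card.box = min(card.box + 1, len(INTERVALS) - 1)
    else:
        card.box = 0
    return INTERVALS[card.box]

card = Card("Capital of Norway?", "Oslo")
review(card, True)   # promoted to box 1: next review in 3 days
review(card, False)  # forgotten: back to box 0, review again tomorrow
```

The mechanics don't matter as much as the act of retrieval itself: each review forces your brain to make a prediction before it sees the answer, which is precisely the learning signal that looking things up skips.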
The Choice That Makes an Expert
The trendlines point in a worrying direction. As a society, we've begun to favor the easy dopamine hit of instant answers over the satisfying, deep work of cognition.
The rise of AI puts this trade-off into hyperdrive. The neurological principle here is simple. Neural pathways that go unused begin to weaken. Your brain is a ruthless gardener: it prunes the paths you stop walking.
The ultimate question is not about nostalgia for a pre-digital world. It's about the kind of mind you intend to inhabit.
Every time you face a gap in your knowledge, you make a choice. You can make a short-term withdrawal for an immediate answer, or you can make a long-term investment by doing the work to internalize the knowledge.
The first option feels efficient. The second is what builds your cognitive capital: the interconnected mental library that allows for creative leaps and resilient problem-solving.
References
Bratsberg, B., & Rogeberg, O. (2018). The Flynn effect and its reversal are both environmentally caused. Proceedings of the National Academy of Sciences, 115(26), 6674–6678. https://doi.org/10.1073/pnas.1718793115
Dworak, E. M., Revelle, W., & Condon, D. M. (2023). Looking for Flynn effects in a recent online U.S. adult sample: Examining shifts within the SAPA Project. Intelligence, 98, Article 101734. https://doi.org/10.1016/j.intell.2023.101734
Fan, X., Zhang, Y., & Chai, Z. (2024). Using ChatGPT in educational settings: A blessing in disguise. Innovations in Education and Teaching International, 1–13. https://doi.org/10.1080/14703297.2024.2323537
Flynn, J. R. (2009). Requiem for nutrition as the cause of IQ gains: Raven's gains in Britain 1938–2008. Economics and Human Biology, 7(1), 18–27. https://doi.org/10.1016/j.ehb.2009.01.009
Karpicke, J. D., & Roediger, H. L. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966–968. https://doi.org/10.1126/science.1152408
Oakley, B., Johnston, M., Chen, K., Jung, E., & Sejnowski, T. (2025, May 11). The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI. SSRN. https://ssrn.com/abstract=5250447
Pietschnig, J., Voracek, M., & Gittler, G. (2012). Is the Flynn effect related to migration? Meta-analytic evidence for correlates of stagnation and reversal of generational IQ test score changes. Intelligence, 40(1), 81–91. https://doi.org/10.1016/j.intell.2011.11.003
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google's effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. https://doi.org/10.1126/science.1207745
Sundet, J. M., Barlaug, D. G., & Torjussen, T. M. (2004). The end of the Flynn effect? A study of secular trends in mean intelligence test scores of Norwegian conscripts during half a century. Intelligence, 32(4), 349–362. https://doi.org/10.1016/j.intell.2004.06.004
Teasdale, T. W., & Owen, D. R. (2008). Secular declines in cognitive test scores: A reversal of the Flynn Effect. Intelligence, 36(2), 121–126. https://doi.org/10.1016/j.intell.2007.03.004
Trahan, L. H., Stuebing, K. K., Fletcher, J. M., & Hiscock, M. (2014). The Flynn effect: A meta-analysis. Psychological Bulletin, 140(5), 1332–1360. https://doi.org/10.1037/a0037173