Last Friday, I was in Paris for what was, without exaggeration, one of the best lectures I have seen in recent years. Barbara Oakley was on stage, doing what she does better than almost anyone else: making complex insights from cognitive science and neuroscience clear, without flattening them. She touched on the importance of knowledge in the age of AI, drawing connections that, once you have seen them, are hard to unsee.
In her 30 minutes, she also mentioned a preprint that fits her lecture perfectly and that I had already written about earlier on this blog. The paper deserves renewed attention, and hearing her speak prompted me to shift the emphasis in a few places. The piece is titled The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI. Oakley co-authored it with, among others, Terrence Sejnowski, and it is precisely the kind of contribution that is often missing from today’s AI debate. It is neither technologically naïve nor nostalgically defensive, but sharply focused on what learning actually is.
The core argument is as simple as it is uncomfortable. We live in a time when information is always available. Why still memorise anything if you can look it up? Why automate skills if a tool can do the work for you? Oakley and her colleagues offer a non-moral, neurocognitive answer: because that is not how the brain works. Without internalised knowledge, we quite literally learn to think less well. In that sense, their argument aligns closely with what Paul, Casper, and I have been writing about for years: the role of knowledge in the age of search engines.
The paper makes a distinction that educational debates often blur: between declarative and procedural knowledge. Declarative knowledge involves facts and concepts we can consciously recall. Procedural knowledge involves skills that feel automatic, including reading, calculating, reasoning, and recognising patterns. According to the authors, genuine expertise develops when repeated practice converts declarative knowledge into procedural knowledge, a process that relies on well-documented neural mechanisms.
This is where cognitive offloading, the outsourcing of tasks to technology, becomes problematic. Someone who continuously uses a calculator, relies on spell-check, or thinks with ChatGPT can produce correct answers without ever making that internal transition. The brain mainly learns where the answer can be found, not how it is structurally generated. That difference may seem small, but it is fundamental. Knowing where something is stored is not a schema. It is a pointer.
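The pointer metaphor translates almost literally into code. Here is a minimal sketch in Python, with hypothetical names of my own choosing rather than anything from the paper:

```python
# "Pointer" knowledge: you can fetch an answer, but only while the
# external store is reachable, and only for questions it already holds.
external_store = {"7 x 8": 56}

def look_up(question: str) -> int:
    return external_store[question]   # KeyError for anything not stored

# "Schema" knowledge: the procedure itself has been internalised; it
# generates answers and extends to cases the store never contained.
def multiply(a: int, b: int) -> int:
    return a * b

print(look_up("7 x 8"))   # 56, as long as the store exists
print(multiply(9, 12))    # 108, no store required
```

The lookup works only as long as the external store is reachable and already contains the question; the internalised procedure keeps working, and generalises.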
One of the strongest parts of the preprint is its discussion of predictive learning. Our brains learn by forming expectations and testing them against reality. When something unexpected happens, for example, a wrong answer or a surprising result, a learning signal is triggered in the brain. But this mechanism only works if there are internal expectations to begin with. Without internalised knowledge, there is no prediction error, and therefore no real adjustment. This helps explain why pupils who have never automatised multiplication tables sometimes accept wildly incorrect results. Nothing sets off an internal alarm.
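For readers who like to see a mechanism in miniature: the sketch below implements prediction-error learning with the classic delta rule, as in the Rescorla–Wagner model. That is my choice of illustration, not the paper's own formalism, and the names and numbers are mine. The point is in the last lines: with an internalised expectation, a wrong answer produces a clear error signal; without one, there is nothing to compare against, so no signal fires.

```python
# A sketch of prediction-error learning with the classic delta rule
# (as in the Rescorla-Wagner model). Names and numbers are my own.

def delta_update(prediction: float, outcome: float, lr: float = 0.3) -> float:
    """One learning step: move the internal estimate toward the outcome."""
    error = outcome - prediction          # the surprise, or prediction error
    return prediction + lr * error

# Repeated practice: each encounter with 7 x 8 = 56 shrinks the error.
estimate = 0.0
for _ in range(20):
    estimate = delta_update(estimate, 56.0)
print(f"internalised estimate of 7 x 8: {estimate:.1f}")      # ~56.0

# With that internal model, a wrong answer triggers a clear signal:
print(f"surprise at a claimed 63: {63.0 - estimate:+.1f}")    # ~ +7.0

# Without any internal model, there is no prediction to violate,
# so no error signal fires, and nothing gets corrected.
estimate = None
if estimate is None:
    print("no expectation, no prediction error, no learning signal")
```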
The authors also draw an interesting parallel with artificial neural networks. In AI research, the term grokking describes models that, after long periods of seemingly redundant training, suddenly generalise much better. What was once dismissed as overtraining turns out to be crucial for deep understanding. Oakley and her colleagues convincingly argue that the same applies to humans. What is often labelled “drill” in education is, in reality, the foundation for intuition, creativity and critical thinking.
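For those who want to see grokking rather than take it on faith, the sketch below (Python with PyTorch) follows the modular-arithmetic setup in which the phenomenon was first described by Power and colleagues in 2022. Every hyperparameter, from the modulus to the heavy weight decay, is my illustrative choice; whether and when the sudden jump appears depends on those choices and on how long you train.

```python
# A sketch of the grokking setup (after Power et al., 2022): train a small
# network on modular addition far past the point where training accuracy
# saturates. All hyperparameters here are my illustrative choices.
import torch
import torch.nn as nn

P = 97                                          # learn a + b (mod P)
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
split = len(pairs) // 2                         # train on half of all pairs
train_idx, test_idx = perm[:split], perm[split:]

embed = nn.Embedding(P, 64)
net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, P))
params = list(embed.parameters()) + list(net.parameters())
opt = torch.optim.AdamW(params, lr=1e-3, weight_decay=1.0)  # decay matters
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx):
    with torch.no_grad():
        x = embed(pairs[idx]).flatten(1)        # concatenate both embeddings
        return (net(x).argmax(-1) == labels[idx]).float().mean().item()

for step in range(20_000):                      # the "redundant" extra drill
    x = embed(pairs[train_idx]).flatten(1)
    loss = loss_fn(net(x), labels[train_idx])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 1_000 == 0:
        print(f"step {step:6d}  train {accuracy(train_idx):.2f}"
              f"  test {accuracy(test_idx):.2f}")
```

In a long enough run of this kind, test accuracy can leap from near chance to near perfect thousands of steps after training accuracy has saturated: the machine analogue of drill paying off late.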
This makes the paper anything but anti-AI. On the contrary, the authors are strikingly nuanced. Effective collaboration with AI presupposes strong internal models. You can only evaluate output critically, recognise errors or ask meaningful questions if you have internalised sufficient knowledge yourself. AI does not replace that. It widens the gap between those with robust schemas and those without.
The implications for education are substantial and not always comfortable. The paper takes an explicit stance against forms of discovery learning that avoid clear instruction, and against the idea that “learning how to learn” can be separated from content. Learners acquire biologically primary knowledge, such as language, spontaneously. They do not acquire biologically secondary knowledge, including mathematics, reading and science, in the same way. That kind of knowledge requires structure, explanation, practice and repetition. This is not an ideological claim but a neurocognitive one. Others, including some who attended the same conference, question this distinction. For now, however, Oakley’s neurocognitive case is more convincing than the alternatives I have read or heard.
This preprint stands out because it dismantles a false dilemma. It does not pit knowledge against skills, nor humans against machines. Instead, it reframes the issue as one of augmentation without atrophy. Technology can strengthen our thinking, but only when learners first do the underlying cognitive work themselves rather than outsourcing it too early.
After that lecture in Paris and reading this paper, one thought lingers. In a world where almost everything can be looked up, what you do not need to look up becomes more important than ever. Not despite AI, but precisely because of it. In short, the importance of knowledge may well be greater than ever.