“Your brain starts rotting away after just ten minutes of AI use,” Vice headlined this week. Similar interpretations of a new preprint on AI use and cognitive performance appeared elsewhere as well. When I discussed the research on Belgian national radio, I tried to present the study more accurately.
The study itself, however, is more interesting than the headlines suggest. Researchers from Carnegie Mellon, Oxford, and MIT, among others, investigated, in a series of randomised experiments, what happens when people work with AI for a short time and then have to function without it again. In total, 1,222 participants took part in experiments involving solving fractions and reading comprehension. The setup was relatively simple. Some participants were given access to GPT-5 during practice, others were not. Afterwards, everyone had to solve similar tasks independently, without AI.
What did the results show? Participants performed better while using AI. No surprises here. But once the researchers removed the AI, participants who had used it scored worse on average than the control group. Moreover, they gave up more quickly. In particular, participants who had primarily used AI to get direct answers performed less well independently afterwards.
That sounds serious, and it is certainly something to keep an eye on. But the effect sizes were mostly small to moderate. Moreover, in the stricter replication of the experiment, the effect decreased considerably. Additionally, it involved very short, artificial tasks in an online setting. Solving fractions for ten minutes on Prolific (a platform where people sign up to take part in experiments like these) is not the same as learning for months in a real classroom context. The study therefore shows no “brain damage” or permanent cognitive decline. What it does show, however, is something that has actually been known for some time from research into cognitive offloading: when people systematically outsource tasks to tools, they practise certain skills less themselves.
So that idea is not new. We have seen similar discussions before with calculators, GPS systems, search engines, and spell checkers. People remember fewer phone numbers since smartphones have existed. We navigate less effectively without GPS. Those who always use autocorrect may write less accurately without support. That does not automatically mean that technology is bad. It primarily means that support and dependency often go hand in hand.
What is particularly interesting about this study is that not every use of AI had the same effect. Participants who used AI for hints or clarification performed better afterwards than participants who simply asked for solutions. This aligns nicely with older research on scaffolding, productive struggle, and guidance. Good support does not completely take over thinking, but helps people just enough to continue on their own.
And that is perhaps precisely where the real educational debate surrounding AI lies. Not: “Does AI work?” Of course, AI can perform many tasks, but it can also hinder learning. The more relevant question is: what remains when the AI is removed? Or even more precisely: which forms of AI use strengthen autonomy, and which undermine it?
The authors of the study are actually quite nuanced about this. Their criticism focuses less on AI itself than on the fact that current systems are primarily optimised for immediate assistance and user satisfaction. AI systems provide quick answers, resolve friction, and rarely say, “Try again.” That is convenient in the short term, but potentially less ideal for long-term competence development.
That does not mean that we should suddenly ban generative AI from education. But it does mean that the distinction between “performing with support” and “learning without support” is once again taking centre stage. In fact, that is an old educational lesson in a new technological guise. And a discussion that also prompts serious reflection on curricula.