Writing an essay with ChatGPT is seductively easy. You give a prompt, the AI returns a fluent response, and you feel efficient and intelligent for a moment. But what does that convenience do to your brain, memory, and learning process? A new study by researchers at MIT offers one of the most comprehensive answers to date, with EEG measurements, interviews, and detailed analysis of the written texts. The results are both fascinating and slightly alarming.
I should probably confess that I didn’t read all 200 pages of the study this time. I limited myself to the (very extensive) summary and looked up a few parts here and there, mainly where the authors explicitly refer to John Sweller’s cognitive load theory. Viewing the results through that lens makes a lot of sense.
In the experiment, students were divided into three groups. One group wrote essays using only their brain (no tools), the second could use Google, and the third worked exclusively with ChatGPT. Their brain activity was measured using EEG, their essays were analysed using NLP (no, not that kind of NLP), and they were interviewed after each session.
The conclusion is surprisingly straightforward—and quite logical: the more help you used, the less active your brain was. Students who wrote everything themselves showed the strongest and most widespread neural activity. The search engine group came somewhere in between. And those using ChatGPT showed the weakest brain connectivity, especially in the alpha and beta bands, which are associated with attention, planning, and information integration. They literally exerted less mental effort.
And that came at a cost. The students in the LLM group:
- Reported less ownership of their texts (which aligns with earlier research I’ve blogged about);
- Could barely quote from what they had just written (had they even really read it?);
- Delivered more uniform, predictable texts—often strikingly similar to each other and typical of ChatGPT responses.
The underlying cognitive processes were weaker, even when the essays weren’t necessarily bad. That became especially clear in a fourth session where some students switched methods. Those who first used ChatGPT and were then asked to write without tools showed significantly less brain activity than those who made the opposite switch. It was as if their brains had trouble getting started again. A kind of cognitive laziness had set in. That part, to be honest, strikes me as the most worrying.
Of course, it’s tempting to sound the AI alarm, but this study calls for more nuance than panic. The students worked on short essays in a controlled lab setting, not long-term projects, collaborative learning, or revising complex texts. The participant pool was also relatively small (54 in total, 18 per condition), and all were academically strong students from top universities. That context matters.
Still, despite these limitations, the study points to a real risk: if we systematically outsource our thinking, we may learn less, remember less, and feel less connected to our work. Not because ChatGPT is dumb, but because it rarely forces us to think deeply for ourselves.
The comparison to the calculator comes to mind. That tool was once feared, too, but turned out to be incredibly helpful—if you first learned to calculate by hand. The real question is not whether we should ban ChatGPT, but how we can use it thoughtfully: as a tool for revision and reflection, not as a shortcut around the thinking process itself.