If you have ever used ChatGPT to understand a topic quickly, you may have noticed how efficient it feels. Within seconds, you get a clear, neatly packaged answer. It is often better written than what you would find through a random web search. However, according to new research from Wharton and New Mexico State University, that very convenience may come at a price. When learning feels too easy, we may end up learning less deeply.
In a series of seven experiments with more than 10,000 participants, Shiri Melumad and Jin Ho Yun compared how people learn through large language models (like ChatGPT) versus traditional web searches. The results were remarkably consistent. People who used ChatGPT felt they had learned less, produced advice that was less detailed and less original, and wrote responses that readers judged less informative and persuasive. Even when the underlying information was identical and only its presentation differed, those who received an LLM summary learned less than those who had to navigate web links themselves.
The explanation is not mysterious if you understand how much retention depends on thinking. When we search the web in the old-fashioned way, we have to explore, compare, and decide which sources to trust. That process takes effort, but it also helps us construct mental models and connect ideas. When an LLM does that synthesis for us, we skip the part of learning that requires active thought. The researchers connect this to the idea of “desirable difficulties” in cognitive psychology: learning becomes deeper when it is effortful enough to force engagement.
In one of the most telling follow-up studies, the researchers asked other participants to read and evaluate the advice written by those who had learned through either Google or ChatGPT. The readers did not know which was which, yet they consistently found the LLM-based advice less helpful, less trustworthy, and less worth adopting. The difference in learning quality was not just in people’s heads; it was visible in the output itself.
None of this means we should stop using AI for learning. The study itself is careful about its boundaries: LLMs may still be better for acquiring factual or procedural knowledge, and they can certainly make information more accessible. Still, it is a timely reminder that friction can be a feature, not a flaw. If an answer arrives too easily, we may be skipping the very steps that make learning meaningful.
That said, it is worth keeping some perspective. The study is impressively designed, but the tasks were relatively brief and straightforward. Learning how to grow vegetables or give lifestyle advice is not the same as mastering a discipline or developing expertise. What the research captures is the process of learning rather than its long-term outcomes. It reminds us that ease and efficiency can make learning feel smooth, but that smoothness can also be misleading. The real question for education is not whether AI makes us think less, but how we can design its use so that it encourages us to think more.
It also raises a broader question. Most search engines today already rely on AI to interpret queries, curate results, or even serve up answers directly instead of links, so the line between “AI search” and “AI answers” is quickly disappearing. The difference may no longer lie in the technology itself, but in how much agency it affords the learner. The more a system thinks for us, the less we may think for ourselves.