Extra evidence for one of the main points of my talk last Saturday in NYC: major article on AI in education retracted

Sometimes confirmation comes faster than expected. Last Saturday in New York, I argued that the real issue with AI in education is not a lack of studies, but a lack of good studies. And that a lot of people are drawing conclusions far too quickly from what is available.

Today, I came across this post by Ben Williamson: a widely shared “meta-analysis” on ChatGPT and learning performance has been retracted by its publisher (Nature) for methodological reasons.

That is striking in itself. But the problem started earlier. A meta-analysis of a technology barely two years old should already raise questions. What you often get in such cases is not a synthesis of strong evidence, but an aggregation of weak studies reinforcing each other.

And yet, this study received enormous attention: hundreds of thousands of views, hundreds of citations, and it was widely shared as proof that “AI works”.

That is exactly the issue. As I wrote earlier (see here: https://theeconomyofmeaning.com/2026/04/30/ai-in-education-what-800-studies-do-and-dont-tell-us/), we now have hundreds of studies, but only a small fraction provides strong causal evidence. The rest mostly tells us what is possible under ideal conditions, not what leads to durable learning.

So this retraction changes less than it might seem. What it really does is make visible what was already there: a field that is growing faster than its evidence quality.

Maybe that is the real takeaway. Not that AI does not work, but that we are still figuring out when, how, and for whom it possibly works. And that requires a bit more patience than the current debate often allows.

Oh, and it helps to make sure I keep writing little rants once in a while.
