Sometimes, in your classroom, you do not see any difference. Two students complete the same exercise, give the same correct answer, and seem to finish at the same time. Everything suggests they are equally capable. That is, until you look more closely at the underlying cognitive processes, for example in maths learning.
A recent study in the Journal of Neuroscience by Hyesang Chang and colleagues does exactly that, and more: it looks more deeply than a teacher typically can. The researchers compared children with and without mathematical difficulties on a fairly simple task: indicating which of two quantities is larger. Sometimes these were dot arrays, sometimes numbers.
At first glance, nothing remarkable seems to be happening. Both groups perform about equally well. They are similarly accurate and similarly fast. No major differences. But that is precisely where things become interesting.
The researchers did not only look at what the children did, but also at how they arrived at their answers. Using a fairly advanced model, they tried to make the underlying cognitive processes visible. Not the final result, but the path towards it.
And then differences do appear: children with mathematical difficulties seem to rely more on slower, less efficient processes. They need more time internally, even if that is not always visible in the final response time. Children without such difficulties process the same task more efficiently, even when their observable performance looks similar.
In other words, two identical answers can come from very different cognitive routes.
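To see how that can happen, consider a toy evidence-accumulation (drift-diffusion) simulation. This is a hedged sketch of the general idea, not the model used in the study, and every parameter value below is invented purely for illustration: two simulated children produce near-identical accuracy and response times, yet split that time very differently between the decision process itself and everything around it (encoding, responding).

```python
import random

def ddm_trial(drift, threshold, noise=1.0, dt=0.001, max_t=5.0, rng=random):
    """One trial of a symmetric drift-diffusion process.

    Evidence starts at 0 and drifts toward +threshold (correct answer)
    or -threshold (error) with Gaussian noise added at each time step.
    Returns (correct, decision_time).
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, t

def simulate_child(drift, threshold, non_decision, n=1000, seed=1):
    """Mean accuracy, observable response time, and hidden decision time."""
    rng = random.Random(seed)
    trials = [ddm_trial(drift, threshold, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    decision = sum(t for _, t in trials) / n
    return accuracy, decision + non_decision, decision

# All parameter values are hypothetical, chosen only for illustration.
# Child A: slower, effortful evidence accumulation, quick encoding/response.
a_acc, a_rt, a_dec = simulate_child(drift=2.0, threshold=1.0, non_decision=0.30)
# Child B: fast, efficient accumulation, more time spent outside the decision.
b_acc, b_rt, b_dec = simulate_child(drift=4.0, threshold=0.5, non_decision=0.66)
```

Both simulated children end up with similar accuracy and similar mean response times, but child A spends roughly four times as long in the evidence-accumulation stage as child B: the same observable behaviour, reached by a different internal route.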
That matters.
Because in practice, we often focus on what is visible. Correct or incorrect. Fast or slow. But this study suggests that those surface indicators do not always tell the full story. Two students who appear to be at the same level may actually be working very differently beneath the surface. And that has implications for how we interpret learning difficulties.
It is tempting to assume that if a student struggles with maths, the problem lies in their mathematical knowledge or skills. But sometimes the issue may be more fundamental. Not in what they know, but in how they process information.
This also helps explain why some interventions do not always work as expected. If we only target observable performance, we may miss the underlying processes that actually drive it.
At the same time, we should be cautious. Because this is not a study that immediately translates into clear classroom interventions. The methods used are complex, and the models are not something teachers can directly apply. But that does not make the findings irrelevant. On the contrary, they remind us of something important.
The implication is not that we suddenly all need brain scans. It is that it makes sense to pay attention to what happens beneath the surface. Not only "did you get it right?", but also "how did you know?", "what did you do when you were unsure?", and "what did you change after a mistake?"
The conclusion, then, is also this: metacognition is not an add-on to mathematics. It sits right at its core.