Educational research is a house with many rooms. But some doors are better left open.

There are few debates in education that escalate as quickly as those about research. Before we even get to the substance, we are already arguing about paradigms. Positivist. Post-positivist. Interpretive. Critical. As if choosing one lens automatically makes all the others irrelevant.

After my recent post on a study in Nature (actually three studies), I read several responses along those lines online. The piece was said to rely too heavily on a positivist logic. Not all educational research aims at reproducibility. Education is broader than “what works”. The field is richer than a single scientific tradition. There is truth in all of that. But something also feels off.

It may help to think of educational research differently. Not as a single room, but as a house with many rooms. In some rooms, researchers measure, compare and test. In others, they interpret, describe and try to understand. Elsewhere, they reflect on what education should be.

You need the whole house.

The problem starts when one room presents itself as the entire house. When effect research suggests it can answer all relevant questions. But also when other rooms act as if questions about effectiveness, reliability or generalisability do not matter. Or worse, as if they are somehow suspect.

One of the responses I saw argued that reproducibility is not always the aim of educational research. That is correct. But it depends on what you are trying to do. Understanding a classroom practice is different from evaluating an intervention and making broader claims about its impact.

The stronger the claim, the higher the bar for evidence.

Another response suggested that education cannot be reduced to “what works”. Again, that is true. Education involves goals, values and choices that research alone cannot settle. But once we make claims about effects, about learning, motivation or inequality, questions about the quality of evidence become unavoidable. “More than what works” is not an alternative to rigorous effect research. It is a necessary complement.

The tension may lie elsewhere. In practice, these worlds constantly overlap. Research offers, at best, possible principles, not recipes. Concepts such as high expectations and quality feedback only take on meaning in real classrooms. Teachers interpret, translate and adapt. That is what makes education powerful, but also what makes it difficult to study and even harder to implement in a consistent way.

This is not a weakness of research. It is a property of practice.

Which is precisely why it is risky to ignore one part of the house. Thinking about education without engaging with effect research is like deliberately closing a room because you do not like its design. You can do it. But you miss something. Sometimes exactly the part that helps you avoid systematically poor decisions.

The reverse is equally true. Thinking that effect sizes, meta-analyses or randomised studies tell the whole story is just as problematic. They say something about patterns, not about purposes or the values embedded in them. About probabilities, not certainties. About averages, not every individual case.

A good educationalist understands this. They know not only the possibilities of different paradigms, but also their limits. They recognise when a question calls for interpretation, comparison, or deeper reflection. They consider when context should take centre stage and when robustness becomes more important.

Not everything in education can be measured. But what we learn from what is measured deserves to be taken seriously. The issue is not that there are many rooms. The issue arises when we act as if we only need one. Or when we refuse to enter a room because we disagree with how it is furnished. Educational research does not advance by absolutising a single paradigm. But neither does it improve by excluding one on principle.
