Is Evidence-Informed Education Left, Right, or Neutral? Spoiler: None of the Above

In debates about education policy, ideas are quickly sorted into ideological boxes. Left. Right. Centrist. The same increasingly happens with evidence-informed education. Is it progressive? Is it conservative? Or does it float above politics altogether?

The question sounds simple. The answer is not.

In 2024, I watched a debate in Paris between Paul Howard-Jones and Nick Gibb. On paper, one could position them at different ends of the education spectrum. Yet both appeal to research. Both use the language of evidence. And both see themselves as acting in the interest of better schooling. That alone should make us cautious about placing evidence-informed education neatly on a political axis.

Evidence-informed practice is not a party programme. At its core, it is an approach that brings together three sources of insight: systematic research, professional expertise and the realities of context. That sounds neutral, almost technical. It is not.

Research does not decide our goals. It cannot tell us whether we should prioritise equity over excellence, knowledge over skills, or short-term wellbeing over long-term attainment. Those are value choices. Evidence can inform how we pursue goals. It cannot determine which goals we ought to pursue.

This is where much of the confusion starts.

Direct instruction became prominent through Project Follow Through, which was closely linked to Head Start and the ambition to reduce inequality. In that context, structured teaching approaches were part of a social justice agenda. Today, some label knowledge-rich curricula as conservative because they associate them with particular policy choices in England. At the same time, some of the most influential recent research on knowledge-rich approaches explicitly focuses on narrowing attainment gaps.

The ideological label shifts. The underlying research does not necessarily shift with it.

That is precisely why evidence-informed education can frustrate people on different sides of the debate. For some, it appears too prescriptive. For others, it appears insufficiently transformative. It challenges romantic narratives about child-centred discovery learning, but it also resists simplistic claims of a single universally correct method.

Evidence rarely offers slogans. It offers nuance, probability and limitation. It often surprises me that people still read this blog.

For me, being evidence-informed also means being explicit about those limitations. It means acknowledging effect sizes rather than celebrating headlines. It means examining unintended consequences. And it also means asking not only whether something works, but for whom, under what conditions and at what cost.

That is not a neutral stance in the sense of being detached from values. It is a disciplined stance. It demands clarity about what we care about and honesty about what we know.

So is evidence-informed education left, right or neutral?

None of the three.

It is a way of working that cuts across ideological lines. People can use it in the service of different political projects. What it cannot do is remove politics from education. Nor should it pretend to. If anything, it makes the political nature of educational choices more visible. And that visibility is uncomfortable. But it is also necessary.

4 thoughts on “Is Evidence-Informed Education Left, Right, or Neutral? Spoiler: None of the Above”

  1. Does educational research actually offer probability? Hattie is often presented as the leading education researcher to make probabilistic claims, through the Common Language Effect Size (CLE) calculations in Visible Learning. However, numerous peer reviews have shown that ALL of Hattie’s CLE calculations are incorrect. For example, in the section on Feedback, Hattie cites Standley (1996) and calculates a probability of 203%; in Reducing Disruptive Behavior, he cites Reid et al. (2004) and calculates a probability of –49%. These values are obviously impossible, since a probability must lie between 0% and 100%.

    Despite this, several advocates of the “Science of Learning” (SoL) have begun promoting probabilistic language—terms like “probabilities” or the Education Endowment Foundation’s phrase “best bets.” Yet when asked how these probabilities or “best bets” are actually calculated, none have provided an answer. Until such methods are clearly explained and scrutinized, we should be skeptical of claims such as, “The science of learning isn’t about prescriptions, it’s about probabilities.” Without transparent methodology, statements like this amount to opinions or intuitions presented as scientific fact.

    1. The EEF does not calculate literal probabilities that an intervention will work. Their toolkit synthesises effect sizes, evidence strength and cost to identify what they call “best bets”. In other words: approaches that, across multiple studies, tend to produce positive effects on average.

      When people say education research deals with probabilities, they usually mean something simpler: interventions increase the likelihood of learning but never guarantee it. That is simply how effect sizes from experiments and meta-analyses should be interpreted.

  2. Hattie certainly attempted to calculate literal probabilities, but I agree that most people use the term probability in a much looser, more informal way. And that is exactly my concern: using this simplified notion of probability often becomes a rhetorical device that makes an opinion appear more objective or empirically grounded than it really is.

    I also disagree that effect sizes should be interpreted in probabilistic terms. Studies and meta‑analyses frequently produce widely varying effect sizes, and treating them as probabilities glosses over this inconsistency. Effect sizes represent average differences between groups, not the likelihood that any particular student will learn. Reducing them to probabilities oversimplifies the nature of education research and obscures mechanisms, contextual factors, and the true meaning of the quantitative results.
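A concrete footnote to this exchange: the Common Language Effect Size that Hattie attempted to calculate is, as standardly defined (McGraw and Wong, 1992), a genuine probability and therefore bounded between 0 and 1. A minimal sketch of that conversion from Cohen’s d, assuming normal score distributions with equal variances, shows why values like 203% or –49% cannot come from a correct calculation:

```python
from math import erf

def cle(d: float) -> float:
    """Common Language Effect Size (McGraw & Wong, 1992).

    The probability that a randomly drawn score from the treatment
    group exceeds a randomly drawn score from the control group,
    assuming both groups are normally distributed with equal
    variances:

        CLE = Phi(d / sqrt(2)) = (1 + erf(d / 2)) / 2

    where Phi is the standard normal CDF. By construction the
    result always lies strictly between 0 and 1, so probabilities
    of 203% or -49% cannot arise from a correct calculation.
    """
    return (1 + erf(d / 2)) / 2

# No effect (d = 0) means a coin flip; even a large effect
# (d = 0.8 by Cohen's conventions) yields only a modest edge.
print(f"{cle(0.0):.2f}")   # 0.50
print(f"{cle(0.8):.2f}")   # roughly 0.71
```

This illustrates the point made in the replies above: even when an effect size is translated into a probability, it describes an average tendency across students, not a guarantee for any individual learner.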
