Policymakers and teachers regularly ask whether targeted interventions for learners with special educational needs and disabilities (SEND) actually work, and if they do, for whom, when, and under what conditions. Yet research has rarely addressed these questions with real calm and distance.
That question sits at the heart of a large systematic review and meta-analysis, recently published in Review of Education, led by Jo Van Herwegen and an impressive international team of co-authors. The value of this paper lies not in bold claims, but in something much rarer in this field: it brings structure and clarity to a fragmented and often ideologically charged research landscape.
The starting point is clear. Across many countries, the number of pupils identified as having SEND has increased, while their average learning outcomes continue to lag behind those of their peers. Since the pandemic, that gap has tended to widen rather than narrow. There is no shortage of intervention studies, but most focus on a single group, a single subject, or a narrow context. What has been missing is a broad, systematic overview of targeted interventions: not general teaching quality, but additional, purposefully designed support, examined across SEND groups, subjects, and educational settings.
The scale of this review is unusual. For the narrative synthesis, the authors included 467 studies involving almost 59,000 learners, though small sample sizes are evident throughout. For the meta-analysis, 349 studies remained, contributing 1,758 individual outcome measures. The review focuses on randomised controlled trials and quasi-experimental designs; it was preregistered where possible, conducted in line with PRISMA guidelines, and gives explicit attention to risk of bias and heterogeneity. Methodologically, this is solid, careful work.
So what emerges?
Across outcomes in reading, writing, mathematics, and overall academic achievement, targeted interventions for learners with SEND yield an average effect size of g = 0.44. Using common conversions, this corresponds to roughly five months of additional learning. That is meaningful. At the same time, heterogeneity is substantial, particularly between studies. This is not a story of “it always works”, but of “it can work, under the right conditions”.
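For readers less familiar with effect sizes: g here refers to Hedges' g, a standardised mean difference between an intervention group and a comparison group, with a small-sample correction. A minimal sketch of the calculation, using entirely hypothetical group statistics (the means, standard deviations, and sample sizes below are illustrative, not drawn from the review):

```python
import math

def hedges_g(m1: float, s1: float, n1: int,
             m2: float, s2: float, n2: int) -> float:
    """Standardised mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp  # Cohen's d
    # Approximate small-sample correction factor (Hedges' J)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Hypothetical example: intervention group scores 4 points higher
# on a test with SD 10, with 30 learners per group.
g = hedges_g(m1=52.0, s1=10.0, n1=30, m2=48.0, s2=10.0, n2=30)
```

With these illustrative numbers, g comes out just under 0.4, i.e. the groups differ by a little under half a pooled standard deviation. Translations of such values into "months of additional learning" rest on further conversion assumptions and should be read as rough heuristics rather than precise measurements.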
Equally informative is what does not strongly differentiate effects. When outcomes are considered together, there are no consistent differences between SEND categories. In other words, there is no simple hierarchy of diagnoses for which interventions are effective or ineffective. This aligns with a broader shift towards transdiagnostic approaches, where learning needs and instructional demands matter more than labels as such.
When looking more closely by domain, the picture becomes more nuanced. Effects are positive for reading, writing, and mathematics, with the largest average effect observed in mathematics. At the same time, some strikingly large effects (for example, in writing interventions for learners with social, emotional, and mental health difficulties) are driven by very small numbers of studies. This calls for caution. The paper does not invite cherry-picking effect sizes, but rather encourages attention to patterns and to gaps in the evidence base.
Context also matters, though perhaps not in the way often assumed. Overall, there are no large differences between mainstream and special education settings for reading and writing outcomes. For mathematics, effects in mainstream settings are in fact slightly larger. Likewise, the mode of delivery (individual, small group, or whole class) shows less differentiation on average than is commonly expected. This may be uncomfortable for those looking for simple prescriptions, but it is a realistic reflection of educational complexity.
One factor that does stand out is the educational phase. For mathematics, interventions in primary education are clearly more effective than those spanning multiple phases. For writing, larger effects appear in secondary and post-secondary education. This suggests that timing matters, and that “earlier is always better” is too crude a rule to apply across domains. That finding, in particular, challenges some deeply ingrained assumptions.
Perhaps most important are the absences this review makes visible. For many SEND groups, the evidence base remains thin. Research in this field repeatedly concentrates on the same areas: early reading, young children, and specific learning difficulties. Other groups are consistently underrepresented. That is not a minor methodological footnote, but a structural issue with clear policy implications.
What makes this review strong is precisely what it does not do. It does not promise easy solutions. It does not claim that inclusion automatically works, nor that targeted interventions are a universal remedy. Instead, it shows that targeted support can improve learning outcomes, provided we take context, instructional quality, and limitations seriously.
For practitioners and policymakers, this is not a handbook. It is, however, a much-needed anchor point in a debate that too often oscillates between optimism and scepticism without sufficient empirical grounding.
That Jo and her team carried out this work with such care is no surprise. That it remains so necessary, unfortunately, is.