It sounds so logical: a good idea that works in one school should work everywhere, right? Yet looking back at the many examples I have seen over the past fifteen years, the reality is much more complicated. Scaling up effective teaching methods is perhaps the hardest challenge in education. But here, too, research can help, specifically the field of implementation science, commonly abbreviated as IS. Anthony Ryan and colleagues wrote a scoping review that provides an overview of what this branch of science can mean for education.
What exactly is the problem? Many educational innovations work well in a controlled environment, such as a small group of schools or a pilot project. However, the results often decline as soon as those methods are applied more broadly. This is also called “voltage drop”: effectiveness decreases as you scale up. This can be due to different contexts, limited resources, or a lack of teacher support. Instead of a one-sided focus on the method itself, IS therefore puts the implementation process itself center stage.
One of the most important lessons we can learn from IS is that context matters. What works in a small, close-knit village school can go completely wrong in a large, diverse urban school, or vice versa. Consider differences in school culture, teacher education level, or parental involvement. Successful scaling, therefore, does not simply mean copying an intervention but adapting it to local circumstances. At the same time, this entails a risk: too many adaptations can change the intervention’s core and reduce its effect, a phenomenon known as “program drift”.
The field of IS offers numerous frameworks and models for analyzing and guiding implementation. However, the literature shows little consistency in the use of these tools, which makes it difficult to compare results and draw lessons. One solution could be to work with a limited set of proven frameworks, such as the Consolidated Framework for Implementation Research (CFIR). This model helps to systematically map barriers and success factors, from teacher involvement to logistical support within a school.
In their review study, the authors emphasize that it is not enough to look only at an intervention’s effectiveness. What matters is how an approach behaves in the reality of the classroom, with all its variables. For example, what obstacles do teachers face? How do students with different backgrounds respond? And how do you ensure that an intervention is not only introduced but also sustainably embedded?
Another point is that we need to pay more attention to long-term research. Many studies of educational innovation are cross-sectional, meaning they provide only a snapshot. The real challenges of scaling up, however, often become visible only in the long term. Think of maintaining teachers’ motivation or preventing program drift.
What can we learn from all these insights? First, scaling starts with a good understanding of the context in which you work. No two schools are the same, and your approach must acknowledge that. In addition, consistency in research is important: by using proven frameworks and developing a common language, we can collaborate and share knowledge more effectively.
Finally, scaling up successful educational innovations requires a different mindset. It is not enough to simply copy an intervention and hope for the same results. It requires a process-oriented approach in which you continuously learn, adapt and evaluate. Perhaps that is the key to sustainable change in education: not the perfect method, but the willingness to continually adapt that method to the unique challenges of each school.
Abstract of the review study:
Educational reform through the scaling of evidence-based practices has been extremely difficult to achieve in practice. This scoping review examines the extent to which Implementation Science (IS) has been used to investigate the scaling of interventions in school settings and what has or could potentially be learnt from these investigations.

Scopus, ProQuest, and EBSCO databases were searched for studies that involved scaling of an intervention in a school setting and made reference to IS. A wide range of methodologies (observational, quantitative, qualitative and mixed methods) in publications including journal articles, book chapters and reports was included. Extracted data were grouped and analysed under Nilsen’s IS classification system of determinant frameworks, evaluation frameworks, process models, classic theories and implementation theories. Inductive analysis of recurring themes in the literature was performed.

The use of IS in the study of scaling interventions in school settings is in its early stages, with just 101 studies identified. Of those studies, there has been little systematic and considered use of IS in the scaling of interventions in schools. Twenty-eight factors considered important in the scaling of interventions in school settings were identified, but only four in five papers nominated an IS framework, model or theory as a guiding principle for assessing implementation. Only two out of three studies reported an implementation outcome (66%) and, of those studies that did, one in three reported a single implementation outcome (33%). There was also a lack of consistency in terminology, variability in the application of IS tools, and limited longitudinal investigation. The large number of IS conceptual tools (n = 47) employed, combined with variability in application, revealed that a fragmented approach to the use of IS currently exists in educational implementation research.

We argue that using a limited number of IS conceptual frameworks (preferably over at least a two-year period) would enhance the study of scaling interventions in schools. A reduced range of IS tools and consistent terminology to conceptualise and discuss implementation would enable a solid research base to be established. To move beyond fidelity measurement, the following areas need to be examined and reported: (1) the range of contexts in which the intervention is being implemented; (2) the barriers and facilitators studied; (3) multiple implementation outcomes; and (4) the intervention outcomes.