Direct Instruction is nothing new; there are more than 50 years of research behind it. But lately there has been renewed enthusiasm for the approach originally developed by Engelmann and Becker. If you look at the latest PISA results, you can see that they are not that far removed from the findings of the biggest experiment in education ever: Project Follow Through.
But between those two datasets a lot more research on Direct Instruction has been published. That research has now been brought together in a new meta-analysis that has been getting a lot of attention in my Twitter timeline.
And the results are pretty clear:
Our results support earlier reviews of the DI effectiveness literature. The estimated effects were consistently positive. Most estimates would be considered medium to large using the criteria generally used in the psychological literature and substantially larger than the criterion of .25 typically used in education research (Tallmadge, 1977). Using the criteria recently suggested by Lipsey et al. (2012), 6 of the 10 baseline estimates and 8 of the 10 adjusted estimates in the reduced models would be considered huge. All but one of the remaining six estimates would be considered large. Only 1 of the 20 estimates, although positive, might be seen as educationally insignificant.
What does this mean? Well, that Direct Instruction seems to work quite well for reading, math, spelling, language, and more.
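For readers less familiar with the benchmarks in the quoted passage: the effects reported are standardized mean differences (in the style of Cohen's d), where the gap between the intervention and comparison group means is divided by a pooled standard deviation and then compared to criteria such as the .25 threshold often used in education research. The sketch below is only a minimal illustration of that calculation; the numbers in it are hypothetical and are not taken from the Stockard et al. meta-analysis.

```python
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference (Cohen's d) using a pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

# Hypothetical reading scores, NOT data from the meta-analysis.
d = cohens_d(mean_treat=78, mean_ctrl=70, sd_treat=15, sd_ctrl=16, n_treat=60, n_ctrl=60)
print(round(d, 2))  # ~0.52
print("educationally significant" if d >= 0.25 else "below the .25 criterion")
```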
But there is more:
Earlier literature had led us to expect that effect sizes would be larger when students had greater exposure to the programs, and this hypothesis was supported for most of the analyses involving academic subjects. Significantly stronger results appeared for the total group, reading, math, and spelling for students who began the programs in kindergarten; for the total group and reading for students who had more years of intervention; and for math students with more daily exposure. Although we had expected that effects could be lower at maintenance than immediately postintervention, the decline was significant in only two of the analyses (math and language) and not substantial in either. Similarly, while literature across the field of education has suggested that reported effects would be stronger in published than in unpublished sources (Polanin et al., 2016), we found no indication of this pattern.
Contrary to expectations, training and coaching of teachers significantly increased effects in only one analysis (language). We suggest that readers interpret this finding cautiously for we suspect that it reflects the crude nature of our measure—a simple dummy variable noting if teachers were reported as receiving any training or coaching.
Are there no nuances to be made? Well, yes, of course, as with all analyses. The researchers went to great lengths to examine the quality of the studies, but didn't include these insights in their analysis. And the researchers also note the size and heterogeneity of the samples used in their research.
For instance, we did not attempt to compare the results of each of the DI programs with specific other approaches. Nor did we examine outcomes in subdimensions within the various subject areas, such as differentiating reading fluency and comprehension. In addition, many of our measures were less precise than could be considered optimal. The studies differed, often substantially, in the nature and amount of information given.
Abstract of the meta-analysis by Stockard et al.:
Quantitative mixed models were used to examine literature published from 1966 through 2016 on the effectiveness of Direct Instruction. Analyses were based on 328 studies involving 413 study designs and almost 4,000 effects. Results are reported for the total set and subareas regarding reading, math, language, spelling, and multiple or other academic subjects; ability measures; affective outcomes; teacher and parent views; and single-subject designs. All of the estimated effects were positive and all were statistically significant except results from metaregressions involving affective outcomes. Characteristics of the publications, methodology, and sample were not systematically related to effect estimates. Effects showed little decline during maintenance, and effects for academic subjects were greater when students had more exposure to the programs. Estimated effects were educationally significant, moderate to large when using the traditional psychological benchmarks, and similar in magnitude to effect sizes that reflect performance gaps between more and less advantaged students.