Last year I had the honor and pleasure of writing an article together with David Daniel about the role psychological research plays in informing teaching. This article has now been published in Canadian Psychology/Psychologie canadienne.
The typical set-up in much of the pedagogical research is a treatment vs. no treatment (or “business as usual”) design. This design, while common, is rather limited in the conclusions one may reach. At best, a something-vs.-nothing comparison can demonstrate only that doing something is better than doing nothing (or that it is not) (Willingham & Daniel, under review).
While an active control group, for example comparing the new intervention to one already known to have high impact, would yield much more useful information for teachers, many researchers may not want to gamble with such a design. It is too risky for many researchers if one of the goals is to publish: If the new technique is not significantly better than the active control, it would not be a good candidate for publication in many outlets. This approach erroneously assumes that a constrained number of “best practices” exist and that the goal of the literature is to find the singular “king of the mountain.” But what if we found ANOTHER great strategy that worked JUST as well in the classroom? Wouldn’t that be a wonderful addition to the literature? “Just as well” as something great can be a fantastic contribution to teaching, learning, and science. In this case, nonsignificance would be a valuable outcome.
As mentioned above, a demonstration of equivalence, though statistically nonsignificant, can be incredibly significant for teaching. For example, a statistically nonsignificant difference between technique X and the known-to-be-effective technique Y suggests that X works as well as Y, which adds breadth and flexibility to the teaching arsenal. Such an emphasis better serves the teaching community by providing alternatives from which to draw and adapt to teaching style, context, etc.
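One way to make the “works just as well” claim statistically rigorous is an equivalence test rather than a plain difference test, since failing to find a difference is not itself evidence of equivalence. Below is a minimal sketch of the standard two one-sided tests (TOST) procedure; the data, the 5-point margin, and the helper name `tost_equivalence` are all hypothetical illustrations, not anything from our article.

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin):
    """Two one-sided tests (TOST): p-value for the claim that the
    mean difference between groups a and b lies within +/- margin."""
    n1, n2 = len(a), len(b)
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / n1 + np.var(b, ddof=1) / n2)
    df = n1 + n2 - 2  # simple pooled df; a Welch correction is also common
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper)  # equivalence supported if this is < alpha

# Hypothetical (simulated) exam scores: new technique X vs. active control Y
rng = np.random.default_rng(1)
scores_x = rng.normal(75, 10, 80)
scores_y = rng.normal(75, 10, 80)

# Treat a difference of fewer than 5 exam points as practically equivalent
p = tost_equivalence(scores_x, scores_y, margin=5.0)
print(f"TOST p = {p:.4f}")
```

The key design choice is the equivalence margin: the researcher must state in advance how large a difference would still count as “just as well,” which forces exactly the kind of practical-significance judgment discussed above.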
Much, much more in the article.
Abstract of our article:
The need for a primary emphasis on teaching is a necessary, and as yet unfulfilled, goal of psychological science. We argue that an ecological model focused specifically upon understanding and optimizing teaching practice must incorporate the necessary complexity inherent to the teaching and learning process. To do so, we must expand our scope beyond the simple exploration of main effects under controlled conditions to the exploration of dynamic interactions, including the identification of boundary conditions, and the assessment of potential side-effects across relevant variables and contexts. To do so, foci on internal and external validity must be re-balanced in a manner more productive for practical inferences and applications. With an eye on educational practice, we point out that statistically insignificant results, under certain circumstances, can yield very useful strategies for teaching. Therefore, researchers interested in practical applications for teachers should be encouraged to use active control groups in their studies when feasible. We also argue that practical significance must include context-relevant information, for example, a ratio between the degree to which the findings can be used in context without upsetting other learning objectives and the amount of benefit given the costs (both time and energy) of the intervention, as an essential component to evaluating the potential utility of teaching research. Thus, statistically significant results must be weighed with respect to both effect-size and the practicality of implementation by teachers in authentic educational contexts before being considered a candidate for use in the classroom.