Regular testing (with feedback) can have a positive effect on learning. This is known as the testing effect: long-term memory is improved when part of the learning period is devoted to retrieving the to-be-remembered information. A review in a special issue of Educational Psychology Review by Van Gog and Sweller now shows that it's a bit more complicated than that. Most studies on the testing effect actually used relatively simple learning materials; as the researchers state:
“..very little (published) research has investigated whether the testing effect would also apply to learning complex, high element interactivity materials.”
In the special issue (and lucky for us: open access) this not-really-new but often forgotten insight is tested (pun intended) with a batch of new studies, and the conclusion is clear:
In sum, these studies seem to suggest that the complexity of learning materials may reduce or even eliminate (as several studies in this special issue suggest) the testing effect. Interestingly, this insight is not new, although, as was once true of the testing effect itself (Glover 1989), it seems to have been nearly forgotten.
The authors also try to explain why the effect decreases:
…it seems that the complexity of learning materials reduces or eliminates the testing effect either by reducing the effectiveness of testing relative to restudy because the “organizational/relational processing” afforded by testing no longer presents a benefit or by increasing the effectiveness of restudy relative to testing, by affording processing of relations between elements and maintaining student motivation for engaging in restudy.
Still, there is already this reaction by Karpicke and Aue:
Van Gog and Sweller (2015) claim that there is no testing effect—no benefit of practicing retrieval—for complex materials. We show that this claim is incorrect on several grounds. First, Van Gog and Sweller’s idea of “element interactivity” is not defined in a quantitative, measurable way. As a consequence, the idea is applied inconsistently in their literature review. Second, none of the experiments on retrieval practice with worked-example materials manipulated element interactivity. Third, Van Gog and Sweller’s literature review omitted several studies that have shown retrieval practice effects with complex materials, including studies that directly manipulated the complexity of the materials. Fourth, the experiments that did not show retrieval practice effects, which were emphasized by Van Gog and Sweller, either involved retrieval of isolated words in individual sentences or required immediate, massed retrieval practice. The experiments failed to observe retrieval practice effects because of the retrieval tasks, not because of the complexity of the materials. Finally, even though the worked-example experiments emphasized by Van Gog and Sweller have methodological problems, they do not show strong evidence favoring the null. Instead, the data provide evidence that there is indeed a small positive effect of retrieval practice with worked examples. Retrieval practice remains an effective way to improve meaningful learning of complex materials.
Still, I think one can only subscribe to the plea of the review authors, Van Gog and Sweller, when they conclude:
It would help teachers and instructional designers to know for which learning tasks they can and cannot expect benefits of having their students take practice tests instead of engage in further study.
The testing effect is a finding from cognitive psychology with relevance for education. It shows that after an initial study period, taking a practice test improves long-term retention compared to not taking a test and, more interestingly, compared to restudying the learning material. Boundary conditions of the effect that have received attention include the test format, retrieval success on the initial test, the retention interval, and the spacing of tests. Another potential boundary condition concerns the complexity of learning materials, that is, the number of interacting information elements a learning task contains. This insight is not new, as research from a century ago had already indicated that the testing effect decreases as the complexity of learning materials increases, but that finding seems to have been nearly forgotten. Studies presented in this special issue suggest that the effect may even disappear when the complexity of learning material is very high. Since many learning tasks in schools are high in element interactivity, a failure to find the effect under these conditions is relevant for education. Therefore, this special issue hopes to put this potential boundary condition back on the radar and provide a starting point for discussion and future research on this topic.