Every month, I record a Dutch podcast with Rinke. He recently sent me a preprint of a meta-analysis by Nathan Welker and colleagues on so-called Content Acquisition Podcasts (CAPs). These are short, multimodal instructional modules that appear mainly in research on (special) teacher education. Think compact micro-instruction, not the kind of podcasts you listen to on your commute.
The meta-analysis pools 29 studies and reports substantial effects on three outcomes:
- knowledge (g ≈ 0.82)
- application of that knowledge (g ≈ 0.82)
- retention after several weeks (g ≈ 0.86)
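For readers unfamiliar with the metric: Hedges’ g is a standardised mean difference, i.e. the gap between the treatment and comparison group means expressed in pooled standard-deviation units, with a small-sample correction. By Cohen’s conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), values around 0.8 count as large. A sketch of the standard definition, not taken from the preprint itself:

```latex
% Hedges' g: standardised mean difference with small-sample correction J.
% \bar{x}_T, \bar{x}_C: treatment and comparison group means;
% n_T, n_C: group sizes; s_T, s_C: group standard deviations.
g = J \cdot \frac{\bar{x}_T - \bar{x}_C}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_T - 1)\, s_T^2 + (n_C - 1)\, s_C^2}{n_T + n_C - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_T + n_C) - 9}
```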
These are large numbers, especially in education, and far from trivial at first glance. Before we draw conclusions, though, a few points require nuance. That’s where I come in.
A few considerations
1. The entire body of literature comes from the same research group
That makes the line of research very consistent, but it limits certainty. Replication by other teams is not a luxury. It is a necessity. To me, an intervention that yields strong results only within one group, however competent that group may be, is not yet a broadly established finding.
2. Most outcomes were measured with researcher-developed tests
Understandable: you want to measure exactly what the CAP taught. However, it complicates comparisons between studies and increases the likelihood that the tests capture mastery of the taught items rather than the underlying content.
3. The scope of the studies is relatively narrow
Much of the research focuses on vocabulary instruction in special education, a domain where explicit, well-structured instruction tends to perform strongly anyway. We therefore know little about the effect of CAPs on more complex learning goals.
4. Strong effects do not mean CAPs are better than alternatives
Some studies have found traditional instruction (e.g., a clear lesson accompanied by a video) to be equally or even more effective. CAPs are therefore one form of solid instructional design, not the solution.
5. The theory behind CAPs is solid instructional design
Dual coding, segmenting, focusing on key ideas… None of this is new or specific to podcasts. It mainly reminds us that compact, well-designed instructional chunks tend to work — with or without the “CAP” label.
What can we take from this?
- If you make podcasts and are looking for evidence for “learning through audio”: this is not that evidence. CAPs are not podcasts in the usual sense, but mini-lessons with visuals and audio, designed using strong instructional principles.
- The effects are interesting, and the study is definitely worth reading.
- But the generalisability is limited: one research line, a narrow domain, mostly researcher-developed tests, and few independent replications.
Summarising: the results are promising within this specific context, but they do not amount to a general educational finding that we should all implement tomorrow. Those who use CAPs as part of thoughtful instructional design are probably making a sensible choice.