Multimedia works. But not always the way we think.

In my book, I discuss some of Mayer’s multimedia principles. They are increasingly becoming a default in how we think about instruction that combines words and visuals. Avoid unnecessary details, align text and images, combine modalities wisely… it all sounds familiar, and for good reason. But how solid is that whole framework when you stop looking at individual studies and instead consider the full body of research behind it?

That is exactly what this new meta-analysis sets out to do. Cromley and Chen looked at 92 articles by Mayer and colleagues, covering 181 studies and 591 effect sizes. Their aim was not so much to show that multimedia works, but to understand when and under which conditions it works. In other words, not “does it work?” but “when does it work, and when does it work less well?” A more genuinely evidence-informed question.

The first finding will not surprise anyone. Yes, multimedia can work. The average effect is around g = 0.37. That is respectable. So anyone still claiming that multimedia makes no difference is simply wrong. But that is not where things get interesting. As so often, there is considerable variation behind that average, and that variation is where the real story lies.
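For readers less familiar with the metric: g here refers to Hedges' g, a standardized mean difference between the multimedia and comparison conditions, with a small-sample correction. A standard formulation (the exact estimator used in the meta-analysis may differ slightly) is:

```latex
g = J \cdot \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}},
\qquad
J \approx 1 - \frac{3}{4\,df - 1}
```

So g = 0.37 means the average multimedia condition outperformed the comparison condition by a little over a third of a standard deviation.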

To begin with, not all design principles are equal. Some stand out. Removing unnecessary or seductive details, carefully combining words and visuals, personalisation, and prompting learners to self-explain all show effects in the medium-to-large range.

Other principles, often presented as equally self-evident, are much less convincing. Think of segmenting, where information is broken down into smaller, manageable chunks rather than presented all at once. Or contiguity, placing related information close together so learners do not have to search and integrate across space. The voice principle suggests that how something is narrated matters, with a natural human voice traditionally outperforming a synthetic or monotonous one. Social presence refers to the idea that learning may benefit from a sense of human presence in the material, for example, through a visible speaker or an avatar making eye contact.

When you take all studies together, the effects of these principles are small or not statistically significant. That does not mean they never work, but it does suggest their effects are less robust or more context-dependent than often assumed.

A second important nuance is that the type of multimedia itself matters. Classic combinations of text and diagrams perform surprisingly well and quite consistently. Animations, games and simulations also show positive effects, but less stable ones. Virtual reality, on average, shows no significant effect.

The hype around VR has already faded somewhat in the gaming world, but in education, it still appears regularly. It is important to be clear: this does not mean VR never works, but on average, it adds little.

Another interesting finding is that effects seem to decline slightly over time. There are several possible explanations. Early studies were often small-scale and tightly controlled, whereas later studies tend to be more realistic and complex. Or, put less kindly, the initial “wow effect” of new technologies wears off.

There is also a finding that fits well with what we often see in education, but with a twist I did not expect. Effects differ depending on what you measure, and here they are often larger for transfer (applying knowledge in new situations) than for simple factual recall. That is notable because transfer is usually the harder outcome to influence.

So what does this mean for practice? Mainly, that “multimedia works” is too simplistic. It works, but not automatically. It depends on the design, the medium, the context, the learners, and what you are trying to achieve. Some principles are more robust than others. Some technologies add less than we might hope.

And perhaps most importantly, much of what works well is not particularly spectacular. Clear structure, avoiding unnecessary distraction, and getting learners to think actively. These are not flashy innovations, but they hold up, even when you look across hundreds of studies.