How to better report on effect sizes in meta-analyses?

Yesterday I had to miss the debate on meta-analyses at #rED18, but I did read the post by Robert Coe.

It’s true there has been quite a stir about Hattie and meta-analyses lately, and to me there are different aspects to the discussion.

I did notice that when effect sizes are presented in a different way, people can spot the complexity that an average often obscures.

Compare the way Hattie notes effect sizes in his infamous lists of effects, e.g.:

And compare that with this graph, taken from Dietrichson et al. (2017):

In this second example you can see the range of effects hidden behind the average effect size. It's still an abstraction of a more complex reality, but it invites interested readers to dig deeper: why is the spread between the effect sizes reported for small-group instruction so big? And while coaching and mentoring students can have a positive effect on average, some studies find the opposite.
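To make the point concrete, here is a minimal sketch of why reporting only the average can mislead. The numbers below are hypothetical, not taken from Dietrichson et al.; the point is simply that a positive mean effect size can coexist with negative effects in individual studies.

```python
# Minimal sketch: a single average effect size can hide a wide range.
# The study-level effect sizes below are hypothetical illustrations.

def summarize_effects(effects):
    """Return the mean effect size alongside the full range of study effects."""
    mean = sum(effects) / len(effects)
    return {"mean": round(mean, 2), "min": min(effects), "max": max(effects)}

# Hypothetical effect sizes for a "coaching and mentoring" category:
# a positive average, but with a negative (harmful) study in the mix.
coaching = [-0.15, 0.05, 0.20, 0.35, 0.55]

summary = summarize_effects(coaching)
print(summary)  # the mean of 0.2 alone would hide the negative effect
```

Reporting the range (or, better, a forest plot with confidence intervals per study) next to the average is what makes the second kind of graph more informative than a ranked list of single numbers.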


  • Dietrichson, J., Bøg, M., Filges, T., & Klint Jørgensen, A. M. (2017). Academic interventions for elementary and middle school students with low socioeconomic status: A systematic review and meta-analysis. Review of Educational Research, 87(2), 243-282.

