I know others, such as my dear friend and colleague Paul Kirschner, have already written about this study, but it would be very strange not to write about it myself on this blog. You’d think that anyone teaching at a university would know what works—and what doesn’t—when it comes to learning. Yet a new study by Joshua Cuevas and colleagues paints a different picture. They asked 107 faculty members at a U.S. teaching-focused university to evaluate ten statements about learning: five well-supported strategies (such as spaced practice, retrieval practice, direct instruction…) and five persistent myths (like learning styles, pure discovery learning, multitasking…).
The results are, to put it mildly, uncomfortable. Almost everyone recognised the effective strategies, but many also endorsed the myths. Two-thirds, for example, believed that matching teaching to students’ learning styles, or letting students discover everything on their own, would improve learning. Fewer than half recognised that multitasking and the idea of “digital natives” don’t hold up to the evidence.
Strikingly, teacher-education faculty did not outperform colleagues from other disciplines. Only when it came to rejecting pure discovery learning did they do slightly better. In fact, on average, they scored slightly lower overall. And there was no link between how confident faculty were about their knowledge and how well they actually did—a classic case of the Dunning–Kruger effect.
Of course, there are limitations. The test covered just ten items, and it was conducted at a single institution. But since most participants earned their PhDs elsewhere, it’s not far-fetched to suspect the results might be similar in other places. The key takeaway is that even experienced faculty—including those preparing future teachers—struggle to distinguish evidence-based practices from appealing but inaccurate ideas.
The point isn’t that all university instructors teach poorly. It’s that we too often assume content expertise automatically translates into knowing how people learn. And that assumption seems shaky, even in teacher-education programmes. PhD training and teacher preparation should make the learning sciences, and the myths that undermine them, a much more explicit part of the curriculum.
Because as long as faculty are sure they’re right when they’re not, there’s little chance they’ll change what they do.
Abstract of the study:
This study assessed the pedagogical knowledge and metacognitive awareness of pedagogy of faculty (N = 107) at a large state university in the United States. The purpose was to ascertain whether faculty could distinguish effective learning practices from ineffective ones, as determined by empirical research in learning science. Faculty responded to items regarding the efficacy of effective practices and others shown by research to be neuromyths or misconceptions. Faculty across all colleges correctly identified most of the effective practices but also endorsed myths/misconceptions, ultimately showing limited pedagogical knowledge. Tenured faculty showed stronger pedagogical knowledge than newer faculty. Faculty were also assessed on their confidence in their knowledge of pedagogical practices. Respondents demonstrated poor metacognitive awareness as there was no relationship between confidence in pedagogical knowledge and actual pedagogical knowledge. Surprisingly, education faculty scored no better in pedagogical knowledge than faculty of any other college and also showed low metacognitive awareness. Results indicate that universities preparing doctoral students for faculty positions should ensure candidates are exposed to accurate information regarding learning science. The implications for colleges of education are more dire in that they may be failing to prepare candidates in the most essential aspects of the field.
We discussed this before, but it is an example of some of the issues. If you follow Hattie, as many DOEs do, you could argue there’s evidence supporting learning styles.
His site currently lists “Matching Teaching to Style of Learning” with an effect size of 0.42 and claims it is “likely to have a small positive impact.”
I do think this is a better overview: https://educationendowmentfoundation.org.uk/education-evidence/teaching-learning-toolkit/learning-styles
That’s the point I’ve been trying to make: the evidence, and how it’s presented, is incredibly fragmented. Key organisations like the WWC, AERO, the EEF, Marzano, and Johns Hopkins often reach very different conclusions from Hattie, and there’s no clear consensus on which source is most reliable. Hattie’s influence is undeniable, though—his work has shaped universities, teacher training, and curriculum design. For example, he was part of the team behind Developing Curriculum for Deep Thinking, where he’s cited around 20 times, lending significant weight to his ideas.