I think it was Tim Surma who first shared this study by Fischer & Hänze this morning, but a lot of tweeps have shared the study since that tweet. And I can understand why; check the highlights:
- Student-activating methods are claimed to enhance student learning and motivation.
- Our study with 80 university courses and 1713 students challenges this statement.
- Cognitive involvement and learning outcomes increased with teacher-guided methods.
- Student-activating methods tended to have negative effects.
- The analyses endorse cognitive involvement as a mediator for learning outcomes.
Am I surprised? Well, no, because this 2016 (meta) meta-analysis by Schneider & Preckel said almost the same. But this is not a meta-analysis; it is a field study, although a big one. Does this study have limitations? All studies do. In this case, for example, the study shows correlation rather than causation. Still, the authors note:
Even though this study may have its shortcomings and the effects of the distinct teaching methods must not be overrated, the results make an important contribution to the empirical base for educational theory building and political decision making.
Do our findings indicate that university teachers should stop being the guide on the side and return to being the sage on the stage (King, 1993)? We refrain from deducing this kind of prescription. However, the empirical data suggest that there might be a disadvantage in using student-activating methods, whereas teacher-guided learning formats seem to be beneficial. We therefore do call into question the blind plea for activating methods in higher education and stress the need for a stronger empirical basis – and as such, for additional meaningful studies. The results presented cast doubt on the quality of activating methods currently employed in university teaching. Any advances towards increased use of activating methods in higher education would need to be accompanied by concrete recommendations concerning measures of quality assurance.
Abstract of the study:
This field study compares the effectiveness of teacher-guided and student-activating teaching methods. Expert observations of 80 university courses were combined with self-report data from 1713 students attending the courses. Controlling for students’ initial interest on the individual level and for course format, homework, and initial interest on the course level, two-level path analyses with the amount of teacher-guided and student-activating methods as predictors, and students’ final interest, subjective learning achievement, and perceived development of academic competencies as criteria – all mediated by the students’ cognitive involvement – revealed opposing effects of the two methods. Teacher-guided methods were associated with an increase in students’ cognitive involvement, interest, learning achievement, and development of academic competencies, whereas student-activating methods tended to show negative effects.
Even though this points in the direction I'd hoped, using self-reports for achievement can be problematic. I recall results from the so-called active-learning studies where test performance was greater in the treatment where lectures were broken up by problems, yet the students perceived more learning in the passive, lecture-only treatment. Students may be fooled by feelings of fluency when more time is spent passing over the material, or perhaps they are simply bothered by being forced to participate; I don't know. Anyway, great share!
That’s why I linked to Schneider & Preckel, which is a much better, although more nuanced, source.