We’ve written extensively about this topic in More Myths about Learning and Education, our second myth book, but a new working paper by Vladimir Kogan, Brandon Genetin, Joyce Chen and Alan Kalish confirms it again: student surveys don’t measure what you think they measure. Grades seem to be the biggest influence in this case – we know from earlier research that the gender of the teacher also plays a role. More depressingly, the authors tried to compensate for this with an intervention, but the intervention failed:
Student surveys are widely used to evaluate university teaching and increasingly adopted at the K-12 level, although there remains considerable debate about what they measure. Much disagreement focuses on the well-documented correlation between student grades and their evaluations of instructors. Using individual-level data from 19,000 evaluations of 700 course sections at a flagship public university, we leverage both within-course and within-student variation to rule out popular explanations for this correlation. Specifically, we show that the relationship cannot be explained by instructional quality, workload, grading stringency, or student sorting into courses. Instead, student grade satisfaction — regardless of the underlying cause of the grades — appears to be an important driver of course evaluations. We also present results from a randomized intervention with potential to reduce the magnitude of the association by reminding students to focus on relevant teaching and learning considerations and by increasing the salience of the stakes attached to evaluations for instructor careers. However, these prove ineffective in muting the relationship between grades and student scores.