Yesterday I received this very interesting Freakonomics podcast episode on the scalability of interventions. Often we see – also in education – that something works in an experiment or a field trial, but when everybody starts adopting the new method or the new insight… it fails. There can be several reasons why, but a new study on MOOCs suggests a simple explanation in this case: there is no one-size-fits-all approach that helps all students complete MOOCs, according to one of the largest field experiments ever conducted. I do think Project Follow Through still holds the title for the biggest ever :).
From the press release (H/T Paul Kirschner):
In one of the largest educational field experiments ever conducted, a team co-led by a Cornell researcher found that promising interventions to help students complete online courses were not effective on a massive scale — suggesting that targeted solutions are needed to help students in different circumstances or locations.
Researchers tracked 250,000 students from nearly every country in 250 massive open online courses (MOOCs) over 2 1/2 years in the study, “Scaling Up Behavioral Science Interventions in Online Education,” published June 15 in the Proceedings of the National Academy of Sciences.
“Behavioral interventions are not a silver bullet,” said Rene Kizilcec, assistant professor of information science and co-lead author.
“Earlier studies showed that short, light-touch interventions at the beginning of a few select courses can increase persistence and completion rates,” he said. “But when scaled up to over 250 different courses and a quarter of a million students, the intervention effects were an order of magnitude smaller.”
The study was co-led by Justin Reich of the Massachusetts Institute of Technology and Michael Yeomans of Imperial College London. The research was conducted on the edX and Open edX platforms, and edX has been working to make the data available to institutional researchers to advance educational science at scale.
The 250 courses the researchers studied came from Harvard University, MIT and Stanford University.
Failure to complete online courses is a well-known and long-standing obstacle to virtual learning, particularly among disadvantaged communities and in developing nations — where online education can be a key path to social advancement. The findings have added relevance with so much education around the world taking place online during the COVID-19 pandemic.
“My advice to instructors is to understand and address the specific challenges in their learning environment,” Kizilcec said. “If students have issues with their internet connection, you can’t help them overcome them with a self-regulation intervention. But if students need to go to bed on time in order to be awake for a morning lecture, or they need to plan ahead for when to start working on homework in order to have it ready to hand in, then a brief self-regulation intervention can in fact help students overcome these obstacles.”
Previous, smaller-scale research, performed by Kizilcec and his co-authors as well as other scholars, found that goal-setting interventions such as writing out a list of intentions at the start of the class improved students’ course completion rates.
In this study, the researchers explored the effects of four interventions:
- plan-making, where students are prompted to develop detailed plans for when, where, and how they complete coursework;
- a related activity in which students reflect on the benefits and barriers of achieving their goal, and plan ahead about how to respond to challenges;
- social accountability, where they pick someone to hold them accountable for their progress in the course, and plan when and what to tell them; and
- value-relevance, where they write about how completing the course reflects and reinforces their most important values.
For the first three interventions, all involving planning ahead, the researchers found that the approach boosted engagement for the first few weeks of the course, but the impact dwindled as the course progressed. The value-relevance intervention was effective for students in developing countries, whose outcomes were significantly worse than those of other students, but only in courses with a global achievement gap; in other courses, it actually had a negative impact on students in developing countries.
The researchers tested whether they could predict in which courses an achievement gap would occur, in order to decide where the intervention should be added, but found it extremely difficult to predict.
“Not knowing if it will help or hurt students in a given course is a big issue,” Kizilcec said.
The researchers attempted to use machine learning to predict which interventions might help which students, but found the algorithm was no better than assigning the same intervention to all students.
“It calls into question the potential of AI to provide personalized interventions to struggling students,” Kizilcec said. “Approaches that focus on understanding what works best in individual environments and then tailoring interventions to those environments might be more effective.”
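To make concrete what “no better than assigning the same intervention to all students” means, here is a toy sketch with entirely made-up numbers (not the study’s actual data or method): when treatment effects are nearly the same across student contexts, a policy that tailors the intervention to each context barely beats the best uniform policy.

```python
# Hypothetical illustration with invented completion rates.
# Estimated completion rate per (student context, intervention).
RATES = {
    ("developing", "plan_making"): 0.10,
    ("developing", "value_relevance"): 0.14,
    ("developed", "plan_making"): 0.30,
    ("developed", "value_relevance"): 0.29,
}
CONTEXT_SHARES = {"developing": 0.4, "developed": 0.6}  # share of students
INTERVENTIONS = ["plan_making", "value_relevance"]

def uniform_policy_value(intervention):
    """Average completion rate if every student gets the same intervention."""
    return sum(share * RATES[(ctx, intervention)]
               for ctx, share in CONTEXT_SHARES.items())

def individualized_policy_value():
    """Average completion rate if each context gets its best intervention."""
    return sum(share * max(RATES[(ctx, i)] for i in INTERVENTIONS)
               for ctx, share in CONTEXT_SHARES.items())

best_uniform = max(uniform_policy_value(i) for i in INTERVENTIONS)
individualized = individualized_policy_value()
print(round(best_uniform, 3), round(individualized, 3))
```

With these invented numbers, the individualized policy yields an average completion rate of 0.236 versus 0.230 for the best uniform policy: a gain so small it would be hard to detect, which mirrors the finding that the learned policy offered no practical advantage.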
The researchers said their findings suggest that future studies should be designed to consider and reveal the differences among students, in addition to studies assessing overall effects.
Abstract of the study:
Online education is rapidly expanding in response to rising demand for higher and continuing education, but many online students struggle to achieve their educational goals. Several behavioral science interventions have shown promise in raising student persistence and completion rates in a handful of courses, but evidence of their effectiveness across diverse educational contexts is limited. In this study, we test a set of established interventions over 2.5 y, with one-quarter million students, from nearly every country, across 247 online courses offered by Harvard, the Massachusetts Institute of Technology, and Stanford. We hypothesized that the interventions would produce medium-to-large effects as in prior studies, but this is not supported by our results. Instead, using an iterative scientific process of cyclically preregistering new hypotheses in between waves of data collection, we identified individual, contextual, and temporal conditions under which the interventions benefit students. Self-regulation interventions raised student engagement in the first few weeks but not final completion rates. Value-relevance interventions raised completion rates in developing countries to close the global achievement gap, but only in courses with a global gap. We found minimal evidence that state-of-the-art machine learning methods can forecast the occurrence of a global gap or learn effective individualized intervention policies. Scaling behavioral science interventions across various online learning contexts can reduce their average effectiveness by an order-of-magnitude. However, iterative scientific investigations can uncover what works where for whom.