In his post, Daniel Willingham discusses a new Dutch study:
- A double-blind field experiment evaluating two practice algorithms.
- Adaptive practice yields test scores similar to traditional static practice.
- High-ability students perform slightly worse when practicing adaptively.
- Effective personalization of education calls for a more comprehensive approach.
Does this mean that adaptive instruction doesn’t work? Willingham explains why there was no positive effect of adaptive practice (bold by me):
One possibility is low dosage. The intervention was only 15 minutes per week and although students could have practiced more, few did. At the same time, the intervention lasted an entire school year, the N was fairly large, and an effect was observed (in the unexpected direction) for the better prepared students.
Another possibility is that the program was effective in getting challenging problems to students, but ineffective in providing instruction. Students in the adaptive condition saw more difficult problems, but they got a lot of them wrong. Perhaps they needed more support and instruction at that point, so the potential benefit of stretching their knowledge and skills was not realized.
Another possibility is that the adaptive group would have shown a benefit on a different outcome measure. As the authors note, the summative test was more like the static practice than the adaptive practice. Perhaps the adaptive group would have shown a benefit in their readiness and ability to learn in the next unit.
This result obviously does not show that adaptive practice is a bad idea, or cannot be made to work well. It simply adds to the list of ideas that sound more or less foolproof but turn out not to be: think spiral curriculum, or electronic textbooks. Thinking and learning are simply too complicated for us to confidently predict how a change in one variable will affect the entire cognitive and conative systems.