Daniel Willingham wrote a new blog post on the latest PISA results, which emphasized problem solving. But is the test measuring what it claims to measure? Willingham himself isn't sure:
“The authors sought to present problems that students might really encounter, like figuring out how to work a new MP3 player, finding the quickest route on a map, or figuring out how to buy a subway ticket from an automated kiosk.
So with this justification, we don’t need to make a strong case that we really understand problem-solving at a psychological level at all. We just say “this is the kind of problem solving that people do, so we measured how well students do it.”
This justification makes me nervous because the universe of possible activities we might agree represent “problem solving” seems so broad, much broader than what we would call activities for “citizenship reading.” A “problem” is usually defined as a situation in which you have a goal and you lack a ready process in memory that you’ve used before to solve the problem or one similar to it. That covers a lot of territory. So how do we know that the test fairly represents this territory?
The taxonomy is supposed to help with that problem. “Here’s the type of stuff that goes into problem solving, and look, we’ve got some problems for each type of stuff.” But I’ve already said that psychologists don’t have a firm enough grasp of problem-solving to advance a taxonomy with much confidence.
So the PISA 2012 is surely measuring something, and what it’s measuring is probably close to something I’d comfortably call “problem-solving.” But beyond that, I’m not sure what to say about it.” (Read the whole blog post here)