This is the presentation I gave at the National ResearchED conference, September 9, 2017. The presentation is based in part on our book Urban Myths about Learning and Education and in part on the recent article I co-wrote with Paul Kirschner, published in Teaching and Teacher Education (yes, the one that was mentioned in Nature).
Yep, a study on class size in the new Best Evidence in Brief:
What difference do smaller class sizes, and more teachers, make in early childhood education (ECE)? A meta-analysis by Jocelyn Bowne and colleagues, published in Educational Evaluation and Policy Analysis, attempts to find some answers. The analysis included evaluations of ECE programs in the U.S. between 1960 and 2007. The evaluations were either experimental studies, used a high-quality quasi-experimental design, or showed baseline equivalence of treatment and control participants. In total, 38 studies were included, all of which looked at children ages 3 to 5 years old attending an ECE center for 10 hours a week or more for at least 4 months. Child-teacher ratios ranged from 5:1 to 15:1, and class sizes from 11 to 25.

The findings were as follows:
- Above a child-teacher ratio of 7.5:1, changing the ratio had no effect on children’s cognitive and achievement outcomes. Below this, a reduction of the ratio by one child per teacher predicted an effect size of +0.22.
- For class sizes greater than 15, increasing class size had little effect on children’s cognitive and achievement outcomes. Below this, a reduction of one child in class size predicted an effect size of +0.10.
The authors caution that these findings are correlational, rather than causal, so changing class sizes or ratios, certainly at scale, may not lead to these results. However, they conclude that “very small and/or well-staffed classrooms might confer some small benefits for children’s cognitive and academic learning.”
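Read as a piecewise-linear model, the reported ratio threshold can be sketched like this. This is purely a toy illustration of the numbers above, not the authors' actual statistical model, and the function name is my own invention:

```python
def predicted_ratio_effect(old_ratio: float, new_ratio: float) -> float:
    """Toy piecewise model from the reported thresholds: each one-child-per-teacher
    reduction below a 7.5:1 ratio predicts d = +0.22; changes above 7.5:1
    predict no effect."""
    THRESHOLD, D_PER_CHILD = 7.5, 0.22
    # Only the part of the change that falls below the threshold counts.
    effective_drop = min(old_ratio, THRESHOLD) - min(new_ratio, THRESHOLD)
    return max(effective_drop, 0.0) * D_PER_CHILD

# Going from 9:1 to 6:1 only counts the portion below 7.5:1 (i.e., 7.5 -> 6).
print(round(predicted_ratio_effect(9, 6), 2))   # 0.33
print(round(predicted_ratio_effect(12, 9), 2))  # 0.0 (entirely above the threshold)
```

Note how this makes the authors' caution concrete: a reduction that never crosses the 7.5:1 threshold is predicted to buy nothing at all.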
I already shared this video on my Dutch blog, but it is too nice not to share here too.
Well, no surprise, but also no cigar: more education linked to better cognitive functioning later in life
This is a strange study. Researchers from the University of California, Berkeley used data from 196,000 Lumosity users. That is a big group, which already makes the study interesting, but hold your horses: I do think there are some major issues.
The study, led by University of California, Berkeley, researchers, examined relationships between educational attainment, cognitive performance and learning in order to quantify the cumulative effect of attending school.
Its findings suggest that higher levels of education may help stave off age-related cognitive decline. In addition, the team found that education didn’t have a large impact on novel learning, or learning something new at various points in time.
The work, which reviewed the performance of around 196,000 subscribers to Lumosity online brain-training games, is believed to be the largest to date to evaluate cognitive effects of prior educational experience on past and future performance. Researchers said their findings may be of value to psychologists, sociologists, neuroscientists, education researchers and policymakers.
Grading educational achievement
Conventional wisdom has long accepted that higher education is likely to boost incomes and helps prepare individuals for a workplace with often-changing skill sets. Yet fewer than 40 percent of adults in the United States are expected to graduate from college in their lifetimes, and the percentage declines for more advanced degrees.
Until now, research has been inconclusive about the cognitive impacts of higher education and whether the quantity of schooling can influence the acquisition and maintenance of cognitive skills over time.
The researchers of the paper, which appears in the August 23 edition of PLOS ONE, are Silvia Bunge, a UC Berkeley professor of psychology and member of the Helen Wills Neuroscience Institute; Belén Guerra-Carrillo, a graduate student in Bunge’s Building Blocks of Cognition Laboratory and a National Science Foundation Fellow; and Kiefer Katovich, who was a statistician with Lumos Labs while the study was conducted.
Bunge and her team say higher levels of education are strong predictors of better cognitive performance across the 15- to 60-year-old age range of their study participants, and appear to boost performance more in areas such as reasoning than in terms of processing speed.
The study’s findings are consistent with prior evidence that the brain adapts in response to challenges, a phenomenon called “experience-dependent brain plasticity.” Based on the principles of plasticity, the authors predicted improvements in cognitive skills that are repeatedly taxed in demanding, cognitively engaging coursework.
Differences in performance were small for test subjects with a bachelor’s degree compared to those with a high school diploma, and moderate for those with doctorates compared to those with only some high school education.
The researchers noted that people from lower educational backgrounds learned novel tasks nearly as well as those from higher ones.
“The fact that the cognitive tests were not similar to what is learned in school is a strength of the study: It speaks to the idea that schooling doesn’t merely impart knowledge – it also provides the opportunity to sharpen core cognitive skills,” said Bunge.
The researchers analyzed anonymized data collected from around 196,000 Lumosity subscribers in the United States, Canada and Australia, who came from a range of educational attainment levels and diverse backgrounds. As part of their subscription, participants complete eight behavioral assessments of executive functioning and reasoning that are unrelated to educational curricula.
The research team also looked closely at a subset of nearly 70,000 subscribers who finished Lumosity’s behavioral assessments a second time after about 100 days of additional cognitive training. Testing before and after the assessments measured cognitive performance in areas such as working memory, thinking quickly, responding flexibly to task goals and both verbal and non-verbal reasoning.
“Given the size and wide age range of our sample, it was possible to test whether these age effects are influenced by education – and, importantly, to determine how the cognitive effects of educational attainment differ across the lifespan, as one’s experience with formal education recedes into the past and is supplanted by other life experiences,” the team wrote.
Bunge said that collaborating with Lumosity was a golden opportunity to analyze data from around 196,000 participants – an anonymized dataset that would have taken a lifetime to collect in a laboratory.
Did you spot it? I actually do think education can play a large role in this, but how can the researchers know what the status of those executive functions was before education? Even more: if those executive functions are stable from a certain age onward, it’s simply impossible to tell.
But there is another issue, if you take a look at the abstract of the study (italics mine):
Attending school is a multifaceted experience. Students are not only exposed to new knowledge but are also immersed in a structured environment in which they need to respond flexibly in accordance with changing task goals, keep relevant information in mind, and constantly tackle novel problems. To quantify the cumulative effect of this experience, we examined retrospectively and prospectively, the relationships between educational attainment and both cognitive performance and learning. We analyzed data from 196,388 subscribers to an online cognitive training program. These subscribers, ages 15–60, had completed eight behavioral assessments of executive functioning and reasoning at least once. Controlling for multiple demographic and engagement variables, we found that higher levels of education predicted better performance across the full age range, and modulated performance in some cognitive domains more than others (e.g., reasoning vs. processing speed). Differences were moderate for Bachelor’s degree vs. High School (d = 0.51), and large between Ph.D. vs. Some High School (d = 0.80). Further, the ages of peak cognitive performance for each educational category closely followed the typical range of ages at graduation. This result is consistent with a cumulative effect of recent educational experiences, as well as a decrement in performance as completion of schooling becomes more distant. To begin to characterize the directionality of the relationship between educational attainment and cognitive performance, we conducted a prospective longitudinal analysis. For a subset of 69,202 subscribers who had completed 100 days of cognitive training, we tested whether the degree of novel learning was associated with their level of education. Higher educational attainment predicted bigger gains, but the differences were small (d = 0.04–0.37). 
Altogether, these results point to the long-lasting trace of an effect of prior cognitive challenges but suggest that new learning opportunities can reduce performance gaps related to one’s educational history.
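For readers unfamiliar with the d values in the abstract: Cohen's d expresses a group difference in pooled standard-deviation units, which is how a gap can be labelled "moderate" (0.51) or "small" (0.04). A minimal sketch of the standard formula, with purely illustrative numbers rather than the study's data:

```python
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d: the mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative numbers only: two groups with SDs of 10 whose means sit
# 5.1 points apart yield d = 0.51 -- a "moderate" difference in the
# abstract's terms, regardless of how many participants there are.
print(round(cohens_d(105.1, 10, 1000, 100.0, 10, 1000), 2))  # 0.51
```

That last point matters for what follows: a huge sample makes tiny effects statistically detectable, but it does not make a d of 0.04 any bigger.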
Well, “pointing” is one way of describing it. “Not a really big effect, and so perhaps not suggesting that much after all” is another, and it sits oddly with what they used earlier on to explain why their data and study are so interesting:
“The fact that the cognitive tests were not similar to what is learned in school is a strength of the study: It speaks to the idea that schooling doesn’t merely impart knowledge – it also provides the opportunity to sharpen core cognitive skills,”
Yeah, but this isn’t the case for the pre- and posttest of the Lumosity bit in this study, and certainly not if you look at recent research that has shown this tool has no effect on decision-making and no effect on cognitive function beyond practice effects on the training tasks.
So what we have here is a big dataset with no way to check whether people lied, with a big selection effect (the participants chose to use a brain-training tool), and without any information about their functioning before they received their education. But OK, we have a big dataset.
Benjamin Bloom developed a taxonomy for goals in education, and frankly, one of the questions I’m asked most often is: ‘Is Bloom correct?’
For the people who haven’t heard about the taxonomy before, it is often represented like this:
But did you notice that I didn’t write ‘this is the taxonomy’? I wrote that it’s often represented this way. This spring a lot of edubloggers, such as David Didau and Doug Lemov, wrote about this, and on Twitter Dylan Wiliam proposed this correction:
But now it’s getting really interesting, as Lorin W. Anderson, co-author of the revised Bloom’s Taxonomy, wrote a guest post on the blog of Larry Ferlazzo to set some things straight. Big warning: some people will need to adapt their textbooks:
1. The triangle does not appear anywhere in either Taxonomy. The triangular representation was quite likely designed by someone as part of a presentation made to educational practitioners (e.g., teachers, administrators). I believe that the triangular representation was developed in order to indicate that, in the original Taxonomy, the six categories formed a cumulative hierarchy. That is, it was believed by the authors of the original Taxonomy that mastery of each lower category was necessary before moving to the next higher category. For example, you have to comprehend something before you can apply it.
2. The triangular representation of the revised Taxonomy is particularly inappropriate for several reasons. First, the revised Taxonomy contains two dimensions, not one. The authors believed that knowledge was sufficiently important to be a separate dimension. They also believed there were different types or forms of knowledge: factual, conceptual, procedural, and metacognitive. Second, the nouns in the original Taxonomy were replaced by verbs. In this process, remember replaced knowledge at the lowest “level” of the second dimension, termed “cognitive processes.” If you read the text of the original Taxonomy, the equation of “knowledge” with “recall” and “recognition” is quite evident. Remember was followed by understand, apply, analyze, evaluate, and create. Third, the categories (verbs) in the cognitive process dimension did NOT form a cumulative hierarchy. Rather, they were considered to be “tools in a toolbox.” Thus, it was possible (and often quite useful) to apply in order to understand or to evaluate as you apply.
3. In your blog post, Dylan Wiliam’s representation, entitled “Bloom’s taxonomy, as it should be” is a far better representation of the revised Taxonomy than the triangular representation. In fact, if you change the nouns to verbs (other than knowledge), add Remember to the list in the upper row, and realize that knowledge is multi-faceted (as I mention above) he has almost reconstructed the two-dimensional table of the revised Taxonomy.
4. Finally, after 40+ years in the business, I am greatly dismayed that many educators get their information from oral presentations and secondary (and in some cases tertiary) sources. This practice tends to result in passing along half-truths and misinterpretations. In this regard, I think you could do a great service by directing the readers of your blog to original sources (even if they won’t read them). With respect to the revised Taxonomy, it would be helpful for anyone who is interested in writing about or making presentations on the revised Taxonomy to take 15 to 20 minutes to read the excellent overview written by David Krathwohl in the journal, Theory into Practice.
So: again, the pyramid isn’t there (where did I hear this one before?). Next question: what is a taxonomy? Let’s check Wikipedia:
Taxonomy is the practice and science of classification. The word is also used as a count noun: a taxonomy, or taxonomic scheme, is a particular classification. The word finds its roots in the Greek language τάξις, taxis (meaning ‘order’, ‘arrangement’) and νόμος, nomos (‘law’ or ‘science’). Originally, taxonomy referred only to the classification of organisms or a particular classification of organisms. In a wider, more general sense, it may refer to a classification of things or concepts, as well as to the principles underlying such a classification. Taxonomy is different from meronomy, which deals with the classification of parts of a whole.
So a taxonomy is a classification that can be used to look at reality in order to make our thinking about that reality a bit more structured. It is not an overview of didactic approaches. This mistake is, imho, partly due to Bloom himself, as his original purpose was to provide a:
- common language about learning goals to facilitate communication across persons, subject matter, and grade levels;
- basis for determining for a particular course or curriculum the specific meaning of broad educational goals, such as those found in the currently prevalent national, state, and local standards;
- means for determining the congruence of educational objectives, activities, and assessments in a unit, course, or curriculum; and
- panorama of the range of educational possibilities against which the limited breadth and depth of any particular educational course or curriculum could be contrasted.
But if you check the revised version, e.g. in the work of David Krathwohl (from which I also took the list above), then this isn’t the case anymore.
That leaves the question: is the taxonomy correct? To me there are two criteria hidden in this question:
- to what extent is this taxonomy – by definition a simplification – correct?
- how useful is this taxonomy?
I myself was trained in another taxonomy, by De Block and Heene, which has three dimensions. Bloom’s work is better known around the world. Both taxonomies are tools to help teachers think about their lesson objectives and to help them avoid getting stuck in one type of goal or overlooking others. It’s true that Bloom suggests some elements that don’t match reality as we know it from cognitive psychology. But again, as Anderson describes, this is often because other people adapted the theory.
I’m with Anderson on this for the time being. I also like the version Dylan Wiliam shared. But do you need to use them? I suggest looking at the two criteria I just gave you.
This is the presentation that I used for my deep dive session at the CTTL Academy at St Andrews in July 2017.
Two important sources – besides the many mentioned in my presentation – were:
There is a new Best Evidence in Brief and this time I picked this study:
As struggling readers get older and the words they read get longer, the effort it takes them to decode longer words interferes with their reading comprehension. Jessica Toste and colleagues conducted a study examining the effects of an intervention designed to develop multisyllabic word reading (MWR) automaticity via repeated exposure to multisyllabic words in isolation and in context. The goal of the intervention is for students to focus their attention on text meaning instead of decoding. Given that research shows motivation supports cognitive ability, researchers also wanted to examine the effects of this strategy with and without a motivational component.

Fifty-nine struggling third and fourth graders in two charter schools located in a large city in the southwestern U.S. were randomly assigned to one of three groups: MWR only (n=18), MWR with motivational beliefs (MB) training (n=19), or business as usual (n=22). No significant reading comprehension differences existed at pretest, as measured by subtests of the Woodcock-Johnson III, TOWRE, and WRAT, or among motivational beliefs as measured on the Reading Attribution Scale.

In groups of 2-3 students, the MWR and MWR + MB groups received tutoring sessions in reading for forty minutes, three times a week, for eight weeks, in addition to their regular reading instruction. The MWR + MB group also received five minutes of motivational instruction each session, while the MWR-only group practiced math facts for their final five minutes. The MWR lessons consisted of seven components, starting with repeated reading of vowel patterns and progressing to target words in paragraphs.
The MB component added self-reflection, positive self-talk, and eliminating negative thoughts throughout the lesson.

Results showed that students in both MWR groups performed better than the control group at posttest on word fluency measures, and performed moderately better than the controls on TOWRE phonemic decoding and the WJ letter-word ID and word-attack subtests. The MWR + MB group had higher scores than the MWR group solely on sentence-level comprehension, but had higher scores than controls on the attributions-for-success subscale, meaning they were more likely to attribute success to internal causes like effort rather than external factors like luck. MWR + MB did not outperform MWR on motivational measures. The authors conclude that both developing automaticity in multisyllabic word reading and leveraging motivation’s effect on reading comprehension are promising avenues for developing MWR.
It’s a basic rule in education: connecting new insights to prior knowledge is key. But there’s a twist to it: a notion of ‘peculiarity’ that can help us understand what makes memories last. It’s not really a published study, but rather a press release about a talk in Cannes, that caught my attention. I do think it’s relevant!
From the press release:
It’s this notion of ‘peculiarity’ that can help us understand what makes lasting memories, according to Per Sederberg, a professor of psychology at The Ohio State University.
“You have to build a memory on the scaffolding of what you already know, but then you have to violate the expectations somewhat. It has to be a little bit weird,” Sederberg said.
Sederberg talked about the neuroscience of memory as an invited speaker at the Cannes Lions Festival of Creativity in France on June 19. He spoke at the session “What are memories made of? Stirring emotions and last impressions” along with several advertising professionals and artists.
Sederberg has spent his career studying memory. In one of his most notable studies, he had college students wear a smartphone around their neck with an app that took random photos for a month. Later, the participants relived memories related to those photos in an fMRI scanner so that Sederberg and his colleagues could see where and how the brain stored the time and place of those memories.
From his own research and that of others, Sederberg has ideas on which memories stick with us and which ones fade over time.
The way to create a long-lasting memory is to form an association with other memories, he said.
“If we want to be able to retrieve a memory later, you want to build a rich web. It should connect to other memories in multiple ways, so there are many ways for our mind to get back to it.”
A memory of a lifetime is like a big city, with many roads that lead there. We forget memories that are desert towns, with only one road in. “You want to have a lot of different ways to get to any individual memory,” Sederberg said.
The difficulty is how to best navigate the push and pull between novelty and familiarity. Novelty tells us what is important to remember. On the other hand, familiarity tells us what we can ignore, but helps us retrieve information later, Sederberg said.
Too much novelty, and you have no way to place it in your cognitive map, but too much familiarity and the information is similarly lost.
What that means is that context and prediction play critical roles in shaping our perception and memory. The most memorable experiences are those that arise in a familiar and stable context, yet violate some aspect of what we predict would occur in that context, he said.
“Those peculiar experiences are the things that stand out, that make a more lasting memory.”
Paul Kirschner and yours truly just got a new article published in Teaching and Teacher Education on two common myths in education: the digital native and the multitasker. You can read it for free here (until Aug. 4). The highlights:
- Information-savvy digital natives do not exist.
- Learners cannot multitask; they task switch which negatively impacts learning.
- Educational design assuming these myths hinders rather than helps learning.
The abstract of our paper:
Current discussions about educational policy and practice are often embedded in a mind-set that considers students who were born in an age of omnipresent digital media to be fundamentally different from previous generations of students. These students have been labelled digital natives and have been ascribed the ability to cognitively process multiple sources of information simultaneously (i.e., they can multitask). As a result of this thinking, they are seen by teachers, educational administrators, politicians/policy makers, and the media to require an educational approach radically different from that of previous generations. This article presents scientific evidence showing that there is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. It then proceeds to present evidence that one of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning. The article concludes by elaborating on possible implications of this for education/educational policy.