Category Archives: Research

Cognitive biases and educational research, an overview by John Hattie

John Hattie published an interesting post with an overview of possible cognitive biases translated to educational research. I do urge you to read the full post here, but I wanted to share the biases here too:

Authority Bias
Tendency to attribute greater weight and accuracy to the opinions of an authority figure – irrespective of whether this is deserved – and to be influenced by it.

EDUCATION: Don’t be swayed by famous titled gurus. Carefully unpick and test all of their assumptions – especially if they are making claims outside their specific area of expertise. Be particularly suspicious of anyone who writes and publishes a white paper [!!!]

Confirmation Bias / Post-Purchase Rationalization / Choice-Support Bias
The tendency to collect and interpret information in a way that conforms with rather than opposes our existing beliefs. When information is presented that contradicts current beliefs, this can transition into Belief Perseverance, i.e. individuals holding beliefs that are utterly at odds with the data.

EDUCATION: We tend to select education approaches, products, and services that accord with our world view and will often continue to do so, even when convincing evidence is presented that our world view may be distorted. Be prepared to go against the grain and to question sacred assumptions.

Observer Expectancy Effect / Observer Effect / Hawthorne Effect / Placebo Effect
The tendency for any intervention, even a sugar pill, to result in improved outcomes – mainly because everyone involved thinks the intervention will work, and this creates a self-fulfilling prophecy.

EDUCATION: If educational ‘sugar pills’ can generate positive effect sizes, then well-crafted education ‘medicines’ should generate a double whammy of effect plus placebo turbo boost – so opt for the latter.

Ostrich Effect
The tendency to avoid monitoring information that might cause psychological discomfort. Originally observed in contexts where financial investors refrained from monitoring their portfolios during downturns.

EDUCATION: The importance of collecting robust and regular data from a range of sources about the implementation of new interventions and analyzing this ruthlessly. Collect evidence to know thy impact.

Anecdotal Fallacy
The tendency to take anecdotal information at face value and to give it the same status as more rigorous data when making judgments about effectiveness.

EDUCATION: Do not take spurious claims about impact at face value and do not invest in training based on participant satisfaction testimonials alone.

Halo Effect
The tendency to generalize from a limited number of experiences or interactions with an individual, company, or product to make a holistic judgment about every aspect of the individual or organization.

EDUCATION: Sometimes the whole is less than the sum of its parts. Just because an educational support organization has world-leading expertise in area A does not mean that it is also world leading in area B.

Not Invented Here
The tendency to avoid using a tried and tested product because it was invented elsewhere – typically claiming “but we are different here.”

EDUCATION: Be open to using and adapting existing expertise. Avoid reinventing the educational wheel – unless you work in terrain where wheels are useless [you probably don’t].

Ikea Effect
The tendency to have greater buy-in to a solution when the end user is directly involved in building or localizing the product.

EDUCATION: Make the effort to localize and adapt tested solutions. This will generate greater emotional buy-in than standardized deployment.

Bandwagon Effect / Illusory Truth Effect / Mere Exposure Effect
The tendency to believe that something works because a large number of other people believe it works.

EDUCATION: It might work and it might not. Test all claims carefully and don’t blindly join the bandwagon to keep up with the Joneses.

Clustering Illusion / Cherry Picking
The tendency to remember and overemphasize streaks of positive or negative data that are clustered together in large parcels of random data, i.e. seeing phantom patterns.

EDUCATION: Ask yourself: are the claims made by educational researchers or service providers based on longitudinal data with a common long-term pattern, or on a small snapshot that could have been cherry-picked?

Conservatism
The tendency to revise one’s beliefs insufficiently when presented with information that contradicts our current beliefs.

EDUCATION: If the evidence is robust, it just might be true. There was a time when people who declared that the earth wasn’t flat were burned as heretics. But test all evidence and claims carefully.

Courtesy Bias
The tendency to give an opinion that is more socially palatable than our true beliefs.

EDUCATION: Participant satisfaction scores from training events in some cultural contexts may be a grade or more higher than the scores people would give if they were less polite.

Law of the Instrument
If you have a hammer, everything looks like a nail.

EDUCATION: Start with the problem or ‘wicked issue’ you are trying to solve, and then work backwards to find the right instruments – rather than searching for nails to bang.

Bike-shedding
The tendency to avoid complex projects, like world peace, in favor of projects that are simple and easy for the majority of participants to grasp – like building a bike shed.

EDUCATION: Don’t be afraid to go after the real problems. Build a bike shed if the world really needs bike sheds. If it doesn’t, then fix what needs fixing most.

Sunk Cost Fallacy
The tendency to continue with a project that is not bearing fruit simply because so much has been invested in it already and withdrawal would be an admission of failure.

EDUCATION: Review the implementation of new approaches regularly and set clear kill parameters/hurdles that must be achieved for the project to stay live. Ruthlessly prune anything that does not pass the hurdle test.


Filed under Education, Research, Review

New interesting study on using humor in the classroom!

Humor in the classroom is something that can be quite tricky. I have seen plenty of teachers and teacher trainees who tried to be funny and weren’t. Even worse: I have seen a few teachers who actually were very funny but who were bad at teaching. Still, it’s true that humor can have a positive impact on learning.

And now there is a very interesting study by Cooper and a lot of other people (check the list) on this very topic, based on three experiments. So, what are their conclusions?

Overwhelmingly, students reported that they appreciated when instructors used humor….
…Students acknowledged that science courses can be stressful and that science content is especially difficult, but that humor helps lighten the mood of science classes, decreases stress levels, and improves students’ perceived ability to remember science content.

For the majority of students in this study, when science instructors used humor that students did not think was funny, it did not have an effect on their attention to course content, how relatable they perceived the instructor to be, or their sense of belonging to the class. Thus, if an instructor tells a joke that falls flat, it is likely not harming students.

Phew!

However, this is not the case if students find an instructor’s use of humor to be offensive. We found that if students perceive a science instructor’s use of humor as offensive, it can negatively influence how relatable students perceive the instructor to be. Previous research also suggests that negative and hostile humor can harm student-instructor relationships, particularly if students previously perceived the instructor to be immediate, or physically and psychologically close with students, because the negative humor contradicts their warm and open style. Further, we found that instructors’ use of offensive humor tends to decrease student sense of belonging to the course, which has been shown to be an important predictor of student retention. Over 40% of students reported that offensive humor can also decrease their attention to course content. Offensive humor may negatively affect student attention because it increases student cognitive load, or the amount of information that a student can hold in their working memory. This may be particularly true for students if the joke is offensive because it targeted a social identity group that they belong to.

And in relation to gender:

Notably, if a college science instructor is able to tell a joke that males and females think is funny, our findings suggest that both genders benefit equally. Similarly, if a college science instructor tells a joke that males and females both perceive as offensive, there is little evidence to suggest that females would be more harmed than male students. Therefore, based on our findings, females are more likely to be negatively affected by humor because they find more subjects offensive, not because of their response to the offensive humor.

All very interesting, but… as with every study there are limitations and in this case there is a big one that the researchers correctly note: this research was conducted across multiple classes at one institution in the Southwestern United States.

Abstract of the study:

For over 50 years instructor humor has been recognized as a way to positively impact student cognitive and affective learning. However, no study has explored humor exclusively in the context of college science courses, which have the reputation of being difficult and boring. The majority of studies that explore humor have assumed that students perceive instructor humor to be funny, yet students likely perceive some instructor humor as unfunny or offensive. Further, evidence suggests that women perceive certain subjects to be more offensive than men, yet we do not know what impact this may have on the experience of women in the classroom. To address these gaps in the literature, we surveyed students across 25 different college science courses about their perceptions of instructor humor in college science classes, which yielded 1637 student responses. Open-coding methods were used to analyze student responses to a question about why students appreciate humor. Multinomial regression was used to identify whether there are gender differences in the extent to which funny, unfunny, and offensive humor influenced student attention to course content, instructor relatability, and student sense of belonging. Logistic regression was used to examine gender differences in what subjects students find funny and offensive when joked about by college science instructors. Nearly 99% of students reported that they appreciate instructor humor and reported that it positively changes the classroom atmosphere, improves student experiences during class, and enhances the student-instructor relationship. We found that funny humor tends to increase student attention to course content, instructor relatability, and student sense of belonging. Conversely, offensive humor tends to decrease instructor relatability and student sense of belonging. Lastly, we identified subjects that males were more likely to find funny and females were more likely to find offensive if a college science instructor were to joke about them.


Filed under Education, Funny, Research

Help wanted: looking for research on the effectiveness of co-teaching

First of all: this request has nothing to do with the new myth-book we’re writing; I’m asking because it’s one of the questions I’ve been asked most often lately. A few years back I already did a big search, and the title of the 2001 meta-analysis on the subject was quite telling: where are the data?

The past few days I’ve been searching again, and I did find a lot of case studies, studies on how to make co-teaching more effective, etc., but besides a couple of master’s theses, I’m still stuck. Also, the research I found often relates to inclusive education, while nowadays co-teaching is implemented more widely.

I’m not asking this because I’m against co-teaching, but as it can be a rather expensive approach, I do think it’s worth knowing how effective it is. I can think of reasons for it to be either effective (e.g. it’s a kind of teacher professional development, you work on collective teacher efficacy, …) or ineffective, but can somebody help me answer: where are the data from large-scale experiments?

Much obliged!


Filed under Education, Research

Is there a publication bias in educational research? (Best Evidence in Brief)

There is a new Best Evidence in Brief, this time with many interesting studies, but one stands out as it has to do with all of educational research:

Research syntheses combine the results of all qualifying studies on a specific topic into one overall finding or effect size. When larger studies with more significant effect sizes are published more often than smaller studies with less significant or even null findings, but are of equal study quality, this is referred to as publication bias. The danger of publication bias is that it does not accurately represent all of the research on a given topic, but instead emphasizes the most dramatic.
In this month’s Educational Psychology Review, an article by Jason Chow and Erik Eckholm of Virginia Commonwealth University examines the amount of publication bias present in education and special education journals. They examined the differences in mean effect sizes between published and unpublished studies included in meta-analyses (one kind of research synthesis), whether a pattern emerged regarding individual characteristics common in published vs. unpublished studies, and the number of publication bias tests carried out in these meta-analyses.
From 20 journals, 222 meta-analyses met inclusion criteria for the meta-review, with 29 containing enough information to also be eligible for effect size calculations. The researchers found that for the 1,752 studies included in those meta-analyses, published studies had significantly higher effect sizes than the unpublished studies (ES=+0.64), and studies with larger effect sizes were more likely to be published than those with smaller effect sizes. Fifty-eight percent (n=128) of the meta-analyses did not test for publication bias. The authors discuss the implications of these findings.
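The mechanics behind that finding can be illustrated with a toy calculation (the numbers below are hypothetical, not data from the Chow and Eckholm meta-review): when mostly the studies with larger effects reach journals, the mean a synthesis sees drifts upward, while the full picture is more modest.

```python
# Toy illustration of publication bias (hypothetical effect sizes,
# not data from the meta-review discussed above).

def mean(xs):
    return sum(xs) / len(xs)

# Ten equally sound hypothetical studies of the same intervention.
all_studies = [0.55, 0.40, 0.32, 0.20, 0.12, 0.05, 0.00, -0.05, -0.10, -0.15]

# Suppose journals mostly accept the studies with larger effects.
published = [es for es in all_studies if es >= 0.10]
unpublished = [es for es in all_studies if es < 0.10]

print(f"Mean of all studies: {mean(all_studies):+.2f}")  # the full picture
print(f"Mean of published:   {mean(published):+.2f}")    # what a naive synthesis sees
print(f"Mean of unpublished: {mean(unpublished):+.2f}")  # the file drawer
```

This is exactly why the article stresses testing for publication bias: a synthesis built only on the published half would report a much rosier mean than the evidence as a whole supports.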


Filed under Education, Research, Review

An interesting reason to promote applied STEM: it’s related to good things for students with learning disabilities

There has been a STEM (or STEAM) hype for a while, and this new paper by Plasman and Gottfried may give the hype a perhaps unexpected boost for students with learning disabilities. Everybody benefited, but specifically students with learning disabilities who took applied STEM courses significantly improved their educational outcomes in the following ways:

  • lowered chances of dropout,
  • increased math test scores, and
  • increased enrollment in postsecondary education.

Sounds pretty neat, no? It is – the study seems pretty robust – but based on this study we can’t tell whether it’s a mere correlation rather than a causal relationship. Also, relatedly, it’s hard to tell which mechanism underlies this relationship. So – as always – more research is needed. (H/T Daniel Willingham)

Abstract of the study:

Applied science, technology, engineering, and math (STEM) coursetaking is becoming more commonplace in traditional high school settings to help students reinforce their learning in academic STEM courses. Throughout U.S. educational history, vocational education has been a consistent focus for schools to keep students on the school-to-career pathway. However, very few studies have examined the role of applied STEM coursetaking in improving schooling outcomes for students with learning disabilities. This is a major missing link as students with learning disabilities tend to exhibit much higher dropout rates than students from the general population. This study examines mechanisms displayed through applied STEM courses and the role they play in helping students with learning disabilities complete high school and transition into college. Using a nationally representative data set of high school students and their full transcripts (i.e., Education Longitudinal Study of 2002), we found that students with learning disabilities who took applied STEM courses significantly increased their educational outcomes in the following ways: lowered chances of dropout, increased math test scores, and increased enrollment in postsecondary education. While the general student population also benefited by taking applied STEM courses, the advantages were greater for those students with learning disabilities.


Filed under Education, Research

Our learning capabilities are limited during slow wave sleep… (no, really)

It’s a myth we already discussed in our first book on myths about learning and education, but people keep dreaming of learning in their sleep.

This new study gives more insights into what is and isn’t possible: while the human brain is still able to perceive sounds during sleep, it is unable to group these sounds according to their organization in a sequence.

From the press release:

Hypnopedia, or the ability to learn during sleep, was popularized in the ’60s, with for example the dystopia Brave New World by Aldous Huxley, in which individuals are conditioned to their future tasks during sleep. This concept has been progressively abandoned due to a lack of reliable scientific evidence supporting in-sleep learning abilities.

Recently, however, a few studies showed that the acquisition of elementary associations such as stimulus-reflex responses is possible during sleep, both in humans and in animals. Nevertheless, it is not clear whether sleep allows for more sophisticated forms of learning.

A study published this August 6 in the journal Scientific Reports by researchers from the ULB Neuroscience Institute (UNI) shows that while our brain is able to continue perceiving sounds during sleep as it does in wakefulness, the ability to group these sounds according to their organization in a sequence is only present at wakefulness, and completely disappears during sleep.

Juliane Farthouat, while a Research Fellow of the FNRS under the direction of Philippe Peigneux, professor at the Faculty of Psychological Science and Education at Université libre de Bruxelles, ULB, used magnetoencephalography (MEG) to record the cerebral activity mirroring the statistical learning of series of sounds, both during slow wave sleep (a part of sleep during which brain activity is highly synchronized) and during wakefulness.

During sleep, participants were exposed to fast flows of pure sounds, either randomly organized or structured in such a way that the auditory stream could be statistically grouped into sets of 3 elements.

During sleep, brain MEG responses demonstrated preserved detection of isolated sounds, but no response reflecting statistical clustering.

During wakefulness, however, all participants presented brain MEG responses reflecting the grouping of sounds into sets of 3 elements.
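To picture the stimulus design, here is a rough sketch of the two stream conditions (my reconstruction for illustration, not the authors’ stimulus code; the tone labels are placeholders for pure-tone frequencies):

```python
# Rough sketch of the two auditory stream conditions (a reconstruction
# for illustration, not the authors' stimulus code).
import random

TRITONES = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]  # fixed 3-tone sets

def statistical_stream(n_sets, rng):
    """Concatenate randomly chosen tritones: transitions *within* a set
    are fully predictable, so the stream can be statistically segmented."""
    stream = []
    for _ in range(n_sets):
        stream.extend(rng.choice(TRITONES))
    return stream

def random_stream(n_sets, rng):
    """Same tones, freely shuffled: no 3-element structure to learn."""
    tones = [t for triple in TRITONES for t in triple]
    return [rng.choice(tones) for _ in range(3 * n_sets)]

rng = random.Random(0)
print(statistical_stream(4, rng))  # segmentable into triplets
print(random_stream(4, rng))       # no hidden triplet structure
```

The MEG measure then asks whether the brain responds at the tone rate only (perceiving individual sounds) or also at one third of that rate (having grouped the stream into triplets).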

The results of this study suggest intrinsic limitations in de novo learning during slow wave sleep, that might confine the sleeping brain’s learning capabilities to simple, elementary associations.

Abstract of the study:

Hypnopedia, or the capacity to learn during sleep, is debatable. De novo acquisition of reflex stimulus-response associations was shown possible both in man and animal. Whether sleep allows more sophisticated forms of learning remains unclear. We recorded during diurnal Non-Rapid Eye Movement (NREM) sleep auditory magnetoencephalographic (MEG) frequency-tagged responses mirroring ongoing statistical learning. While in NREM sleep, participants were exposed at non-awakenings thresholds to fast auditory streams of pure tones, either randomly organized or structured in such a way that the stream statistically segmented in sets of 3 elements (tritones). During NREM sleep, only tone-related frequency-tagged MEG responses were observed, evidencing successful perception of individual tones. No participant showed tritone-related frequency-tagged responses, suggesting lack of segmentation. In the ensuing wake period however, all participants exhibited robust tritone-related responses during exposure to statistical (but not random) streams. Our data suggest that associations embedded in statistical regularities remain undetected during NREM sleep, although implicitly learned during subsequent wakefulness. These results suggest intrinsic limitations in de novo learning during NREM sleep that might confine the NREM sleeping brain’s learning capabilities to simple, elementary associations. It remains to be ascertained whether it similarly applies to REM sleep.


Filed under Education, Myths, Psychology, Research

Are the criticisms about randomized controlled trials in education correct? (Best Evidence in Brief)

There is a new Best Evidence in Brief, and this edition has a bit of a meta-subject, as it’s not about research but about research about research (you’ll have to read that twice, I guess).

The use of randomized controlled trials (RCTs) in education research has increased over the last 15 years. However, the use of RCTs has also been subject to criticism, with four key criticisms being that it is not possible to carry out RCTs in education; the research design of RCTs ignores context and experience; RCTs tend to generate simplistic universal laws of “cause and effect”; and that they are descriptive and contribute little to theory.
To assess these four key criticisms, Paul Connolly and colleagues conducted a systematic review of RCTs in education research between 1980 and 2016 in order to consider the evidence in relation to the use of RCTs in education practice.
The systematic review found a total of 1,017 RCTs completed and reported between 1980 and 2016, of which just over three-quarters have been produced in the last 10 years. Just over half of all RCTs were conducted in North America and just under a third in Europe. This finding addresses the first criticism, and demonstrates that, overall, it is possible to conduct RCTs in education research.
While the researchers also find evidence to oppose the other key criticisms, the review suggests that some progress remains to be made. The article concludes by outlining some key challenges for researchers undertaking RCTs in education.
I want to add this abstract too:

Background: The use of randomised controlled trials (RCTs) in education has increased significantly over the last 15 years. However, their use has also been subject to sustained and rather trenchant criticism from significant sections of the education research community. Key criticisms have included the claims that: it is not possible to undertake RCTs in education; RCTs are blunt research designs that ignore context and experience; RCTs tend to generate simplistic universal laws of ‘cause and effect’; and that they are inherently descriptive and contribute little to theory.

Purpose: This article seeks to assess the above four criticisms of RCTs by considering the actual evidence in relation to the use of RCTs in education in practice.

Design and methods: The article is based upon a systematic review that has sought to identify and describe all RCTs conducted in educational settings and including a focus on educational outcomes between 1980 and 2016. The search is limited to articles and reports published in English.

Results: The systematic review found a total of 1017 unique RCTs that have been completed and reported between 1980 and 2016. Just over three quarters of these have been produced over the last 10 years, reflecting the significant increase in the use of RCTs in recent years. Overall, just over half of all RCTs identified were conducted in North America and a little under a third in Europe. The RCTs cover a wide range of educational settings and focus on an equally wide range of educational interventions and outcomes. The findings not only disprove the claim that it is not possible to do RCTs in education but also provide some supporting evidence to challenge the other three key criticisms outlined earlier.

Conclusions: While providing evidence to counter the four criticisms outlined earlier, the article suggests that there remains significant progress to be made. The article concludes by outlining some key challenges for researchers undertaking RCTs in education.


Filed under Education, Research, Review

What works and doesn’t work with instructional video, a new short overview

There is a special issue of Computers in Human Behavior on learning from video, and in their editorial, Fiorella and Mayer give an overview of the effective and ineffective methods that are trialed in the special issue:

What are the effective methods?

…two techniques that appear to improve learning outcomes with instructional video are segmenting—breaking the video into parts and allowing students to control the pace of the presentation—and mixed perspective—filming from both a first-person perspective and third-person.

And what isn’t worth the effort?

…some features that do not appear to be associated with improved learning outcomes with instructional video are matching the gender of the instructor to the gender of the learner, having the instructor’s face on the screen, inserting pauses throughout the video, and adding practice without feedback.

Abstract of the editorial:

In this commentary, we examine the papers in a special issue on “Developments and Trends in Learning with Instructional Video”. In particular, we focus on basic findings concerning which instructional features improve learning with instructional video (i.e., breaking the lesson into segments paced by the learner; recording from both first- and third-person perspectives) and which features or learner attributes do not (i.e., matching the instructor’s gender to the learner’s gender; having the instructor’s face on the screen; adding practice without feedback; inserting pauses throughout the video; and spatial ability). In addition, we offer recommendations for future work on designing effective video lessons.


Filed under Education, Research, Review, Technology

Not so good news about the Good Behavior Game (Best Evidence in Brief)

There was a new Best Evidence in Brief this past week, and this item isn’t such great news:

An evaluation conducted for the Education Endowment Foundation in the UK looked at whether the Good Behaviour Game (GBG) improved students’ reading skills and behavior.
The GBG intervention is a classroom management approach designed to improve student behavior and build confidence and resilience. The game is played in groups and rewards students for good behavior. More than 3,000 Year 3 (equivalent to second grade in the U.S.) students from 77 UK schools took part in a randomized controlled trial of GBG over two years. Around a quarter of the students in the schools were eligible for free school meals, around a fifth were students with special educational needs, and 23% had English as an additional language.
The analysis indicated that, on average, GBG had no significant impact on students’ reading skills (effect size = +0.03) or their behavior (concentration, disruptive behavior, and pro-social behavior) when compared to the control group students. However, there was some tentative evidence that boys at risk of developing conduct problems showed improvements in behavior.
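For context on what “effect size = +0.03” means, here is a minimal sketch of a standardized mean difference (Cohen’s d with a pooled standard deviation; the scores below are made up for illustration, not the trial’s data):

```python
# Minimal sketch of a standardized mean difference (Cohen's d with a
# pooled SD). Scores are made up for illustration, not the trial's data.
import math

def cohens_d(treatment, control):
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance (n - 1 denominator)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    nt, nc = len(treatment), len(control)
    pooled_sd = math.sqrt(((nt - 1) * var(treatment) +
                           (nc - 1) * var(control)) / (nt + nc - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

gbg_scores = [101, 99, 103, 98, 100]   # hypothetical reading scores, GBG group
ctrl_scores = [100, 98, 102, 97, 100]  # hypothetical control group
print(round(cohens_d(gbg_scores, ctrl_scores), 2))
```

An effect size of +0.03 means the groups differ by only three hundredths of a standard deviation – practically indistinguishable from no effect at all.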


Filed under Education, Research, Review

Good read: An Enormous Study of the Genes Related to Staying in School

While I was on leave in the States I kept reading interesting studies and news articles and I was really impressed by both this study and this article about the study by Ed Yong for The Atlantic. Both the authors of the study and Ed know how touchy the subject of genes and intelligence can be. That is why the researchers wrote an accompanying FAQ that explains what they found and what it means.

What I like about The Atlantic article is the good, nuanced reporting, such as:

This isn’t to say that staying in school is “in the genes.” Each genetic variant has a tiny effect on its own, and even together, they don’t control people’s fates. The team showed this by creating a “polygenic score”—a tool that accounts for variants across a person’s entire genome to predict how much formal education they’re likely to receive. It does a lousy job of predicting the outcome for any specific individual, but it can explain 11 percent of the population-wide variation in years of schooling.
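A polygenic score of the kind described in the quote is, at its core, just a weighted sum: count a person’s effect alleles at each variant and weight each count by the effect estimated in the association study. A minimal sketch (a simplified model with made-up numbers; real pipelines involve far more variants and additional statistical adjustments):

```python
# Minimal sketch of a polygenic score (simplified model with made-up
# numbers; real pipelines use far more variants plus extra adjustments).

def polygenic_score(allele_counts, weights):
    """allele_counts: effect-allele count (0, 1, or 2) at each SNP.
    weights: per-allele effect estimates for the same SNPs, taken
    from the association analysis."""
    return sum(c * w for c, w in zip(allele_counts, weights))

# One hypothetical person genotyped at five SNPs.
counts = [2, 1, 0, 1, 2]
effects = [0.010, -0.005, 0.020, 0.015, 0.002]
print(polygenic_score(counts, effects))
```

Because each individual weight is tiny, the score only becomes informative in aggregate across a population – which is exactly why it predicts population-wide variation far better than any single person’s outcome.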

And the explanation of what it means:

“…Now, consider that household income explains just 7 percent of the variation in educational attainment, which is less than what genes can now account for. “Most social scientists wouldn’t do a study without accounting for socioeconomic status, even if that’s not what they’re interested in,” says Harden. The same ought to be true of our genes.”

Do read the complete article in The Atlantic or, even better, the original study:

Abstract of the study:
Here we conducted a large-scale genetic association analysis of educational attainment in a sample of approximately 1.1 million individuals and identify 1,271 independent genome-wide-significant SNPs. For the SNPs taken together, we found evidence of heterogeneous effects across environments. The SNPs implicate genes involved in brain-development processes and neuron-to-neuron communication. In a separate analysis of the X chromosome, we identify 10 independent genome-wide-significant SNPs and estimate a SNP heritability of around 0.3% in both men and women, consistent with partial dosage compensation. A joint (multi-phenotype) analysis of educational attainment and three related cognitive phenotypes generates polygenic scores that explain 11–13% of the variance in educational attainment and 7–10% of the variance in cognitive performance. This prediction accuracy substantially increases the utility of polygenic scores as tools in research.


Filed under Education, Research, Review