Category Archives: Myths
This study actually answers a question I’ve had for quite a while: why do some ideas move through academia even when they’re not that good, while great insights sometimes seem to take ages to get around? This new study from Allison Morgan and her colleagues suggests the answer is closely related to epidemiology and memes, but mostly comes down to… prestige – and only once in a while to the quality of the idea.
From the press release:
How ideas move through academia may depend on where those ideas come from–whether from big-name universities or less prestigious institutions–as much as their quality, a recent study from the University of Colorado Boulder suggests.
The new research borrows a page from epidemiology, exploring how ideas might flow from university to university, almost like a disease. The findings from CU Boulder’s Allison Morgan and her colleagues suggest that the way that universities hire new faculty members may give elite schools an edge in spreading their research to others.
In particular, the team simulated how ideas might spread out faster from highly-ranked schools than from those at the bottom of the pile–even when the ideas weren’t that good. The results suggest that academia may not function like the meritocracy that some claim, said Morgan, a graduate student in the Department of Computer Science.
She and her colleagues began by drawing on a dataset, originally published in 2015, that described the hiring histories of more than 5,000 faculty members in 205 computer science programs in the U.S. and Canada.
That dataset revealed what might be a major power imbalance in the field–with a small number of universities training the majority of tenure track faculty across both countries.
“This paper was really about investigating the implications of the imbalance,” Morgan said. “What does it mean if the elite institutions are producing the majority of the faculty who are, in turn, training the future teachers in the field?”
To answer that question, the researchers turned the 2015 dataset into a network of connected universities. If a university placed one of its Ph.D. students in a job at another school, then those two schools were linked. The resulting “roadmap” showed how faculty might carry ideas from their graduate schools to the universities that hired them.
The researchers then ran thousands of simulations on that network, allowing ideas that began at one school to percolate down to others. The team adjusted for the quality of ideas by making some more likely to shift between nodes than others.
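The percolation the press release describes can be sketched as a simple susceptible-infected simulation on the hiring network. The code below is a minimal illustration, not the authors’ actual model: the toy network, the school names, and the transmissibility value (a stand-in for idea quality) are all made up.

```python
import random

def simulate_spread(network, seed, transmissibility, rounds=100):
    """SI-style spread of an 'idea' over a faculty hiring network.

    network: dict mapping each university to the schools where it places
    its Ph.D. students. Each round, every department that has adopted the
    idea passes it to each hiring partner with probability
    `transmissibility` (a stand-in for idea quality). Returns the fraction
    of the network eventually reached.
    """
    adopted = {seed}
    for _ in range(rounds):
        newly = set()
        for school in adopted:
            for hired_at in network.get(school, ()):
                if hired_at not in adopted and random.random() < transmissibility:
                    newly.add(hired_at)
        if not newly:  # idea has stalled
            break
        adopted |= newly
    return len(adopted) / len(network)

# Toy hierarchy: the "elite" school trains faculty for everyone downstream,
# while low-prestige schools place no one.
toy_network = {
    "Elite U": ["Mid U1", "Mid U2"],
    "Mid U1": ["Low U1"],
    "Mid U2": ["Low U2"],
    "Low U1": [],
    "Low U2": [],
}

random.seed(0)
# Same medium-quality idea (transmissibility 0.4), different starting points:
reach_elite = sum(simulate_spread(toy_network, "Elite U", 0.4) for _ in range(1000)) / 1000
reach_low = sum(simulate_spread(toy_network, "Low U1", 0.4) for _ in range(1000)) / 1000
print(reach_elite > reach_low)  # the same idea travels further from the top
```

Because placement flows mostly downward in the prestige hierarchy, an idea seeded at the bottom has nowhere to go, which is exactly the asymmetry the study reports.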
The findings, published in October in the journal EPJ Data Science, show that it matters where an idea gets started. When mid-level ideas began at less prestigious schools, they tended to stall, not reaching the full network. The same wasn’t true for so-so thinking from major universities.
“If you start a medium- or low-quality idea at a prestigious university, it goes much farther in the network and can infect more nodes than an idea starting at a less prestigious university,” Morgan said.
That pattern held up even when the researchers introduced a bit of randomness to the mix–allowing ideas to pop from one end of the network to another by chance. That simulated how university departments might learn about an idea through factors other than hiring, such as journals, conferences or word of mouth.
The results seem to paint a dim picture of academia, said study coauthor Samuel Way, a postdoctoral research associate in computer science. He explained that recent sociological research demonstrates that workplaces benefit by having a lot of diversity–whether in gender, race or in how employees are trained.
“If you have five people who all have the exact same training and look at the world through the same lens, and you give them a problem that stumps one of them, it might stump all of them,” Way said.
He added that it may be possible for the academic world to blunt the impact of the sorts of biases the team revealed, including by adopting practices like double-blind peer review–in which the reviewers of a study can’t see the names or affiliations of the authors.
“In a setting like science where it’s incredibly difficult to come up with an objective measure of the quality of an idea, double-blind peer review may be the best you can do,” Way said.
The study did, however, contain a bit of good news: The bias toward big-name universities mattered a lot less for high-quality ideas. In other words, great thinking can still catch fire in academia, no matter where it comes from.
“I think it’s heartwarming in a way,” Morgan said. “We see that if you have a high-quality idea, and you’re from the bottom of the hierarchy, you have as good a chance of sending that idea across the network, as if it came from the top.”
Abstract of the study:
The spread of ideas in the scientific community is often viewed as a competition, in which good ideas spread further because of greater intrinsic fitness, and publication venue and citation counts correlate with importance and impact. However, relatively little is known about how structural factors influence the spread of ideas, and specifically how where an idea originates might influence how it spreads. Here, we investigate the role of faculty hiring networks, which embody the set of researcher transitions from doctoral to faculty institutions, in shaping the spread of ideas in computer science, and the importance of where in the network an idea originates. We consider comprehensive data on the hiring events of 5032 faculty at all 205 Ph.D.-granting departments of computer science in the U.S. and Canada, and on the timing and titles of 200,476 associated publications. Analyzing five popular research topics, we show empirically that faculty hiring can and does facilitate the spread of ideas in science. Having established such a mechanism, we then analyze its potential consequences using epidemic models to simulate the generic spread of research ideas and quantify the impact of where an idea originates on its long-term diffusion across the network. We find that research from prestigious institutions spreads more quickly and completely than work of similar quality originating from less prestigious institutions. Our analyses establish the theoretical trade-offs between university prestige and the quality of ideas necessary for efficient circulation. Our results establish faculty hiring as an underlying mechanism that drives the persistent epistemic advantage observed for elite institutions, and provide a theoretical lower bound for the impact of structural inequality in shaping the spread of ideas in science.
This Twitter-rant is too good not to share here (H/T Tim van der Zee):
As professional myth busters, Paul, Casper and I are always interested in how to beat myths. This new study both confirms and nuances a previous insight: those on the fence about an idea can be swayed by hearing facts related to the misinformation. Do note that, as is often the case, this study used a relatively small sample.
From the press release:
After conducting an experimental study, the researchers found that listening to a speaker repeating a belief does, in fact, increase the believability of the statement, especially if the person somewhat believes it already. But for those who haven’t committed to particular beliefs, hearing correct information can override the myths.
For example, if a policymaker wants people to forget the inaccurate belief that “Reading in dim light can damage children’s eyes,” they could instead repeatedly say, “Children who spend less time outdoors are at greater risk to develop nearsightedness.” Those on the fence are more likely to remember the correct information and, more importantly, less likely to remember the misinformation, after repeatedly hearing the correct information. People with entrenched beliefs are likely not to be swayed either way.
The sample was not nationally representative, so the researchers urge caution when extrapolating the findings to the general population, but they believe the findings would replicate on a larger scale. The findings, published in the academic journal Cognition, have the potential to guide interventions aimed at correcting misinformation in vulnerable communities.
“In today’s informational environment, where inaccurate information and beliefs are widespread, policymakers would be well served by learning strategies to prevent the entrenchment of these beliefs at a population level,” said study co-author Alin Coman, assistant professor of psychology at Princeton’s Woodrow Wilson School of Public and International Affairs and Department of Psychology.
Coman and Madalina Vlasceanu, a graduate student at Princeton, conducted a main study, with a final total of 58 participants, and a replication study, with 88 participants.
In the main study, a set of 24 statements was distributed to participants. These statements, which contained eight myths and 16 correct pieces of information in total, fell into four categories: nutrition, allergies, vision and health.
Myths were statements commonly endorsed by people as true, but that are actually false, such as “Crying helps babies’ lungs develop.” The correct and related piece of information would be: “Pneumonia is the prime cause of death in children.”
First, the participants were asked to carefully read these statements, which were described as statements “frequently encountered on the internet.” After reading, participants rated whether they believed the statement was true on a scale from one to seven (one being “not at all” to seven being “very much so.”) Next, they listened to an audio recording of a person remembering some of the beliefs the participants had read initially. In the recording, the speaker spoke naturally, as someone would recalling information. The listeners were asked to determine whether the speakers were accurately remembering the original content. Each participant listened to an audio recording containing two of the correct statements from each of two categories.
Participants were then given the category name — nutrition, allergies, vision, or health — and were instructed to recall the statements they first read. Finally, they were presented with the initial statements and asked to rate them based on accuracy and scientific support.
The researchers found that listeners do experience changes in their beliefs after listening to information shared by another person. In particular, the ease with which a belief comes to mind affects its believability.
If a belief was mentioned by the person in the audio, it was remembered better and believed more by the listener. If, however, a belief was from the same category as the mentioned belief (but not mentioned itself), it was more likely to be forgotten and believed less by the listener. These effects of forgetting and believing occur for both accurate and inaccurate beliefs.
The results are particularly meaningful for policymakers interested in having an impact at a community level, especially for health-relevant inaccurate beliefs. Coman and his collaborators are currently expanding upon this study, looking at 12-member groups where people are exchanging information in a lab-created social network.
Abstract of the study:
Belief endorsement is rarely a fully deliberative process. Oftentimes, one’s beliefs are influenced by superficial characteristics of the belief evaluation experience. Here, we show that by manipulating the mnemonic accessibility of particular beliefs we can alter their believability. We use a well-established socio-cognitive paradigm (i.e., the social version of the selective practice paradigm) to increase the mnemonic accessibility of some beliefs and induce forgetting in others. We find that listening to a speaker selectively practicing beliefs results in changes in believability. Beliefs that are mentioned become mnemonically accessible and exhibit an increase in believability, while beliefs that are related to those mentioned experience mnemonic suppression, which results in decreased believability. Importantly, the latter effect occurs regardless of whether the belief is scientifically accurate or inaccurate. Furthermore, beliefs that are endorsed with moderate-strength are particularly susceptible to mnemonically-induced believability changes. These findings, we argue, have the potential to guide interventions aimed at correcting misinformation in vulnerable communities.
There is an interesting new study published in Frontiers suggesting that belief in neuromyths doesn’t seem to matter: the best teachers believe in neuromyths just as much as regular teachers do. You can check the study here and read a good analysis by Christian Jarrett at BPS Digest here. Ok, I want to add maybe just one thing to the analysis. The researchers picked teachers who had been selected as winners of best-teacher elections. The authors acknowledge this is a weak spot, as we don’t know how those teachers were selected. If you read the new book by Dylan Wiliam, you will discover that it’s almost impossible to find out which teachers are actually really good and which ones are doing a bad job. It’s hard to tell the difference between a bad teacher having a good day and a great teacher having a bad day.
It may surprise you that at first I really hoped this study was correct, for several reasons:
- it would make my life much easier as I can stop writing about myths and move on,
- our children would have great teachers even if they believe in nonsense.
But then I remembered that previous research has shown over and over again that people who are really interested in the brain are more easily taken in by neuromyths. So it seems plausible that really good teachers simply seek out a lot of material that might help them become even better teachers. Which is nice, and I think actually the case.
But then I suddenly realized how dangerous this result could potentially be. If it is correct, it could also mean that whatever we teach our teachers has little impact. In that case, what about teacher training? The sad thing is, if you look at the work by John Hattie, there is sometimes a case to be made for that. But it could also mean that some people can teach and others simply can’t… by nature, because their knowledge doesn’t make much of a difference.
Of course it’s all a bit more complicated than that, and there are probably often big differences between what people think and how they act. And even more: sometimes a teacher will act the same way whether or not they believe a myth, because the action is identical but the reasoning behind it differs.
But I do want to argue that the authors of the study have overlooked a potential danger of neuromyths. Teaching those myths often takes valuable time away from professional development and teacher training, time that isn’t spent on effective methods. Another possible explanation of the results could well be that even the best teachers don’t know these excellent techniques. In that case there is still a lot to gain. Which again is good news. Well, kind of.
In the meantime I need to get back to writing our second book on myths about learning and education.
I just received notice of a preprint of a new review study on learning styles by
and they add an interesting element to the learning styles discussion. Besides the knowledge that adapting to learning styles – and by extension multiple intelligences – doesn’t work and isn’t supported by evidence, they also explain why it’s improbable that this will ever be possible. I know, this isn’t entirely new, but it’s worth sharing. No, it’s not because they can look into the future; they base it on present knowledge and the trends that can be observed in how the research has been evolving:
…from the discussion on the functioning of the brain, it is clear that learning styles violate the connectivity principle. Additionally, most of the evidence indicates that teaching in the styles preferred by students does not improve academic performance. However, only 14 studies deny this hypothesis (Cuevas & Dawson, 2018; Moser & Zumbach, 2018; Pashler, McDaniel, Rohrer, & Bjork, 2008), 7 prove it (Cuevas & Dawson, 2018; Moser & Zumbach, 2018) and 6 are nonconforming (Moser & Zumbach, 2018). Therefore, the trend of the evidence on learning styles is negative, although limited, and since the construct does not show connectivity, it can be classified as an improbable phenomenon. Consequently, the recommendation made by Coffield, Moseley, Hall, & Ecclestone (2004, p. 140), of not basing pedagogical interventions on learning styles remains valid.
What do they mean by connectivity principle?
This principle establishes that any theory that attempts to explain a phenomenon must consider previously confirmed empirical facts directly related to the phenomenon. In such a way that it does not contradict this verified knowledge.
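The preprint’s “simple algorithm” isn’t spelled out in the passages quoted here, but the idea of combining an evidence tally with the connectivity check can be sketched roughly as follows. The function name, thresholds and labels below are my own guesses for illustration, not the authors’ actual procedure.

```python
def classify_phenomenon(studies_for, studies_against, nonconforming, violates_connectivity):
    """Hypothetical tally: combine the direction of the accumulated
    evidence with the connectivity principle to label a construct as
    proven, debatable, or improbable."""
    if studies_against > studies_for:
        # Negative evidence trend: a connectivity violation tips it to "improbable".
        return "improbable" if violates_connectivity else "debatable"
    if studies_for > studies_against:
        # Positive trend counts as "proven" only without contradicting
        # studies or a connectivity violation.
        if nonconforming == 0 and not violates_connectivity:
            return "proven"
        return "debatable"
    return "debatable"  # evidence evenly split

# Learning styles, per the counts quoted above:
# 14 studies against, 7 for, 6 nonconforming, plus a connectivity violation.
print(classify_phenomenon(7, 14, 6, True))  # → improbable
```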
Abstract from the preprint:
Learning styles are a widespread idea that has high levels of acceptance in education and psychology. The promises of adopting the construct range from gains in academic performance, to the development of respect for the self and others. Nevertheless, from a scientific perspective it remains highly controversial. Most studies indicate that matching teaching to the learning styles of students does not improve learning, and that their psychometric instruments do not show enough reliability and validity. In this sense, this paper investigated if the postulates of learning styles are consistent with the way the human brain processes information. Moreover, the trend of the accumulated evidence about learning styles was analyzed, using a simple algorithm, to determine if they are a proven, debatable, improbable or denied phenomenon. Results show: (1) that learning styles, along with the multiple intelligence theory and the left or right-brained hypothesis, are not compatible with what is currently known about the inner workings of the brain; (2) that the trend of the evidence, although still limited, does not favor learning styles; (3) that as a phenomenon, learning styles are classified as improbable.
It’s a myth we already discussed in our first book on myths about learning and education, but people keep dreaming of learning in their sleep.
This new study gives more insights about what is and isn’t possible: while the human brain is still able to perceive sounds during sleep, it is unable to group these sounds according to their organization in a sequence.
From the press release:
Hypnopedia, or the ability to learn during sleep, was popularized in the ’60s, with for example the dystopia Brave New World by Aldous Huxley, in which individuals are conditioned to their future tasks during sleep. This concept has been progressively abandoned due to a lack of reliable scientific evidence supporting in-sleep learning abilities.
Recently however, a few studies have shown that the acquisition of elementary associations such as stimulus-reflex responses is possible during sleep, both in humans and in animals. Nevertheless, it is not clear whether sleep allows for more sophisticated forms of learning.
A study published this August 6 in the journal Scientific Reports by researchers from the ULB Neuroscience Institute (UNI) shows that while our brain is able to continue perceiving sounds during sleep as it does when awake, the ability to group these sounds according to their organization in a sequence is only present at wakefulness, and completely disappears during sleep.
Juliane Farthouat, while a Research Fellow of the FNRS under the direction of Philippe Peigneux, professor at the Faculty of Psychological Science and Education at Université libre de Bruxelles, ULB, used magnetoencephalography (MEG) to record the cerebral activity mirroring the statistical learning of series of sounds, both during slow wave sleep (a part of sleep during which brain activity is highly synchronized) and during wakefulness.
During sleep, participants were exposed to fast flows of pure sounds, either randomly organized or structured in such a way that the auditory stream could be statistically grouped into sets of 3 elements.
During sleep, brain MEG responses demonstrated preserved detection of isolated sounds, but no response reflecting statistical clustering.
During wakefulness, however, all participants presented brain MEG responses reflecting the grouping of sounds into sets of 3 elements.
The results of this study suggest intrinsic limitations in de novo learning during slow wave sleep that might confine the sleeping brain’s learning capabilities to simple, elementary associations.
Abstract of the study:
Hypnopedia, or the capacity to learn during sleep, is debatable. De novo acquisition of reflex stimulus-response associations was shown possible both in man and animal. Whether sleep allows more sophisticated forms of learning remains unclear. We recorded during diurnal Non-Rapid Eye Movement (NREM) sleep auditory magnetoencephalographic (MEG) frequency-tagged responses mirroring ongoing statistical learning. While in NREM sleep, participants were exposed at non-awakenings thresholds to fast auditory streams of pure tones, either randomly organized or structured in such a way that the stream statistically segmented in sets of 3 elements (tritones). During NREM sleep, only tone-related frequency-tagged MEG responses were observed, evidencing successful perception of individual tones. No participant showed tritone-related frequency-tagged responses, suggesting lack of segmentation. In the ensuing wake period however, all participants exhibited robust tritone-related responses during exposure to statistical (but not random) streams. Our data suggest that associations embedded in statistical regularities remain undetected during NREM sleep, although implicitly learned during subsequent wakefulness. These results suggest intrinsic limitations in de novo learning during NREM sleep that might confine the NREM sleeping brain’s learning capabilities to simple, elementary associations. It remains to be ascertained whether it similarly applies to REM sleep.
I really like science, I like the self-correcting part of science even more.
Mobile phones and other wireless devices that produce electromagnetic fields (EMF) and pulsed radiofrequency radiation (RFR) are widely documented to cause potentially harmful health impacts that can be detrimental to young people. New epigenetic studies are profiled in this review to account for some neurodevelopmental and neurobehavioral changes due to exposure to wireless technologies. Symptoms of retarded memory, learning, cognition, attention, and behavioral problems have been reported in numerous studies and are similarly manifested in autism and attention deficit hyperactivity disorders, as a result of EMF and RFR exposures where both epigenetic drivers and genetic (DNA) damage are likely contributors. Technology benefits can be realized by adopting wired devices for education to avoid health risk and promote academic achievement.
Sounds pretty alarming, no? Should we worry? Well, no.
The respected journal Child Development recently published a commentary that attributed a number of negative health consequences to RF radiation, from cancer to infertility and even autism (Sage & Burgio, 2017). It is our view that this piece has potential to cause serious harm and should never have been published. But how do we justify such a damning verdict? In considering our responses, we realized that this case raised more general issues about distinguishing scientifically valid from invalid views when evaluating environmental impacts on physical and psychological health, and we offer here some more general guidelines for editors and reviewers who may be confronted with similar issues. As shown in Table 1, we identify seven questions that can be asked about causal claims, using the Sage and Burgio (2017) article to illustrate these.
That’s right: David Grimes and Dorothy Bishop took a closer look at the alarming article, and well…
Exposure to nonionizing radiation used in wireless communication remains a contentious topic in the public mind—while the overwhelming scientific evidence to date suggests that microwave and radio frequencies used in modern communications are safe, public apprehension remains considerable. A recent article in Child Development has caused concern by alleging a causative connection between nonionizing radiation and a host of conditions, including autism and cancer. This commentary outlines why these claims are devoid of merit, and why they should not have been given a scientific veneer of legitimacy. The commentary also outlines some hallmarks of potentially dubious science, with the hope that authors, reviewers, and editors might be better able to avoid suspect scientific claims.
Last week I received a complaint that I was too kind to Howard Gardner because we didn’t call his multiple intelligences theory a myth. The reason we used the label ‘nuanced’ is that the basic philosophy, that people differ, can’t be labeled as wrong. Still, we gave a lot of reasons why one should be cautious about this very popular theory. And lately Gardner himself has outed the theory as outdated and ill-researched.
But this paragraph tweeted by Stuart Ritchie is making me cringe:
This quote is coming from this video:
Now, the video is older than the self-debunking I mentioned before on this blog, but Casper made a good point on Twitter about the mixed message Gardner is sending:
So: Gardner says his theory isn’t supported at all. He even acknowledges that the theories he opposes do have scientific evidence supporting them, yet he still defends his own theory because of its usefulness. Btw, this is one of the most common replies to any of the myths Paul, Casper and I have tackled: “hey, I know it’s rubbish, but I think it is useful.”
Remember this magazine cover?
Looks pretty similar to this cover from 1976:
Or this one from 1985?
Or do you remember this picture?
Yeah, I already debunked this story here, but these pictures suit the image of the egocentric, smartphone-obsessed youth, and surely this selfie-taking generation must be more narcissistic? Well… no.
We already knew from 2012 research by Twenge et al. that the narcissistic turn may actually have happened in the eighties, but now there is a new study by Wetzel et al. stating that the present group of students… is probably less narcissistic than the generations before them, and that there has never been an epidemic of narcissism at all, as this conclusion sums it up:
In contrast to popular opinion, our findings did not show that today’s college students are more narcissistic than college students in the 1990s or the 2000s, at least in the three universities examined in the present study. In fact, we found small decreases both in overall narcissism and in its leadership, vanity, and entitlement facets. Importantly, these decreases already started between the 1990s and the 2000s and continued more strongly in the late 2000s and 2010s. Our study suggests that today’s college students are less narcissistic than their predecessors and that there may never have been an epidemic of narcissism.
Abstract of the study:
Are recent cohorts of college students more narcissistic than their predecessors? To address debates about the so-called “narcissism epidemic,” we used data from three cohorts of students (1990s: N = 1,166; 2000s: N = 33,647; 2010s: N = 25,412) to test whether narcissism levels (overall and specific facets) have increased across generations. We also tested whether our measure, the Narcissistic Personality Inventory (NPI), showed measurement equivalence across the three cohorts, a critical analysis that had been overlooked in prior research. We found that several NPI items were not equivalent across cohorts. Models accounting for nonequivalence of these items indicated a small decline in overall narcissism levels from the 1990s to the 2010s (d = −0.27). At the facet level, leadership (d = −0.20), vanity (d = −0.16), and entitlement (d = −0.28) all showed decreases. Our results contradict the claim that recent cohorts of college students are more narcissistic than earlier generations of college students.
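For readers unfamiliar with the d values in the abstract: Cohen’s d is the difference between two group means divided by their pooled standard deviation, so d = −0.27 means the later cohort scores about a quarter of a standard deviation lower. A minimal sketch of the calculation; the scores below are made up for illustration, not actual NPI data.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Made-up narcissism-style scores for two small cohorts (illustration only):
cohort_2010s = [12, 14, 13, 15, 11, 13]
cohort_1990s = [15, 16, 14, 17, 15, 16]
print(round(cohens_d(cohort_2010s, cohort_1990s), 2))  # negative: later cohort scores lower
```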