Whenever I speak about proven programs in education, someone always brings up what they consider a damning point. “Sure, there are programs proven to work. But it all comes down to the principal. A great principal can get any program to work. A weak principal can’t get any program to work. So if it’s all about the quality of principals, what do proven programs add?”
To counter this idea, consider Danica Patrick, one of the winningest NASCAR racecar drivers of a few years ago. If you gave Danica and a less talented driver identical cars on an identical track, Danica was sure to win. But instead of the stock car she drove, what if you gave Danica a Ford Fiesta? Obviously, she wouldn’t have a chance. It takes a great car and a great driver to win NASCAR races.
Back to school principals: the same principle applies. Of course it…
It’s a yearly tradition for Gartner to publish a string of hype cycles, including one for education in July. And I admit: I didn’t pay attention to it.
So there is a new one and, besides the many issues one can have with this company’s hype cycles in general, I do think this edition is pretty bland, as if anyone with a bit of knowledge about EdTech could have written it.
- On the Rise
  - AV Over IP in Education
  - Social CRM: Education
  - Emotion AI
  - Virtual Reality/Augmented Reality Applications in Education
- At the Peak
  - Blockchain in Education
  - Artificial Intelligence Education Applications
  - Design Thinking
  - Exostructure Strategy
  - Classroom 3D Printing
  - Digital Assessment
  - SaaS SIS
- Sliding Into the Trough
  - Education Analytics
  - Competency-Based Education Platforms
  - Bluetooth Beacons
  - Semantic Knowledge Graphing
  - Citizen Developers
  - Digital Credentials
  - Alumni CRM
  - Master Data Management
  - Adaptive Learning Platforms
- Climbing the Slope
  - Student Retention CRM
  - Enterprise Video Content Management
- Entering the Plateau
  - Integration Brokerage
Do read this great little tweet tirade by Benjamin Riley (Deans for Impact) on #edtech, predicting the future, and cognitive science.
When I read the first tweet of this thread by Benjamin Riley, I had the feeling we were in for something good. And Benjamin didn’t disappoint. I won’t make a habit of posting something like this on this blog, but I did want to share it here, as I know that many of my readers would otherwise miss it:
And thus the conclusion?
It’s a myth we already discussed in our first book on myths about learning and education, but people keep dreaming of learning in their sleep.
This new study gives more insights about what is and isn’t possible: while the human brain is still able to perceive sounds during sleep, it is unable to group these sounds according to their organization in a sequence.
From the press release:
Hypnopedia, or the ability to learn during sleep, was popularized in the ’60s, with, for example, the dystopia Brave New World by Aldous Huxley, in which individuals are conditioned for their future tasks during sleep. This concept has been progressively abandoned due to a lack of reliable scientific evidence supporting in-sleep learning abilities.
Recently, however, a few studies have shown that the acquisition of elementary associations, such as stimulus-reflex responses, is possible during sleep, both in humans and in animals. Nevertheless, it is not clear whether sleep allows for more sophisticated forms of learning.
A study published this August 6 in the journal Scientific Reports by researchers from the ULB Neuroscience Institute (UNI) shows that while our brain is able to continue perceiving sounds during sleep as it does during wakefulness, the ability to group these sounds according to their organization in a sequence is only present during wakefulness, and completely disappears during sleep.
Juliane Farthouat, while a Research Fellow of the FNRS under the direction of Philippe Peigneux, professor at the Faculty of Psychological Science and Education at Université libre de Bruxelles, ULB, used magnetoencephalography (MEG) to record the cerebral activity mirroring the statistical learning of series of sounds, both during slow wave sleep (a part of sleep during which brain activity is highly synchronized) and during wakefulness.
During sleep, participants were exposed to fast flows of pure sounds, either randomly organized or structured in such a way that the auditory stream could be statistically grouped into sets of 3 elements.
During sleep, brain MEG responses demonstrated preserved detection of isolated sounds, but no response reflecting statistical clustering.
During wakefulness, however, all participants presented brain MEG responses reflecting the grouping of sounds into sets of 3 elements.
The results of this study suggest intrinsic limitations in de novo learning during slow wave sleep, which might confine the sleeping brain’s learning capabilities to simple, elementary associations.
Abstract of the study:
Hypnopedia, or the capacity to learn during sleep, is debatable. De novo acquisition of reflex stimulus-response associations was shown possible both in man and animal. Whether sleep allows more sophisticated forms of learning remains unclear. We recorded during diurnal Non-Rapid Eye Movement (NREM) sleep auditory magnetoencephalographic (MEG) frequency-tagged responses mirroring ongoing statistical learning. While in NREM sleep, participants were exposed at non-awakenings thresholds to fast auditory streams of pure tones, either randomly organized or structured in such a way that the stream statistically segmented in sets of 3 elements (tritones). During NREM sleep, only tone-related frequency-tagged MEG responses were observed, evidencing successful perception of individual tones. No participant showed tritone-related frequency-tagged responses, suggesting lack of segmentation. In the ensuing wake period however, all participants exhibited robust tritone-related responses during exposure to statistical (but not random) streams. Our data suggest that associations embedded in statistical regularities remain undetected during NREM sleep, although implicitly learned during subsequent wakefulness. These results suggest intrinsic limitations in de novo learning during NREM sleep that might confine the NREM sleeping brain’s learning capabilities to simple, elementary associations. It remains to be ascertained whether it similarly applies to REM sleep.
Well, funny or, better said, sad on Sunday…
This is indeed a very relevant paper that adds to the insights I once shared from Per Kornhall: https://theeconomyofmeaning.com/2016/04/24/this-talk-by-per-kornhall-at-researched-about-education-in-sweden-is-a-mustsee/
This excerpt from the conclusion of the paper is also quite damning:
The sharp rise in absenteeism, ADHD diagnoses, depression, and anxiety among Swedish pupils is not unexpected in a learning environment that continuously overloads the pupils’ working memory, as they have to piece together information on their own. Supporting evidence for the view that the postmodern, social-constructivist paradigm has contributed to the increase in psychiatric disorders among Swedish adolescents comes from Québec.
Haeck, Lefebvre, and Merrigan (2014) found that hyperactivity, anxiety, and physical aggression increased among Québecois pupils relative to pupils in the rest of Canada following a school reform in Québec in the early 2000s that was similar to the Swedish reforms.
Kindly helpers have pointed me towards a new working paper from Magnus Henrekson and Johan Wennström. Henrekson is a professor of economics and heads the Research Institute of Industrial Economics in Sweden. Wennström is a journalist, former government adviser and PhD student. They are concerned with the state of Swedish education.
I have written about Swedish education before. No doubt, there has been a decline in standards, but it can be hard to figure out why. My knowledge of the system has been largely based on third-person accounts, speculation, and a newspaper article by a Swedish professor that I had translated using Google (and which now appears to be paywalled).
With their paper, Henrekson and Wennström have provided much needed detail and they have been kind enough to publish it in English. It is a compelling read.
Previously, there have been two arguments about Sweden. The first is that any…
Are the criticisms about randomized controlled trials in education correct? (Best Evidence in Brief)
There is a new Best Evidence in Brief, and the new edition has a bit of a meta-subject: it’s not about research but about research about research (you’ll have to read that twice, I guess).
The use of randomized controlled trials (RCTs) in education research has increased over the last 15 years. However, the use of RCTs has also been subject to criticism, with four key criticisms being that it is not possible to carry out RCTs in education; the research design of RCTs ignores context and experience; RCTs tend to generate simplistic universal laws of “cause and effect”; and that they are descriptive and contribute little to theory.

To assess these four key criticisms, Paul Connolly and colleagues conducted a systematic review of RCTs in education research between 1980 and 2016 in order to consider the evidence in relation to the use of RCTs in education practice.

The systematic review found a total of 1,017 RCTs completed and reported between 1980 and 2016, of which just over three-quarters have been produced in the last 10 years. Just over half of all RCTs were conducted in North America and just under a third in Europe. This finding addresses the first criticism, and demonstrates that, overall, it is possible to conduct RCTs in education research.

While the researchers also find evidence to oppose the other key criticisms, the review suggests that some progress remains to be made. The article concludes by outlining some key challenges for researchers undertaking RCTs in education.
Background: The use of randomised controlled trials (RCTs) in education has increased significantly over the last 15 years. However, their use has also been subject to sustained and rather trenchant criticism from significant sections of the education research community. Key criticisms have included the claims that: it is not possible to undertake RCTs in education; RCTs are blunt research designs that ignore context and experience; RCTs tend to generate simplistic universal laws of ‘cause and effect’; and that they are inherently descriptive and contribute little to theory.
Purpose: This article seeks to assess the above four criticisms of RCTs by considering the actual evidence in relation to the use of RCTs in education in practice.
Design and methods: The article is based upon a systematic review that has sought to identify and describe all RCTs conducted in educational settings and including a focus on educational outcomes between 1980 and 2016. The search is limited to articles and reports published in English.
Results: The systematic review found a total of 1017 unique RCTs that have been completed and reported between 1980 and 2016. Just over three quarters of these have been produced over the last 10 years, reflecting the significant increase in the use of RCTs in recent years. Overall, just over half of all RCTs identified were conducted in North America and a little under a third in Europe. The RCTs cover a wide range of educational settings and focus on an equally wide range of educational interventions and outcomes. The findings not only disprove the claim that it is not possible to do RCTs in education but also provide some supporting evidence to challenge the other three key criticisms outlined earlier.
Conclusions: While providing evidence to counter the four criticisms outlined earlier, the article suggests that there remains significant progress to be made. The article concludes by outlining some key challenges for researchers undertaking RCTs in education.
There is a special issue of Computers in Human Behavior on learning from video, and in their editorial Fiorella and Mayer give an overview of the effective and ineffective methods that are trialed in the special issue:
What are the effective methods?
…two techniques that appear to improve learning outcomes with instructional video are segmenting—breaking the video into parts and allowing students to control the pace of the presentation—and mixed perspective—filming from both a first-person perspective and third-person.
And what isn’t worth the effort?
…some features that do not appear to be associated with improved learning outcomes with instructional video are matching the gender of the instructor to the gender of the learner, having the instructor’s face on the screen, inserting pauses throughout the video, and adding practice without feedback.
Abstract of the editorial:
In this commentary, we examine the papers in a special issue on “Developments and Trends in Learning with Instructional Video”. In particular, we focus on basic findings concerning which instructional features improve learning with instructional video (i.e., breaking the lesson into segments paced by the learner; recording from both first- and third-person perspectives) and which features or learner attributes do not (i.e., matching the instructor’s gender to the learner’s gender; having the instructor’s face on the screen; adding practice without feedback; inserting pauses throughout the video; and spatial ability). In addition, we offer recommendations for future work on designing effective video lessons.
I translated Daniel Willingham’s book When Can You Trust the Experts? into Dutch because I think his book is so important. Daniel sent out this tweet yesterday with the mission statement generator from this book.
It’s more than a bit tongue in cheek…