Happy World Teacher Day and remember: every lesson shapes a life!

Leave a comment

Filed under Education

Yes, retrieval and testing do work, but what limits the effect? Insights from a new meta-analysis

A new meta-analysis confirms that memory retrieval can be beneficial for learning, but it also shows there are limits:

  • The frequency and difficulty of the questions matter.
  • Simply asking a question is not enough; students must actually respond to see a positive effect on learning.

You probably want to know: how much is too much? Well, I’m afraid you’re in for a bit of a disappointment, as you can learn from the press release, which imho explains it better than the original paper:

“Frequency is a critical factor. There appears to be a trade-off in how often you test students,” Chan said. “If I lecture nonstop throughout class, this lessens students’ ability to learn the material. However, too many questions, too often, can have a detrimental effect, but we don’t yet know exactly why that happens or how many questions is too many.”

The answer to that question may depend on the length of the lecture and the type or difficulty of the material, Chan said. Given the different dynamics of a class lecture, it may not be possible to develop a universal lecture-to-question ratio. Regardless, Chan says testing students throughout the lecture is a simple step instructors at any level and in any environment can apply to help students learn.

“This is a cheap, effective method and anyone can implement it in their class,” he said. “You don’t need to give every student an iPad or buy some fancy software – you just need to ask questions and have students answer them in class.”

For their analysis, Chan; Christian Meissner, a professor of psychology at Iowa State; and Sara Davis, a postdoctoral fellow at Skidmore College and former ISU graduate student, examined journal articles from the 1970s to 2016, covering more than 150 different experiments. The researchers looked at which factors influence the magnitude of the effect, when it occurs, and when it is reversed.

Why testing helps

There are several explanations as to why testing students is beneficial for new learning. In the meta-analysis, the researchers evaluated four main theories to examine the strengths and weaknesses of these explanations in the existing research. The data strongly supported what the researchers call the integration theory.

“This theory claims that testing enhances future learning by facilitating the association between information on the test and new, especially related, information that is subsequently studied, leading to spontaneous recall of the previously tested information when they learn related information,” Meissner said. “When this testing occurs, people can better tie new information with what they have learned previously, leading them to integrate the old and the new.”

Learning new information requires an encoding process, which is different from the process needed to retrieve that information, the researchers explained. Students are forced to switch between the two when responding to a question. Changing the modes of operation appears to refocus attention and free the brain to do something different.

A majority of the studies in the analysis focused on college students, but some also included older adults, children and people with traumatic brain injuries. The researchers were encouraged to find that testing could effectively enhance learning across all these groups.

“Memory retrieval can optimize learning in situations that require people to maintain attention for an extended period of time. It can be used in class lectures as well as employee training sessions or online webinars,” Davis said. “Future research could examine factors that can maximize this potential.”

Abstract of the meta-analysis:

A growing body of research has shown that retrieval can enhance future learning of new materials. In the present report, we provide a comprehensive review of the literature on this finding, which we term test-potentiated new learning. Our primary objectives were to (a) produce an integrative review of the existing theoretical explanations, (b) summarize the extant empirical data with a meta-analysis, (c) evaluate the existing accounts with the meta-analytic results, and (d) highlight areas that deserve further investigations. Here, we identified four nonexclusive classes of theoretical accounts, including resource accounts, metacognitive accounts, context accounts, and integration accounts. Our quantitative review of the literature showed that testing reliably potentiates the future learning of new materials by increasing correct recall or by reducing erroneous intrusions, and several factors have a powerful impact on whether testing potentiates or impairs new learning. Results of a metaregression analysis provide considerable support for the integration account. Lastly, we discuss areas of under-investigation and possible directions for future research.
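
For readers curious what the “quantitative review” above involves in practice: a random-effects meta-analysis pools effect sizes across studies while allowing for genuine between-study variation, and a metaregression then tests moderators such as question frequency. Below is a minimal sketch of the classic DerSimonian–Laird estimator in plain Python, with made-up effect sizes; it illustrates the general technique only and is not the authors’ analysis code.

```python
# Illustrative only: DerSimonian-Laird random-effects meta-analysis.
# The effect sizes and variances below are invented for the example.
import numpy as np

effects = np.array([0.40, 0.25, 0.55, -0.10, 0.30])   # per-study effect sizes
variances = np.array([0.02, 0.05, 0.04, 0.03, 0.06])  # per-study sampling variances

# Fixed-effect weights and Cochran's Q (heterogeneity statistic).
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed_mean) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(effects)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate and its standard error.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled effect = {pooled:.3f} ± {1.96 * se:.3f} (95% CI), tau² = {tau2:.3f}")
```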

1 Comment

Filed under Education, Research, Review

What Makes a Top Teacher?

3-Star learning experiences

Paul A. Kirschner & Mirjam Neelen

What Makes a Top Teacher? This is a question with both a simple and a complex answer (and probably a whole spectrum in between). First, the simple answer. A top teacher is someone whose efforts inside and outside the classroom have a positive effect on a student’s learning progress, meaning an increase of knowledge and skills. The more progress, the better the teacher.

We can already hear some people mocking or expressing their anger and disgust. “Oh, dear!”, they’ll say (if they try to be polite). They’ll go on to grumble that this is such an old-fashioned thing to say and that a school in the 21st century shouldn’t teach kids ‘things’ but should rather help them to become curious, adaptive and engaged individuals with strong problem-solving and critical thinking skills, give them grit, make them flexible team workers, and so forth. They might…

View original post 1,587 more words

Leave a comment

October 2, 2018 · 7:24 pm

Using AI to discover learning disabilities in children

Whenever you hear or read about artificial intelligence, it’s easy to start dreaming. Ok, I admit: I do. Every time I use Siri I’m reminded what a long way we still have to go, except when my children ask it silly questions. But this study uses AI in a whole different and far more serious way: to check whether humans made a mistake when labelling a child with learning disabilities.

From the press release:

Scientists using machine learning – a type of artificial intelligence – with data from hundreds of children who struggle at school, identified clusters of learning difficulties which did not match the previous diagnosis the children had been given.

The researchers from the Medical Research Council (MRC) Cognition and Brain Sciences Unit at the University of Cambridge say this reinforces the need for children to receive detailed assessments of their cognitive skills to identify the best type of support.

The study, published in Developmental Science, recruited 550 children who were referred to a clinic – the Centre for Attention Learning and Memory – because they were struggling at school.

The scientists say that much of the previous research into learning difficulties has focussed on children who had already been given a particular diagnosis, such as attention deficit hyperactivity disorder (ADHD), an autism spectrum disorder, or dyslexia. By including children with all difficulties regardless of diagnosis, this study better captured the range of difficulties within, and overlap between, the diagnostic categories.

Dr Duncan Astle from the MRC Cognition and Brain Sciences Unit at the University of Cambridge, who led the study said: “Receiving a diagnosis is an important landmark for parents and children with learning difficulties, which recognises the child’s difficulties and helps them to access support. But parents and professionals working with these children every day see that neat labels don’t capture their individual difficulties – for example one child’s ADHD is often not like another child’s ADHD.

“Our study is the first of its kind to apply machine learning to a broad spectrum of hundreds of struggling learners.”

The team did this by supplying the computer algorithm with lots of cognitive testing data from each child, including measures of listening skills, spatial reasoning, problem solving, vocabulary, and memory. Based on these data, the algorithm suggested that the children best fit into four clusters of difficulties.
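
The paper’s abstract (at the end of this post) specifies the method only as unsupervised machine learning with an artificial neural network; to get a rough feel for the clustering idea, here is a sketch that substitutes k-means, a much simpler algorithm, run over randomly generated cognitive scores. The measure names and all numbers are hypothetical placeholders; this is not the authors’ pipeline.

```python
# A toy stand-in for the study's clustering step: k-means over fake data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
measures = ["listening", "spatial_reasoning", "problem_solving",
            "vocabulary", "working_memory"]                        # placeholder names
scores = rng.normal(loc=100, scale=15, size=(550, len(measures)))  # fake children

# Standardize so each cognitive measure contributes on the same scale.
z = StandardScaler().fit_transform(scores)

# Ask for four clusters, mirroring the four groupings reported in the study.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(z)

# Print each cluster's mean z-score profile across the measures.
for label in range(4):
    profile = z[km.labels_ == label].mean(axis=0)
    print(label, dict(zip(measures, profile.round(2))))
```

On real data one would also validate the number of clusters (e.g., with silhouette scores) rather than fixing it at four in advance.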

These clusters aligned closely with other data on the children, such as the parents’ reports of their communication difficulties, and educational data on reading and maths. But there was no correspondence with their previous diagnoses. To check if these groupings corresponded to biological differences, the groups were checked against MRI brain scans from 184 of the children. The groupings mirrored patterns in connectivity within parts of the children’s brains, suggesting that the machine learning was identifying differences that partly reflect underlying biology.

Two of the four groupings identified were: difficulties with working memory skills, and difficulties with processing sounds in words.

Difficulties with working memory – the short-term retention and manipulation of information – have been linked with struggling with maths and with tasks such as following lists. Difficulties in processing the sounds in words, called phonological skills, have been linked with struggling with reading.

Dr Astle said: “Past research that’s selected children with poor reading skills has shown a tight link between struggling with reading and problems with processing sounds in words. But by looking at children with a broad range of difficulties we found unexpectedly that many children with difficulties with processing sounds in words don’t just have problems with reading – they also have problems with maths.

“As researchers studying learning difficulties, we need to move beyond the diagnostic label and we hope this study will assist with developing better interventions that more specifically target children’s individual cognitive difficulties.”

Dr Joni Holmes, from the MRC Cognition and Brain Sciences Unit at the University of Cambridge, who was senior author on the study said: “Our work suggests that children who are finding the same subjects difficult could be struggling for very different reasons, which has important implications for selecting appropriate interventions.”

The other two clusters identified were: children with broad cognitive difficulties in many areas, and children with cognitive test results typical for their age. The researchers noted that children in this age-typical group may still have had other difficulties affecting their schooling, such as behavioural difficulties, which were not included in the machine learning.

Dr Joanna Latimer, Head of Neurosciences and Mental Health at the MRC, said: “These are interesting, early-stage findings which begin to investigate how we can apply new technologies, such as machine learning, to better understand brain function. The MRC funds research into the role of complex networks in the brain to help develop better ways to support children with learning difficulties.”

Abstract of the paper:

Our understanding of learning difficulties largely comes from children with specific diagnoses or individuals selected from community/clinical samples according to strict inclusion criteria. Applying strict exclusionary criteria overemphasizes within group homogeneity and between group differences, and fails to capture comorbidity. Here, we identify cognitive profiles in a large heterogeneous sample of struggling learners, using unsupervised machine learning in the form of an artificial neural network. Children were referred to the Centre for Attention Learning and Memory (CALM) by health and education professionals, irrespective of diagnosis or comorbidity, for problems in attention, memory, language, or poor school progress (n = 530). Children completed a battery of cognitive and learning assessments, underwent a structural MRI scan, and their parents completed behavior questionnaires. Within the network we could identify four groups of children: (a) children with broad cognitive difficulties, and severe reading, spelling and maths problems; (b) children with age‐typical cognitive abilities and learning profiles; (c) children with working memory problems; and (d) children with phonological difficulties. Despite their contrasting cognitive profiles, the learning profiles for the latter two groups did not differ: both were around 1 SD below age‐expected levels on all learning measures. Importantly a child’s cognitive profile was not predicted by diagnosis or referral reason. We also constructed whole‐brain structural connectomes for children from these four groupings (n = 184), alongside an additional group of typically developing children (n = 36), and identified distinct patterns of brain organization for each group. This study represents a novel move toward identifying data‐driven neurocognitive dimensions underlying learning‐related difficulties in a representative sample of poor learners.

Leave a comment

Filed under Education, Psychology, Research

Funny on Sunday: a voice-controlled house is a great idea…

Leave a comment

Filed under Funny

Nothing new: personalized education (but does the article add something new?)

Nihil sub sole novum. We may think the idea of personalized education is new, and defenders of the idea such as Zuckerberg and Gates often refer to a study by Benjamin Bloom from decades ago. But in a new paper published in npj Science of Learning (a Nature partner journal), David Dockterman argues that the idea is much older still. If that’s the case, why didn’t it catch on and, even more important, why would it now?

The article pleads for a new kind of pedagogy – and of course that got me triggered – but then it seems to make many of the mistakes that others thinking about education reform have made before, by not being critical enough about both the need for personalization and its possible consequences. Biesta describes three tasks of education: personal development (subjectification), qualification, and socialization. The author does mention something similar, stating:

It isn’t enough to scale an instructional system around a single aspect of learner need, like content competence or social acceptance. A robust personalized learning model must respond to whatever needs matter for each individual learner.

But the starting point is the individual, and that hides a world view. Nothing wrong with that in itself, but anyone discussing personalization needs to recognize and acknowledge it. It might also partly explain why some reforms have failed over and over again…

Abstract of the paper:

Current initiatives to personalize learning in schools, while seen as a contemporary reform, actually continue a 200+ year struggle to provide scalable, mass, public education that also addresses the variable needs of individual learners. Indeed, some of the rhetoric and approaches reformers are touting today sound very familiar in this historical context. What, if anything, is different this time? In this paper I provide a brief overview of historical efforts to create a scaled system of education for all children that also acknowledged individual learner variability. Through this overview I seek patterns and insights to inform and guide contemporary efforts in personalized learning.

1 Comment

Filed under Education, Review

6 studies and insights every student needs to know!

Today Filip Raes shared these 6 tweets with the world – and I helped him with some of them:

1 Comment

Filed under Education, Research

More research on how to dispel myths: redirect

As professional myth busters, Paul, Casper and I are always interested in how to beat myths. This new study confirms, and at the same time nuances, a previous insight: those on the fence about an idea can be swayed by hearing facts related to the misinformation. Do note that, as is often the case, this study used a relatively small sample.

From the press release:

After conducting an experimental study, the researchers found that listening to a speaker repeating a belief does, in fact, increase the believability of the statement, especially if the person somewhat believes it already. But for those who haven’t committed to particular beliefs, hearing correct information can override the myths.

For example, if a policymaker wants people to forget the inaccurate belief that “Reading in dim light can damage children’s eyes,” they could instead repeatedly say, “Children who spend less time outdoors are at greater risk to develop nearsightedness.” Those on the fence are more likely to remember the correct information and, more importantly, less likely to remember the misinformation, after repeatedly hearing the correct information. People with entrenched beliefs are likely not to be swayed either way.

The sample was not nationally representative, so the researchers urge caution when extrapolating the findings to the general population, but they believe the findings would replicate on a larger scale. The findings, published in the academic journal Cognition, have the potential to guide interventions aimed at correcting misinformation in vulnerable communities.

“In today’s informational environment, where inaccurate information and beliefs are widespread, policymakers would be well served by learning strategies to prevent the entrenchment of these beliefs at a population level,” said study co-author Alin Coman, assistant professor of psychology at Princeton’s Woodrow Wilson School of Public and International Affairs and Department of Psychology.

Coman and Madalina Vlasceanu, a graduate student at Princeton, conducted a main study, with a final total of 58 participants, and a replication study, with 88 participants.

In the main study, a set of 24 statements was distributed to participants. These statements, which contained eight myths and 16 correct pieces of information in total, fell into four categories: nutrition, allergies, vision and health.

Myths were statements commonly endorsed by people as true, but that are actually false, such as “Crying helps babies’ lungs develop.” The correct and related piece of information would be: “Pneumonia is the prime cause of death in children.”

First, the participants were asked to carefully read these statements, which were described as statements “frequently encountered on the internet.” After reading, participants rated on a scale from one to seven how much they believed each statement was true (one being “not at all,” seven being “very much so”). Next, they listened to an audio recording of a person remembering some of the beliefs the participants had read initially. In the recording, the speaker spoke naturally, as someone recalling information would. The listeners were asked to determine whether the speaker was accurately remembering the original content. Each participant listened to an audio recording containing two of the correct statements from each of two categories.

Participants were then given the category name — nutrition, allergies, vision, or health — and were instructed to recall the statements they first read. Finally, they were presented with the initial statements and asked to rate them based on accuracy and scientific support.

The researchers found that listeners do experience changes in their beliefs after listening to information shared by another person. In particular, the ease with which a belief comes to mind affects its believability.

If a belief was mentioned by the person in the audio, it was remembered better and believed more by the listener. If, however, a belief was from the same category as the mentioned belief (but not mentioned itself), it was more likely to be forgotten and believed less by the listener. These effects of forgetting and believing occur for both accurate and inaccurate beliefs.
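
To make the design concrete, here is a hypothetical scoring sketch for data like these: each statement gets a pre- and post-rating (1–7) and a condition tag, “mentioned” in the recording, “related” (same category but not mentioned), or “control” (from an unpracticed category). All values and labels are invented for illustration; this is not the authors’ analysis.

```python
# Toy scoring of believability change by condition; data are made up.
from statistics import mean

ratings = [
    # (condition, pre_rating, post_rating)
    ("mentioned", 4, 6), ("mentioned", 5, 6),
    ("related",   4, 3), ("related",   5, 4),
    ("control",   4, 4), ("control",   3, 3),
]

for condition in ("mentioned", "related", "control"):
    deltas = [post - pre for cond, pre, post in ratings if cond == condition]
    print(f"{condition:>9}: mean believability change = {mean(deltas):+.2f}")
```

With these toy numbers the output mirrors the reported pattern: mentioned beliefs go up, related-but-unmentioned beliefs go down, and untouched categories stay flat.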

The results are particularly meaningful for policymakers interested in having an impact at a community level, especially for health-relevant inaccurate beliefs. Coman and his collaborators are currently expanding upon this study, looking at 12-member groups where people are exchanging information in a lab-created social network.

Abstract of the study:

Belief endorsement is rarely a fully deliberative process. Oftentimes, one’s beliefs are influenced by superficial characteristics of the belief evaluation experience. Here, we show that by manipulating the mnemonic accessibility of particular beliefs we can alter their believability. We use a well-established socio-cognitive paradigm (i.e., the social version of the selective practice paradigm) to increase the mnemonic accessibility of some beliefs and induce forgetting in others. We find that listening to a speaker selectively practicing beliefs results in changes in believability. Beliefs that are mentioned become mnemonically accessible and exhibit an increase in believability, while beliefs that are related to those mentioned experience mnemonic suppression, which results in decreased believability. Importantly, the latter effect occurs regardless of whether the belief is scientifically accurate or inaccurate. Furthermore, beliefs that are endorsed with moderate-strength are particularly susceptible to mnemonically-induced believability changes. These findings, we argue, have the potential to guide interventions aimed at correcting misinformation in vulnerable communities.

Leave a comment

Filed under Myths, Research

Funny on Sunday: teachers in September vs…

Leave a comment

Filed under Education, Funny

Rethinking Technology in Education

Robert Slavin's Blog

Antoine de Saint-Exupéry, in his 1931 classic Night Flight, had a wonderful line about early airmail service in Patagonia, South America:

“When you are crossing the Andes and your engine falls out, well, there’s nothing to do but throw in your hand.”


I had reason to think about this quote recently, as I was attending a conference in Santiago, Chile, the presumed destination of the doomed pilot. The conference focused on evidence-based reform in education.

Three of the papers described large scale, randomized evaluations of technology applications in Latin America, funded by the Inter-American Development Bank (IDB). Two of them documented disappointing outcomes of large-scale, traditional uses of technology. One described a totally different application.

One of the studies, reported by Santiago Cueto (Cristia et al., 2017), randomly assigned 318 high-poverty, mostly rural primary schools in Peru to receive sturdy, low-cost, practical computers, or to serve as a…

View original post 1,152 more words

2 Comments

Filed under Education