Category Archives: Review

About nature versus nurture: the four laws of behavioural genetics

This tweet by Steve Stewart-Williams is so relevant that I wanted to share it here on the blog, as I know a lot of people who follow my posts aren’t on Twitter.

If you feel angry after reading the first two laws, do read on. Both articles mentioned in the tweet are also must-reads.

2 Comments

Filed under Psychology, Research, Review

A lecture by and a conversation with Steven Pinker: why we should be happier with the world today

Leave a comment

Filed under Media literacy, Myths, Research, Review, Trends

A new review study about Collaborative Problem Solving, still a lot to research

I found this new review study via Jeroen Janssen, and it’s quite interesting as it tackles Collaborative Problem Solving (CPS) and how much is already known about how to advance it. Spoiler: not that much, it seems, even though it is now being measured, e.g. by PISA.

But first, what is it? That is already a more difficult question to answer, but I do think this description of what is needed makes it clearer:

…team members must be able to define the problem, understand who knows what on the team, identify gaps in what is known and what is required, integrate these to generate candidate solutions, and monitor progress in achieving the group goals. From the social perspective, the success of a team requires that members establish shared understanding, pursue joint and complementary actions, and coordinate their behavior in service of generating and evaluating solutions.

So what is (un)known? This overview at the end of the article, with suggestions for further research, makes things a lot clearer:

  1. CPS has been identified as an important skill in the international community and workforce, but recent assessments have revealed that students and adults have low CPS proficiency. This calls for an analysis of CPS mechanisms, frequent problems, and methods of solving these problems. Psychological scientists could play a major role in this broad effort by partnering with stakeholders.
  2. CPS is rarely trained in schools and the workforce, and the existing training is not informed by psychological science. This opens the door to the value of psychological scientists’ being part of national and international efforts drawing from their expertise in science, learning and training.
  3. Psychological scientists have developed a body of empirical research and theory of team science over the years, but much of this work has focused on group learning, work, memory, and decision making rather than CPS per se. We need to sort out how much of the existing research in team science applies to CPS. Psychological scientists are encouraged to direct their focus on CPS per se in the team science research landscape.
  4. Intelligent digital technologies have the potential to automatically analyze large samples of group interactions at multiple levels of language, discourse, and interactivity. This is landmark progress because existing research on teams has had small samples and time-consuming annotation of the interactions. There is a need for psychological scientists to partner with the developers of these technologies to recommend psychological characteristics to track and to scrutinize the validity of automated measures.
  5. A curriculum for training CPS competencies has not been developed and adequately tested. There is a need to develop a program of research on CPS curriculum design for both students and instructors. Psychological scientists are an important asset to generate potential curricula and to test their efficacies.

Abstract of the open access review:

Collaborative problem solving (CPS) has been receiving increasing international attention because much of the complex work in the modern world is performed by teams. However, systematic education and training on CPS is lacking for those entering and participating in the workforce. In 2015, the Programme for International Student Assessment (PISA), a global test of educational progress, documented the low levels of proficiency in CPS. This result not only underscores a significant societal need but also presents an important opportunity for psychological scientists to develop, adopt, and implement theory and empirical research on CPS and to work with educators and policy experts to improve training in CPS. This article offers some directions for psychological science to participate in the growing attention to CPS throughout the world. First, it identifies the existing theoretical frameworks and empirical research that focus on CPS. Second, it provides examples of how recent technologies can automate analyses of CPS processes and assessments so that substantially larger data sets can be analyzed and so students can receive immediate feedback on their CPS performance. Third, it identifies some challenges, debates, and uncertainties in creating an infrastructure for research, education, and training in CPS. CPS education and assessment are expected to improve when supported by larger data sets and theoretical frameworks that are informed by psychological science. This will require interdisciplinary efforts that include expertise in psychological science, education, assessment, intelligent digital technologies, and policy.

3 Comments

Filed under Education, Research, Review

Very interesting and relevant talk by Robert A. Bjork on learning, memory and forgetting (and how bad we are at judging effective learning)

H/T to @triciatailored

1 Comment

Filed under Education, Research, Review

What do you need to succeed in life?

The answer is, of course, sheer luck, besides talent and intelligence. This new systematic review doesn’t say intelligence and talent aren’t needed, but suggests that non-cognitive skills can also be important, although there are some serious warning lights surrounding the existing body of evidence.

From the press release:

The study, published in the journal Nature Human Behaviour, is the first to systematically review the entire literature on the effects of non-cognitive skills in children aged 12 or under on later outcomes in their lives, such as academic achievement and cognitive and language ability.

“Traits such as attention, self-regulation, and perseverance in childhood have been investigated by psychologists, economists, and epidemiologists, and some have been shown to influence later life outcomes,” says Professor John Lynch, School of Public Health, University of Adelaide and senior author of the study.

“There is a wide range of existing evidence underpinning the role of non-cognitive skills and how they affect success in later life, but it’s far from consistent,” he says.

One of the study’s co-authors, Associate Professor Lisa Smithers, School of Public Health, University of Adelaide says: “There is tentative evidence from published studies that non-cognitive skills are associated with academic achievement, psychosocial, and cognitive and language outcomes, but cognitive skills are still important.”

One of the strongest findings of their systematic review was that the quality of evidence in this field is lower than desirable. Of over 550 eligible studies, only about 40% were judged to be of sufficient quality.

“So, while interventions to build non-cognitive skills may be important, particularly for disadvantaged children, the existing evidence base underpinning this field has the potential for publication bias and needs to have larger studies that are more rigorously designed. That has important implications for researchers and funding agencies who wish to study effects of non-cognitive skills,” says Professor Lynch.

Abstract of the study:

Success in school and the labour market relies on more than high intelligence. Associations between ‘non-cognitive’ skills in childhood, such as attention, self-regulation and perseverance, and later outcomes have been widely investigated. In a systematic review of this literature, we screened 9,553 publications, reviewed 554 eligible publications and interpreted results from 222 better-quality publications. Better-quality publications comprised randomized experimental and quasi-experimental intervention studies (EQIs) and observational studies that made reasonable attempts to control confounding. For academic achievement outcomes, there were 26 EQI publications but only 14 were available for meta-analysis, with effects ranging from 0.16 to 0.37 s.d. However, within subdomains, effects were heterogeneous. The 95% prediction interval for literacy was consistent with negative, null and positive effects (−0.13 to 0.79). Similarly, heterogeneous findings were observed for psychosocial, cognitive and language, and health outcomes. Funnel plots of EQIs and observational studies showed asymmetric distributions and potential for small study bias. There is some evidence that non-cognitive skills associate with improved outcomes. However, there is potential for small study and publication bias that may overestimate true effects, and the heterogeneity of effect estimates spanned negative, null and positive effects. The quality of evidence from EQIs underpinning this field is lower than optimal and more than one-third of observational studies made little or no attempt to control confounding. Interventions designed to develop children’s non-cognitive skills could potentially improve outcomes. The interdisciplinary researchers interested in these skills should take a more strategic and rigorous approach to determine which interventions are most effective.
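
A quick note on the “95% prediction interval” mentioned in that abstract, because the term is less familiar than a confidence interval: it estimates the range in which the effect of a new, comparable study would be expected to fall, so an interval running from −0.13 to 0.79 really does mean the next literacy study could plausibly find a negative, null or positive effect. Below is a minimal sketch of the standard random-effects calculation (following Higgins, Thompson & Spiegelhalter, 2009); the numbers are purely illustrative and are not taken from the review:

```python
from math import sqrt
from scipy.stats import t

def prediction_interval(mu, se_mu, tau2, k, level=0.95):
    """Prediction interval for a random-effects meta-analysis.

    mu    -- pooled effect estimate (in standard deviation units here)
    se_mu -- standard error of the pooled estimate
    tau2  -- estimated between-study variance (tau squared)
    k     -- number of studies in the meta-analysis
    """
    t_crit = t.ppf(1 - (1 - level) / 2, df=k - 2)  # t quantile with k-2 df
    half_width = t_crit * sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

# Purely illustrative numbers, NOT the review's data: a pooled effect of
# 0.30 SD with standard error 0.08, between-study variance 0.05, 14 studies.
low, high = prediction_interval(mu=0.30, se_mu=0.08, tau2=0.05, k=14)
print(f"95% prediction interval: {low:.2f} to {high:.2f}")
```

The key point is the between-study variance term: the more the individual studies disagree with each other, the wider the prediction interval becomes, even when the pooled estimate itself looks precise.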

1 Comment

Filed under Education, Psychology, Research, Review

Going viral… in academia? Prestige rules most of the time

This study actually answers a question that I’ve had for quite a while: how come some ideas move through academia even if they’re not that good, while great insights sometimes seem to take ages to get around? The new study from Allison Morgan and her colleagues suggests an answer that is closely related to both epidemiology and memes, but that has most to do with… prestige – and once in a while with the quality of the idea.

From the press release:

How ideas move through academia may depend on where those ideas come from–whether from big-name universities or less prestigious institutions–as much as their quality, a recent study from the University of Colorado Boulder suggests.

The new research borrows a page from epidemiology, exploring how ideas might flow from university to university, almost like a disease. The findings from CU Boulder’s Allison Morgan and her colleagues suggest that the way that universities hire new faculty members may give elite schools an edge in spreading their research to others.

In particular, the team simulated how ideas might spread out faster from highly-ranked schools than from those at the bottom of the pile–even when the ideas weren’t that good. The results suggest that academia may not function like the meritocracy that some claim, said Morgan, a graduate student in the Department of Computer Science.

She and her colleagues began by drawing on a dataset, originally published in 2015, that described the hiring histories of more than 5,000 faculty members in 205 computer science programs in the U.S. and Canada.

That dataset revealed what might be a major power imbalance in the field–with a small number of universities training the majority of tenure track faculty across both countries.

“This paper was really about investigating the implications of the imbalance,” Morgan said. “What does it mean if the elite institutions are producing the majority of the faculty who are, in turn, training the future teachers in the field?”

To answer that question, the researchers turned the 2015 dataset into a network of connected universities. If a university placed one of its Ph.D. students in a job at another school, then those two schools were linked. The resulting “roadmap” showed how faculty might carry ideas from their graduate schools to the universities that hired them.

The researchers then ran thousands of simulations on that network, allowing ideas that began at one school to percolate down to others. The team adjusted for the quality of ideas by making some more likely to shift between nodes than others.

The findings, published in October in the journal EPJ Data Science, show that it matters where an idea gets started. When mid-level ideas began at less prestigious schools, they tended to stall, not reaching the full network. The same wasn’t true for so-so thinking from major universities.

“If you start a medium- or low-quality idea at a prestigious university, it goes much farther in the network and can infect more nodes than an idea starting at a less prestigious university,” Morgan said.

That pattern held up even when the researchers introduced a bit of randomness to the mix–allowing ideas to pop from one end of the network to another by chance. That simulated how university departments might learn about an idea through factors other than hiring, such as journals, conferences or word of mouth.
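
To make the mechanics a bit more concrete: what the researchers describe is essentially an epidemic-style (SI-type) simulation on the hiring network, with a transmission probability standing in for the quality of the idea and occasional random jumps standing in for journals, conferences and word of mouth. Below is a minimal sketch of that kind of simulation; the network, the school names and the parameter values are invented for illustration and are not the authors’ actual model or data:

```python
import random

# Toy sketch of an epidemic-style spread of an idea over a faculty hiring
# network (my reading of the press release, not the authors' code).
# Nodes are departments; a directed edge points from the department that
# trained a Ph.D. to a department that hired that person. All names and
# numbers below are made up for illustration.
hiring_network = {
    "EliteU":  ["MidU1", "MidU2", "SmallU1", "SmallU2"],
    "MidU1":   ["SmallU1", "SmallU3"],
    "MidU2":   ["SmallU2"],
    "SmallU1": [],
    "SmallU2": [],
    "SmallU3": [],
}

def simulate_spread(network, origin, quality, jump_prob=0.01, steps=50, seed=None):
    """Return the set of departments an idea reaches.

    quality   -- probability the idea is transmitted along a hiring tie per step
    jump_prob -- per-step chance an "infected" department passes the idea to a
                 random department (journals, conferences, word of mouth)
    """
    rng = random.Random(seed)
    infected = {origin}
    for _ in range(steps):
        newly = set()
        for dept in infected:
            # Spread along hiring ties, more readily for better ideas.
            for neighbour in network[dept]:
                if neighbour not in infected and rng.random() < quality:
                    newly.add(neighbour)
            # Occasional long-range jump, independent of the hiring network.
            if rng.random() < jump_prob:
                target = rng.choice(list(network))
                if target not in infected:
                    newly.add(target)
        infected |= newly
    return infected

# The same medium-quality idea, seeded at the hypothetical elite school
# versus at a peripheral one:
print(len(simulate_spread(hiring_network, "EliteU", quality=0.5, seed=1)))
print(len(simulate_spread(hiring_network, "SmallU3", quality=0.5, seed=1)))
```

In this toy network the “elite” department simply has more outgoing hiring ties, which is already enough to let the same medium-quality idea reach more departments when it is seeded at the top of the hierarchy than when it starts at the periphery.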

The results seem to paint a dim picture of academia, said study coauthor Samuel Way, a postdoctoral research associate in computer science. He explained that recent sociological research demonstrates that workplaces benefit by having a lot of diversity–whether in gender, race or in how employees are trained.

“If you have five people who all have the exact same training and look at the world through the same lens, and you give them a problem that stumps one of them, it might stump all of them,” Way said.

He added that it may be possible for the academic world to blunt the impact of the sorts of biases the team revealed, including by adopting practices like double-blind peer review–in which the reviewers of a study can’t see the names or affiliations of the authors.

“In a setting like science where it’s incredibly difficult to come up with an objective measure of the quality of an idea, double-blind peer review may be the best you can do,” Way said.

The study did, however, contain a bit of good news: The bias toward big-name universities mattered a lot less for high-quality ideas. In other words, great thinking can still catch fire in academia, no matter where it comes from.

“I think it’s heartwarming in a way,” Morgan said. “We see that if you have a high-quality idea, and you’re from the bottom of the hierarchy, you have as good a chance of sending that idea across the network, as if it came from the top.”

Abstract of the study:

The spread of ideas in the scientific community is often viewed as a competition, in which good ideas spread further because of greater intrinsic fitness, and publication venue and citation counts correlate with importance and impact. However, relatively little is known about how structural factors influence the spread of ideas, and specifically how where an idea originates might influence how it spreads. Here, we investigate the role of faculty hiring networks, which embody the set of researcher transitions from doctoral to faculty institutions, in shaping the spread of ideas in computer science, and the importance of where in the network an idea originates. We consider comprehensive data on the hiring events of 5032 faculty at all 205 Ph.D.-granting departments of computer science in the U.S. and Canada, and on the timing and titles of 200,476 associated publications. Analyzing five popular research topics, we show empirically that faculty hiring can and does facilitate the spread of ideas in science. Having established such a mechanism, we then analyze its potential consequences using epidemic models to simulate the generic spread of research ideas and quantify the impact of where an idea originates on its long-term diffusion across the network. We find that research from prestigious institutions spreads more quickly and completely than work of similar quality originating from less prestigious institutions. Our analyses establish the theoretical trade-offs between university prestige and the quality of ideas necessary for efficient circulation. Our results establish faculty hiring as an underlying mechanism that drives the persistent epistemic advantage observed for elite institutions, and provide a theoretical lower bound for the impact of structural inequality in shaping the spread of ideas in science.

1 Comment

Filed under Education, Myths, Research, Review

A dissection of Howard Gardner’s Frames

This Twitter-rant is too good not to share here (H/T Tim van der Zee):


2 Comments

Filed under Myths, Psychology, Review

How can schools optimize support for children with ADHD? A new review

With all the craze about meta-analyses, people sometimes forget that a good systematic review can be even more helpful. This new systematic review (which even includes some meta-analysis), led by the University of Exeter, seems no exception, as it aims to give guidance on how schools can best support children with ADHD to improve symptoms and maximise their academic outcomes.

From the press release:

The study, led by the University of Exeter and involving researchers at the EPPI-Centre (University College London), undertook a systematic review which analysed all available research into non-medication measures to support children with ADHD in schools. Published in Review of Education, the paper found that interventions which include one-to-one support and a focus on self-regulation improved academic outcomes.

Around five per cent of children have ADHD, meaning most classrooms will include at least one child with the condition. They struggle to sit still, focus their attention and to control impulses much more than ordinary children of the same age. Schools can be a particularly challenging setting for these children, and their difficulty in waiting their turn or staying in their seat impacts peers and teachers. Research shows that medication is effective, but does not work for all children, and is not acceptable to some families.

The research was funded by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care (CLAHRC) South West Peninsula – or PenCLAHRC. The team found 28 randomised control trials on non-drug measures to support children with ADHD in schools. In a meta-analysis, they analysed the different components of the measures being carried out to assess the evidence for what was most effective.

The studies varied in quality, which limits the confidence the team can have in their results. They found that important aspects of successful interventions for improving the academic outcomes of children are when they focus on self-regulation and are delivered in one-to-one sessions.

Self-regulation is hard for children who are very impulsive and struggle to focus attention. Children need to learn to spot how they are feeling inside, to notice triggers and avoid them if possible, and to stop and think before responding. This is much harder for children with ADHD than for most other children, but these are skills that can be taught and learned.

The team also found some promising evidence for daily report cards. Children are set daily targets which are reviewed via a card that the child carries between home and school and between lessons in school. Rewards are given for meeting targets. The number of studies looking at this was lower, and their findings did not always agree. But using a daily report card is relatively cheap and easy to implement. It can encourage home-school collaboration and offers the flexibility to respond to a child’s individual needs.

Tamsin Ford, Professor of Child Psychiatry at the University of Exeter Medical School, said: “Children with ADHD are of course all unique. It’s a complex issue and there is no one-size-fits-all approach. However, our research gives the strongest evidence to date that non-drug interventions in schools can support children to meet their potential in terms of academic and other outcomes. More and better quality research is needed but in the meantime, schools should try daily report cards and to increase children’s ability to regulate their emotions. These approaches may work best for children with ADHD by one-to-one delivery.”

Abstract of the full paper:

Non-pharmacological interventions for attention-deficit/hyperactivity disorder are useful treatments, but it is unclear how effective school-based interventions are for a range of outcomes and which features of interventions are most effective. This paper systematically reviews randomized controlled trial evidence of the effectiveness of interventions for children with ADHD in school settings. Three methods of synthesis were used to explore the effectiveness of interventions, whether certain types of interventions are more effective than others and which components of interventions lead to effective academic outcomes. Twenty-eight studies (n=1,807) were included in the review. Eight types of interventions were evaluated and a range of different ADHD symptoms, difficulties and school outcomes were assessed across studies. Meta-analysis demonstrated beneficial effects for interventions that combine multiple features (median effect size g=0.37, interquartile range 0.32, range 0.09 to 1.13) and suggest some promise for daily report card interventions (median g=0.62, IQR=0.25, range 0.13 to 1.62). Meta-regression analyses did not give a consistent message regarding which types of interventions were more effective than others. Finally, qualitative comparative analysis demonstrated that self-regulation and one-to-one intervention delivery were important components of interventions that were effective for academic outcomes. These two components were not sufficient though; when they appeared with personalisation for individual recipients and delivery in the classroom, or when interventions did not aim to improve child relationships, interventions were effective. This review provides updated information about the effectiveness of non-pharmacological interventions specific to school settings and gives tentative messages about important features of these interventions for academic outcomes.
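
For readers who don’t juggle effect sizes every day: the g values in that abstract refer to Hedges’ g, the standardised mean difference used in meta-analysis (the gap between the intervention and control group means divided by their pooled standard deviation) with a small-sample correction. A minimal sketch of the calculation, with made-up numbers rather than data from the review:

```python
from math import sqrt

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: standardised mean difference with a small-sample correction."""
    # Pooled standard deviation of the intervention (t) and control (c) groups
    sd_pooled = sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled            # Cohen's d
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample (bias) correction
    return d * correction

# Made-up example: intervention group scores 52 (SD 10, n=30) on some academic
# outcome, control group scores 48 (SD 11, n=30).
print(round(hedges_g(52, 48, 10, 11, 30, 30), 2))  # roughly 0.38
```

The review then summarises these study-level g values with medians and interquartile ranges across studies rather than a single pooled estimate.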


Leave a comment

Filed under Education, Research, Review

Best Evidence in Brief: Effective programs in elementary math

There is a new Best Evidence in Brief and this time I pick this study:

Marta Pellegrini from the University of Florence and Cynthia Lake, Amanda Inns, and Robert E. Slavin from our own Johns Hopkins Center for Research and Reform in Education have released a new report on effective programs in elementary math. The report reviews research on the mathematics achievement outcomes of all programs with at least one study meeting the inclusion criteria of the review. A total of 78 studies were identified that evaluated 61 programs in grades K-5.
The studies were very high in quality, with 65 (83%) randomized and 13 (17%) quasi-experimental evaluations. Key findings were as follows:
  • Particularly positive outcomes were found for tutoring programs.
  • One-to-one and one-to-small group models had equal impacts, as did teachers and paraprofessionals as tutors.
  • Technology programs showed modest positive impacts.
  • Professional development approaches focused on helping teachers gain in understanding of math content and pedagogy had no impact on student achievement, but more promising outcomes were seen in studies focused on instructional processes, such as cooperative learning.
  • Whole-school reform, social-emotional approaches, math curricula, and benchmark assessment programs found few positive effects, although there were one or more effective individual approaches in most categories.
The findings suggest that programs emphasizing personalization, engagement, and motivation are most impactful in elementary mathematics instruction, while strategies focused on textbooks, professional development for math knowledge or pedagogy, and other strategies that do not substantially impact students’ daily experiences have little impact.

Leave a comment

Filed under Education, Research, Review

Yes, retrieval and testing do work, but what limits the effect? Insights from a new meta-analysis

A new meta-analysis confirms that memory retrieval can be beneficial for learning, but it also shows there are limits:

  • The frequency and difficulty of the questions matter.
  • Simply asking a question is not enough; students must respond to see a positive effect on learning.

You probably now want to know: how much is too much? Well, you’re in for a bit of a disappointment, I’m afraid, as you can learn from the press release, which explains it better than the original paper imho:

“Frequency is a critical factor. There appears to be a trade-off in how often you test students,” Chan said. “If I lecture nonstop throughout class, this lessens their ability to learn the material. However, too many questions, too often, can have a detrimental effect, but we don’t yet know exactly why that happens or how many questions is too many.”

The answer to that question may depend on the length of the lecture and the type or difficulty of the material, Chan said. Given the different dynamics of a class lecture, it may not be possible to develop a universal lecture-to-question ratio. Regardless, Chan says testing students throughout the lecture is a simple step instructors at any level and in any environment can apply to help students learn.

“This is a cheap, effective method and anyone can implement it in their class,” he said. “You don’t need to give every student an iPad or buy some fancy software – you just need to ask questions and have students answer them in class.”

Chan; Christian Meissner, a professor of psychology at Iowa State; and Sara Davis, a postdoctoral fellow at Skidmore College and former ISU graduate student, examined journal articles from the 1970s to 2016 detailing more than 150 different experiments for their analysis. The researchers looked at what factors influenced the magnitude of this effect, when it happens and when the effect is reversed.

Why testing helps

There are several explanations as to why testing students is beneficial for new learning. The researchers evaluated four main theories for the meta-analysis to examine the strengths and weaknesses of these explanations from the existing research. The data strongly supported what the researchers called the integration theory.

“This theory claims that testing enhances future learning by facilitating the association between information on the test and new, especially related, information that is subsequently studied, leading to spontaneous recall of the previously tested information when they learn related information,” Meissner said. “When this testing occurs, people can better tie new information with what they have learned previously, leading them to integrate the old and the new.”

Learning new information requires an encoding process, which is different from the process needed to retrieve that information, the researchers explained. Students are forced to switch between the two when responding to a question. Changing the modes of operation appears to refocus attention and free the brain to do something different.

A majority of the studies in the analysis focused on college students, but some also included older adults, children and people with traumatic brain injuries. The researchers were encouraged to find that testing could effectively enhance learning across all these groups.

“Memory retrieval can optimize learning in situations that require people to maintain attention for an extended period of time. It can be used in class lectures as well as employee training sessions or online webinars,” Davis said. “Future research could examine factors that can maximize this potential.”

Abstract of the meta-analysis:

A growing body of research has shown that retrieval can enhance future learning of new materials. In the present report, we provide a comprehensive review of the literature on this finding, which we term test-potentiated new learning. Our primary objectives were to (a) produce an integrative review of the existing theoretical explanations, (b) summarize the extant empirical data with a meta-analysis, (c) evaluate the existing accounts with the meta-analytic results, and (d) highlight areas that deserve further investigations. Here, we identified four nonexclusive classes of theoretical accounts, including resource accounts, metacognitive accounts, context accounts, and integration accounts. Our quantitative review of the literature showed that testing reliably potentiates the future learning of new materials by increasing correct recall or by reducing erroneous intrusions, and several factors have a powerful impact on whether testing potentiates or impairs new learning. Results of a metaregression analysis provide considerable support for the integration account. Lastly, we discuss areas of under-investigation and possible directions for future research.

1 Comment

Filed under Education, Research, Review