Category Archives: Psychology

Well, no surprise, but also no cigar: more education linked to better cognitive functioning later in life

This is a strange study. Researchers from the University of California, Berkeley used data from 196,000 Lumosity users. That is a big group, which already makes the study interesting, but hold your horses: I do think there are some major issues.

First read this excerpt from the press release and see if you can spot the mistake:

The study, led by University of California, Berkeley, researchers, examined relationships between educational attainment, cognitive performance and learning in order to quantify the cumulative effect of attending school.

Its findings suggest that higher levels of education may help stave off age-related cognitive decline. In addition, the team found that education didn’t have a large impact on novel learning, or learning something new at various points in time.

The work, which reviewed the performance of around 196,000 subscribers to Lumosity online brain-training games, is believed to be the largest to date to evaluate cognitive effects of prior educational experience on past and future performance. Researchers said their findings may be of value to psychologists, sociologists, neuroscientists, education researchers and policymakers.

Grading educational achievement

Conventional wisdom has long accepted that higher education is likely to boost incomes and helps prepare individuals for a workplace with often-changing skill sets. Yet fewer than 40 percent of adults in the United States are expected to graduate from college in their lifetimes, and the percentage declines for more advanced degrees.

Until now, research has been inconclusive about the cognitive impacts of higher education and whether the quantity of schooling can influence the acquisition and maintenance of cognitive skills over time.

The researchers of the paper, which appears in the August 23 edition of PLOS ONE, are Silvia Bunge, a professor of psychology at UC Berkeley and at the Helen Wills Neuroscience Institute; Belén Guerra-Carrillo, a graduate student in Bunge’s Building Blocks of Cognition Laboratory and a National Science Foundation Fellow; and Kiefer Katovich, who was a statistician with Lumos Labs while the study was conducted.

Bunge and her team say higher levels of education are strong predictors of better cognitive performance across the 15- to 60-year-old age range of their study participants, and appear to boost performance more in areas such as reasoning than in terms of processing speed.

The study’s findings are consistent with prior evidence that the brain adapts in response to challenges, a phenomenon called “experience-dependent brain plasticity.” Based on the principles of plasticity, the authors predicted improvements in cognitive skills that are repeatedly taxed in demanding, cognitively engaging coursework.

Differences in performance were moderate for test subjects with a bachelor’s degree compared to those with a high school diploma, and large for those with doctorates compared to those with only some high school education.

The researchers noted that people from lower educational backgrounds learned novel tasks nearly as well as those from higher ones.

“The fact that the cognitive tests were not similar to what is learned in school is a strength of the study: It speaks to the idea that schooling doesn’t merely impart knowledge – it also provides the opportunity to sharpen core cognitive skills,” said Bunge.

The researchers analyzed anonymized data collected from around 196,000 Lumosity subscribers in the United States, Canada and Australia who came from a range of educational attainment levels and diverse backgrounds. As part of their subscription, participants complete eight behavioral assessments of executive functioning and reasoning that are unrelated to educational curricula.

The research team also looked closely at a subset of nearly 70,000 subscribers who finished Lumosity’s behavioral assessments a second time after about 100 days of additional cognitive training. These before-and-after assessments measured cognitive performance in areas such as working memory, thinking quickly, responding flexibly to task goals and both verbal and non-verbal reasoning.

“Given the size and wide age range of our sample, it was possible to test whether these age effects are influenced by education – and, importantly, to determine how the cognitive effects of educational attainment differ across the lifespan, as one’s experience with formal education recedes into the past and is supplanted by other life experiences,” the team wrote.

Bunge said that collaborating with Lumosity was a golden opportunity to analyze data from around 196,000 participants – an anonymized dataset that would have taken a lifetime to collect in a laboratory.

Did you spot it? I actually do think education can play a large role in this, but how can the researchers know what the status of those executive functions was before education? Even more: if those executive functions are stable from a certain age onward, it becomes even harder to tell.

But there is another issue, if you take a look at the abstract of the study (italics mine):

Attending school is a multifaceted experience. Students are not only exposed to new knowledge but are also immersed in a structured environment in which they need to respond flexibly in accordance with changing task goals, keep relevant information in mind, and constantly tackle novel problems. To quantify the cumulative effect of this experience, we examined retrospectively and prospectively, the relationships between educational attainment and both cognitive performance and learning. We analyzed data from 196,388 subscribers to an online cognitive training program. These subscribers, ages 15–60, had completed eight behavioral assessments of executive functioning and reasoning at least once. Controlling for multiple demographic and engagement variables, we found that higher levels of education predicted better performance across the full age range, and modulated performance in some cognitive domains more than others (e.g., reasoning vs. processing speed). Differences were moderate for Bachelor’s degree vs. High School (d = 0.51), and large between Ph.D. vs. Some High School (d = 0.80). Further, the ages of peak cognitive performance for each educational category closely followed the typical range of ages at graduation. This result is consistent with a cumulative effect of recent educational experiences, as well as a decrement in performance as completion of schooling becomes more distant. To begin to characterize the directionality of the relationship between educational attainment and cognitive performance, we conducted a prospective longitudinal analysis. For a subset of 69,202 subscribers who had completed 100 days of cognitive training, we tested whether the degree of novel learning was associated with their level of education. Higher educational attainment predicted bigger gains, but the differences were small (d = 0.04–0.37). Altogether, these results point to the long-lasting trace of an effect of prior cognitive challenges but suggest that new learning opportunities can reduce performance gaps related to one’s educational history.
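A quick aside for readers who want a handle on those d-values: Cohen’s d is simply the standardized mean difference between two groups. Here is a minimal sketch of how such an effect size is computed – my own illustration with made-up numbers, not the authors’ code:

    import math

    def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
        """Standardized mean difference between two groups, using the pooled SD."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    # Hypothetical example: two groups of test-takers on the same cognitive assessment
    d = cohens_d(mean1=105, sd1=15, n1=1000, mean2=100, sd2=15, n2=1000)
    print(round(d, 2))  # 0.33 – a third of a standard deviation difference

By Cohen’s rough benchmarks, d ≈ 0.2 is small, 0.5 is medium and 0.8 is large, which is why the learning gains of d = 0.04–0.37 in the longitudinal part of the study read as modest.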

Well, “pointing” is one way of describing it. “Not a really big effect – and so maybe not really suggesting that much” is another way to put it, and it sits a bit oddly next to what they used earlier on to explain why their data and study are so interesting:

“The fact that the cognitive tests were not similar to what is learned in school is a strength of the study: It speaks to the idea that schooling doesn’t merely impart knowledge – it also provides the opportunity to sharpen core cognitive skills,”

Yeah, but this isn’t the case for the pre- and post-test of the Lumosity part of this study, and certainly not if you look at previous recent research that has shown this tool has no effect on decision-making and no effect on cognitive function beyond practice effects on the training tasks.

So what we have here is a big dataset, with no way to check whether people lied, with a big selection effect (these people chose to use a brain-training tool) and without any information about their functioning before their education. But OK, we have a big dataset.


2 Comments

Filed under Education, Psychology, Research, Review

Almost all cognitive abilities are positively related, including in adolescence

If you believe in talents, then you might think that missing one talent can be compensated for by being better in another field. Sadly, some people simply have more than others, as almost all cognitive abilities are positively related. A new study confirms this: it shows that cognitive abilities – in this case vocabulary and matrix reasoning – seem to reinforce each other in adolescents and young adults.

From the press release:

One of the most striking findings in psychology is that almost all cognitive abilities are positively related – on average, people who are better at a skill like reasoning are generally also better at a skill like vocabulary. This fact allows scientists and educational practitioners to summarize people’s skills on a wide range of domains as one factor – often called ‘g’, for ‘general intelligence’. Despite this, the mechanisms underlying ‘g’ and its development remain somewhat mysterious.

“What this so-called ‘g-factor’ means is still very much up for debate,” explains researcher Rogier Kievit of the Cognition and Brain Sciences Unit at the University of Cambridge. “Is it a causal factor, an artefact of the way we create cognitive tests, the result of our educational environment, a consequence of genetics, an emergent phenomenon of a dynamic system or perhaps all of these things to varying degrees?”

In a new study, scientists from Cambridge, London, and Berlin led by Kievit directly compared different proposed explanations for the phenomenon of ‘g’ and how it develops over time. They used data from a Wellcome-funded longitudinal cohort (NSPN), in which 785 late adolescents, ages 14 to 24, were tested on two occasions approximately 1.5 years apart. They focused on two subtests reflecting key domains of ‘g’, namely fluid reasoning (solving abstract puzzles) and vocabulary (knowing the definitions of words). Their findings are published in Psychological Science, a journal of the Association for Psychological Science.

The team observed that the best explanation for the improvement in skills over time was the so-called ‘mutualism’ model. This model proposes that cognitive abilities help each other during development: In other words, better reasoning skills allow individuals to improve their vocabulary more quickly, and better vocabularies are associated with faster improvement in reasoning ability.

These findings are crucial to our understanding of cognitive abilities, as they suggest that small differences early on in childhood may lead to larger differences later on, and help partially explain how ‘g’ arises.

The work has implications for important outcomes in adolescence.

“Our findings may be relevant for early detection of developmental challenges,” says Kievit. “Often screening tests for difficulties focus only on individual outcomes (i.e., ‘Is a child achieving the desired level on some test?’), but studying the dynamics between cognitive domains is likely to paint a richer, more accurate picture of the expected trajectory of development.”

And the findings may also shed light on more long-term life outcomes.

“General cognitive ability is strikingly predictive of various important life outcomes ranging from academic and professional success, to mental and physical health and even longevity – to understand why this is so, we must better understand what this g-factor really is,” Kievit explains.

The researchers note that their observations regarding links between cognitive abilities are exciting, but they do not address whether the relationships are directly causal in nature.

“We hope to further tease apart the underlying mechanisms in future work,” Kievit concludes.

Abstract of the study:

One of the most replicable findings in psychology is the positive manifold: the observation that individual differences in cognitive abilities are universally positively correlated. Investigating the developmental origin of the positive manifold is crucial to understanding it. In a large longitudinal cohort of adolescents and young adults (N = 785; n = 566 across two waves, mean interval between waves = 1.48 years; age range = 14–25 years), we examined developmental changes in two core cognitive domains, fluid reasoning and vocabulary. We used bivariate latent change score models to compare three leading accounts of cognitive development: g-factor theory, investment theory, and mutualism. We showed that a mutualism model, which proposes that basic cognitive abilities directly and positively interact during development, provides the best account of developmental changes. We found that individuals with higher scores in vocabulary showed greater gains in matrix reasoning and vice versa. These dynamic coupling pathways are not predicted by other accounts and provide a novel mechanistic window into cognitive development.
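For readers unfamiliar with the method mentioned in the abstract: a bivariate latent change score model couples the change in each ability to the prior level of the other. In rough notation – mine, not the authors’ – the mutualism account corresponds to positive cross-coupling parameters:

    \Delta \mathrm{vocab}_i     = \beta_v \, \mathrm{vocab}_i^{(1)} + \gamma_{v \leftarrow r} \, \mathrm{reasoning}_i^{(1)} + \varepsilon_{v,i}
    \Delta \mathrm{reasoning}_i = \beta_r \, \mathrm{reasoning}_i^{(1)} + \gamma_{r \leftarrow v} \, \mathrm{vocab}_i^{(1)} + \varepsilon_{r,i}

The finding that higher vocabulary predicts larger gains in matrix reasoning, and vice versa, amounts to both coupling parameters \gamma being reliably positive.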

1 Comment

Filed under Education, Psychology, Research

Best Evidence in Brief: Children with ADHD more likely to have language problems

There is a new Best Evidence in Brief, and while I skipped the previous one because the research it covered was less to my personal taste, this time there is a lot to choose from.

I picked this one first:

Children with Attention-Deficit Hyperactivity Disorder (ADHD) can have trouble with hyperactivity, impulsivity, inattention, and distractibility, all of which can affect language and communication and can lead to low academic performance and antisocial behavior.

A systematic review published in the Journal of Child Psychology and Psychiatry seeks to establish the types of language problems children with ADHD experience in order to inform future research into how these language problems contribute to long-term outcomes for children with ADHD.

Hannah Korrel and colleagues examined the last 35 years of ADHD research and identified 21 studies using 17 language measures, which included more than 2,000 participants (ADHD children = 1,209; non-ADHD children = 1,101) for inclusion in the systematic review.
The study found that children with ADHD had poorer performance than non-ADHD children on 11 of the 12 measures of overall language (effect size = 1.09). Children with ADHD also had poorer performance on measures of expressive, receptive, and pragmatic language compared with non-ADHD children.

1 Comment

Filed under Education, Psychology, Research, Review

Is our brain too complex for simple tests?

This new paper, featured this month in a special edition of Neuron, states an interesting thesis: most tasks we use today to test the brain are too simple.

From the press release:

Xaq Pitkow and Dora Angelaki, both faculty members in Baylor’s Department of Neuroscience and Rice’s Department of Electrical and Computer Engineering, said the brain’s ability to perform “approximate probabilistic inference” cannot be truly studied with simple tasks that are “ill-suited to expose the inferential computations that make the brain special.”

A new article by the researchers suggests the brain uses nonlinear message-passing between connected, redundant populations of neurons that draw upon a probabilistic model of the world. That model, coarsely passed down via evolution and refined through learning, simplifies decision-making based on general concepts and its particular biases.

The article, which lays out a broad research agenda for neuroscience, is featured this month in a special edition of Neuron, a journal published by Cell Press. The edition presents ideas that first appeared as part of a workshop at the University of Copenhagen last September titled “How Does the Brain Work?”

“Evolution has given us what we call a good model bias,” Pitkow said. “It’s been known for a couple of decades that very simple neural networks can compute any function, but those universal networks can be enormous, requiring extraordinary time and resources.

“In contrast, if you have the right kind of model — not a completely general model that could learn anything, but a more limited model that can learn specific things, especially the kind of things that often happen in the real world — then you have a model that’s biased. In this sense, bias can be a positive trait. We use it to be sensitive to the right things in the world that we inhabit. Of course, the flip side is that when our brain’s bias is not matched to reality, it can lead to severe problems.”

The researchers said simple tests of brain processes, like those in which subjects choose between two options, provide only simple results. “Before we had access to large amounts of data, neuroscience made huge strides from using simple tasks, and they’ll remain very useful,” Pitkow said. “But for computations that we think are most important about the brain, there are things you just can’t reveal with some of those tasks.” Pitkow and Angelaki wrote that tasks should incorporate more diversity — like nuisance variables and uncertainty — to better simulate real-world conditions that the brain evolved to handle.

They suggested that the brain infers solutions based on statistical crosstalk between redundant population codes. Population codes are responses by collections of neurons that are sensitive to certain inputs, like the shape or movement of an object. Pitkow and Angelaki think that to better understand the brain, it can be more useful to describe what these populations compute, rather than precisely how each individual neuron computes it. Pitkow said this means thinking “at the representational level” rather than the “mechanistic level,” as described by the influential vision scientist David Marr.

The research has implications for artificial intelligence, another interest of both researchers.

“A lot of artificial intelligence has done impressive work lately, but it still fails in some spectacular ways,” Pitkow said. “They can play the ancient game of Go and beat the best human player in the world, as done recently by DeepMind’s AlphaGo about a decade before anybody expected. But AlphaGo doesn’t know how to pick up the Go pieces. Even the best algorithms are extremely specialized. Their ability to generalize is often still pretty poor. Our brains have a much better model of the world; we can learn more from less data. Neuroscience theories suggest ways to translate experiments into smarter algorithms that could lead to a greater understanding of general intelligence.”

Abstract of the study:

It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.
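To make the “message-passing” idea a bit more concrete, here is a toy sketch of sum-product message passing on the smallest possible chain – my own illustration, and obviously nothing like the authors’ proposed neural implementation:

    import numpy as np

    # Toy chain: hidden cause X -> intermediate variable Y -> noisy observation of Y.
    # Beliefs are discrete distributions over two states.
    prior_x = np.array([0.5, 0.5])               # prior belief about X
    p_y_given_x = np.array([[0.9, 0.1],          # rows: state of X, columns: state of Y
                            [0.2, 0.8]])
    evidence_y = np.array([0.3, 0.7])            # likelihood message sent into Y by the observation

    # Message from Y up to X: sum over Y's states of P(Y|X) * evidence(Y)
    msg_y_to_x = p_y_given_x @ evidence_y

    # Posterior belief about X = prior * incoming message, renormalized
    belief_x = prior_x * msg_y_to_x
    belief_x /= belief_x.sum()
    print(belief_x)                              # updated belief about the hidden cause

The claim in the paper, as I read it, is that populations of neurons implement something functionally analogous to these local belief updates, only nonlinearly, in high dimensions and with redundant codes.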

1 Comment

Filed under Psychology, Research

Learning diversity through music

A new study states that listening to music from other cultures furthers one’s pro-diversity beliefs.

I’m not that surprised, as one of my own little studies showed a similar effect (check here).

From the press release:

Jake Harwood turned his lifelong hobby as a musician into a scholarly question: Could the sharing of music help ease interpersonal relations between people from different backgrounds, such as Americans and Arabs?

To explore the issue, and building on his years of research on intergroup communication, Harwood began collaborating two to three years ago with his graduate students and other researchers on a number of studies, finding that music is not merely a universal language. It appears to produce a humanizing effect for members of groups experiencing social and political opposition.

“Music would not have developed in our civilizations if it did not do very important things to us,” said Harwood, a professor in the University of Arizona Department of Communication. “Music allows us to communicate common humanity to each other. It models the value of diversity in ways you don’t readily see in other parts of our lives.”

Harwood is presenting his team’s research during the International Communication Association’s 67th annual conference, to be held May 25-29 in San Diego.

In one study, Harwood worked with UA graduate researchers Farah Qadar and Chien-Yu Chen to record a mock news story featuring an Arab and an American actor playing music together. The researchers showed the video clip to U.S. participants who were not Arab. The team found that when viewing the two cultures collaborating on music, individuals in the study were prone to report more positive perceptions — less of a prejudiced view — of Arabs.

“The act of merging music is a metaphor for what we are trying to do: Merging two perspectives in music, you can see an emotional connection, and its effect is universal,” said Qadar, who graduated from the UA in 2016 with a master’s degree in communication.

The team published those findings in an article, “Harmonious Contact: Stories About Intergroup Musical Collaboration Improve Intergroup Attitudes.” The article appeared in a fall issue of the peer-reviewed Journal of Communication.

Another major finding: The benefits were notable, even when individuals did not play musical instruments themselves. Merely listening to music produced by outgroup members helped reduce negative feelings about outgroup members, Harwood said.

“It’s not just about playing Arab music. But if you see an Arab person playing music that merges the boundary between mainstream U.S. and Arab, then you start connecting the two groups,” Harwood said.

As part of his ongoing research in a different study, which he will present during the International Communication Association conference, Harwood and Stefania Paolini, a senior lecturer at the University of Newcastle’s School of Psychology, measured people’s appreciation for diversity, gauging how they felt about members of other groups. After doing so, the team asked people to listen to music from other cultures and then report how much they enjoyed the music and what they perceived of the people the music represented.

The team found that people who value diversity are more likely to enjoy listening to music from other cultures, and that act of listening furthers one’s pro-diversity beliefs.

“It has this sort of spiral effect. If you value diversity, you are going to listen to more music from other cultures,” Harwood said, noting that that research is continuing. “If all you are doing is listening to the same type of music all the time, there is homogeneity that is not doing a lot to help people to increase their value for diversity.”

For Harwood and his collaborators, these findings are affirming given the decades-old world music explosion and more recent examples of performers around the world who regularly sample and cross-reference outgroup musical traditions and elements.

Harwood pointed to Paul Simon’s “Graceland” album as an early and notable example. Released in 1986, the album drew influence from South African instrumentation and rhythms.

“It was the start of the world music phenomena,” Harwood said. “Suddenly, everyone wanted to listen to African music. Then Indonesian, then Algerian music. Then you see this modeling of new music with different musical cultures and different people collaborating with each other.”

Harwood also said artists such as Eminem and Rihanna are among those who are experimenting with music that crosses cultural boundaries. “This whole new type of music is emerging that would not exist if you did not have that kind of cross-collaboration.”

Harwood also said his team’s findings build on earlier research and emergent models of intergroup dialogue that encourage direct contact and conversation to help build cross-cultural understanding and cohesion.

“We must think about music as a human, social activity rather than a sort of beautiful, aesthetic hobby and appreciate how fundamental it is to us all,” he said. “We can then begin to see people from other groups as more human and begin to recategorize one another as members of the same group.”

Abstract of one of the studies mentioned in the press release:

Watching contact between members of one’s ingroup and members of an outgroup in the media (mediated vicarious contact) improves intergroup attitudes. We compare mediated vicarious contact with observing only members of the outgroup (parasocial contact), and examine whether the activity of the portrayed contact matters. Building on theory, we predict that watching outgroup members playing music should reduce prejudice more than watching them engaged in nonmusical activities, particularly with vicarious (vs. parasocial) contact. Results show that vicarious musical contact enhances perceptions of synchronization, liking, and honesty between ingroup and outgroup actors in a video, which in turn results in more positive attitudes toward the outgroup. Counter to predictions, parasocial musical contact results in less positive outcomes than parasocial nonmusical contact.

1 Comment

Filed under Education, Psychology, Research, Youngsters

Jordan Peterson puts it quite bluntly: The Theory of Multiple Intelligences Is Rubbish!

Well, maybe you’ll be surprised, but then you haven’t read this (or our book). H/t Carl Hendrick

1 Comment

Filed under Education, Myths, Psychology, Review

Interesting: how two languages develop in the minds of young bilingual children

Bilingualism keeps fascinating me. This new study shows that the two languages of young bilingual children develop simultaneously but independently of each other. But there is more: the study also shows that Spanish is vulnerable to being taken over by English, but English is not vulnerable to being taken over by Spanish.

Btw, the study also has a video abstract.

From the press release:

A new study of Spanish-English bilingual children by researchers at Florida Atlantic University published in the journal Developmental Science finds that when children learn two languages from birth each language proceeds on its own independent course, at a rate that reflects the quality of the children’s exposure to each language.

In addition, the study finds that Spanish skills become vulnerable as children’s English skills develop, but English is not vulnerable to being taken over by Spanish. In their longitudinal data, the researchers found evidence that as the children developed stronger skills in English, their rates of Spanish growth declined. Spanish skills did not cause English growth to slow, so it’s not a matter of necessary trade-offs between two languages.

“One well established fact about monolingual development is that the size of children’s vocabularies and the grammatical complexity of their speech are strongly related. It turns out that this is true for each language in bilingual children,” said Erika Hoff, Ph.D., lead author of the study, a psychology professor in FAU’s Charles E. Schmidt College of Science, and director of the Language Development Lab. “But vocabulary and grammar in one language are not related to vocabulary or grammar in the other language.”

For the study, Hoff and her collaborators – David Giguere, a graduate research assistant at FAU, and Jamie M. Quinn, a graduate research assistant at Florida State University – used longitudinal data on children who spoke English and Spanish as first languages and who were exposed to both languages from birth. They wanted to know whether the relationship between grammar and vocabulary was specific to a language or more language-general. They measured the vocabulary and level of grammatical development of these children at six-month intervals between the ages of 2½ and 4 years.

The researchers explored a number of possibilities during the study. They thought it might be something internal to the child that causes vocabulary and grammar to develop on the same timetable or that there might be dependencies in the process of language development itself. They also considered that children might need certain vocabulary to start learning grammar and that vocabulary provides the foundation for grammar or that grammar helps children learn vocabulary. One final possibility they explored is that it may be an external factor that drives both vocabulary development and grammatical development.

“If it’s something internal that paces language development then it shouldn’t matter if it’s English or Spanish, everything should be related to everything,” said Hoff. “On the other hand, if it’s dependencies within a language of vocabulary and grammar or vice versa then the relations should be language specific and one should predict the other. That is, a child’s level of grammar should predict his or her future growth in vocabulary or vice versa.”

Turns out, the data were consistent only with the final possibility — that the rate of vocabulary and grammar development is a function of something external to the child that exerts separate influences on growth in English and Spanish. Hoff and her collaborators suggest that the most cogent explanation would be found in the properties of children’s input, or their language exposure.

“Children may hear very rich language use in Spanish and less rich use in English, for example, if their parents are more proficient in Spanish than in English,” said Hoff. “If language growth were just a matter of some children being better at language learning than others, then growth in English and growth in Spanish would be more related than they are.”

Detailed results of the study are described in the article, “What Explains the Correlation between Growth in Vocabulary and Grammar? New Evidence from Latent Change Score Analyses of Simultaneous Bilingual Development.”

“There is something about differences among the children and the quality of English they hear that make some children acquire vocabulary and grammar more rapidly in English and other children develop more slowly,” said Hoff. “I think the key takeaway from our study is that it’s not the quantity of what the children are hearing; it’s the quality of their language exposure that matters. They need to experience a rich environment.”

Abstract of the study:

A close relationship between children’s vocabulary size and the grammatical complexity of their speech is well attested but not well understood. The present study used latent change score modeling to examine the dynamic relationships between vocabulary and grammar growth within and across languages in longitudinal data from 90 simultaneous Spanish–English bilingual children who were assessed at 6-month intervals between 30 and 48 months. Slopes of vocabulary and grammar growth were strongly correlated within each language and showed moderate or nonsignificant relationships across languages. There was no evidence that vocabulary level predicted subsequent grammar growth or that the level of grammatical development predicted subsequent vocabulary growth. We propose that a common influence of properties of input on vocabulary and grammatical development is the source of their correlated but uncoupled growth. An unanticipated across-language finding was a negative relationship between level of English skill and subsequent Spanish growth. We propose that the cultural context of Spanish–English bilingualism in the US is the reason that strong English skills jeopardize Spanish language growth, while Spanish skills do not affect English growth.

2 Comments

Filed under At home, Education, Psychology, Research

I forgot what I learned in class because it hurts my psyche?

A UCLA-led study suggests people often don’t recall memories that threaten the way they want to see themselves. This may mean that students forget relevant information in order to protect their own psyches, even if that information is something they had to learn in class.

From the press release:

UCLA-led research has found that students in a college mathematics course experienced a phenomenon similar to repression, the psychological process in which people forget emotional or traumatic events to protect themselves.

In a study published online by the Journal of Educational Psychology, the researchers found that the students who forgot the most content from the class were those who reported a high level of stress during the course. But, paradoxically, the study also found that the strong relationship between stress level and the tendency to forget course material was most prevalent among the students who are most confident in their own mathematical abilities.

The phenomenon, which the authors call “motivated forgetting,” may occur because students are subconsciously protecting their own self-image as excellent mathematicians, said Gerardo Ramirez, a UCLA assistant professor of psychology and the study’s lead author.

For the study, researchers analyzed 117 undergraduates in an advanced calculus course at UCLA. The students generally consider themselves to be strong in mathematics and plan to pursue careers that rely on high-level mathematical skills, so the logical assumption would be that they would be likely to remember most of the material from the course.

Researchers asked students a series of questions at the start of the course, including having them assess to what extent they see themselves as “math people.” Each week throughout the course, students were asked to gauge how stressful they thought the course was. Then, the study’s authors examined students’ performance on the course’s final exam and on another similar test two weeks later. On average, students’ grades were 21 percent lower on the follow-up.

Among students who strongly considered themselves to be “math people,” those who experienced a lot of stress performed measurably worse on the follow-up exam than those whose stress levels were lower.

The results were striking because, in the cases of the students whose stress levels were highest, test scores dropped by as much as a full letter grade — from an A-minus to a B-minus, for example. But, according to Ramirez, the findings make sense from a psychological perspective.

“Students who found the course very stressful and difficult might have given in to the motivation to forget as a way to protect their identity as being good at math,” he said. “We tend to forget unpleasant experiences and memories that threaten our self-image as a way to preserve our psychological well-being. And ‘math people’ whose identity is threatened by their previous stressful course experience may actively work to forget what they learned.”

The idea that people are motivated to forget unpleasant experiences — activating a sort of “psychological immune system” — goes back to Sigmund and Anna Freud, the pioneers of psychoanalysis, Ramirez said.

The students who thought of themselves as excellent at math and felt high levels of stress were also more likely than other students to report that they avoided thinking about the course after it ended.

Previous studies by other researchers seem to support the concept of motivated forgetting. For example, a 2011 Harvard University study found that when people were asked to memorize an “honor code” and then pay themselves for solving a series of problems, those who cheated and overpaid themselves remembered less of the honor code at the end of the experiment than those who did not cheat.

“Motivated forgetting, or giving in to the desire to forget what we find threatening, is a defense mechanism people use against threats to the way they like to depict themselves,” Ramirez said. “The students are highly motivated to do well and can’t escape during the course, but as soon as they take their final exam, they can give in to their desire to forget and try to suppress the information.”

Ramirez said there are steps teachers can take to help students retain information. Some of them:

  • Emphasize the material’s real-world applications. This will give students incentives to remember information and review it later on. “I think we often do a poor job of showing students why the content is relevant to their lives and future job skills,” Ramirez said.
  • Cover the entire course in final exams. And not just the most recent material. “Non-cumulative exams tell students they can forget what they have already been tested on,” he said.
  • Guard against learning-by-photo. Specifically, Ramirez advises students not to try to capture course notes by taking photos with their smartphones — it might subtly create an impression that they don’t need to actually learn the information.
  • Embrace the challenges. When his students struggle, Ramirez tells them the challenge they’re facing will lead to deeper learning. “I try to change what ‘struggle’ means for them so that they don’t feel threatened when they are stressed out about the material,” he said.

Abstract of the study:

The ability to retain educationally relevant content in a readily accessible state in memory is critical for students at all stages in schooling. We hypothesized that a high degree of stress in mathematics courses can threaten students’ mathematics self-concept and lead to a motivation to forget course content. We tested the aforementioned hypothesis by recruiting students from a college course on multivariate calculus. Students were asked to report their ongoing stress in the course. The forgetting rate was assessed by comparing students’ final exam performance against their performance for a subset of the same final exam items 2 weeks later. We found that among students with a strong mathematics self-concept, a higher amount of ongoing weekly stress during the course was associated with increased forgetting of course content and a higher report of avoidant thinking about the course. Neither of these associations was found among students with a weaker mathematics self-concept. Our results provide evidence for a scientific account of the affective and motivational forces that shape why students forget educationally relevant content. We discuss the various educational practices that cue forgetting and make recommendations for reducing motivated forgetting in the classroom.

 

1 Comment

Filed under Education, Psychology

Something children are better at than adults (and no, it isn’t creativity)

Often adults can do things better than children. I know some – Romantic – people think children are geniuses and education kills creativity, but that has been debunked already. Still, there are a lot of things children can do that are now almost impossible for me. And a new study adds something to this list: noticing things that adults don’t see. And it’s because of one of the limitations of children: it’s harder for them to focus. (By the way, this recent post by Dan Willingham is also about the benefits and downsides of focus.)

From the press release:

In two studies, researchers found that adults were very good at remembering information they were told to focus on, and ignoring the rest. In contrast, 4- to 5-year-olds tended to pay attention to all the information that was presented to them – even when they were told to focus on one particular item. That helped children to notice things that adults didn’t catch because of the grownups’ selective attention.

“We often think of children as deficient in many skills when compared to adults. But sometimes what seems like a deficiency can actually be an advantage,” said Vladimir Sloutsky, co-author of the study and professor of psychology at The Ohio State University.

“That’s what we found in our study. Children are extremely curious and they tend to explore everything, which means their attention is spread out, even when they’re asked to focus. That can sometimes be helpful.”

The results have important implications for understanding how education environments affect children’s learning, he said.

Sloutsky conducted the study with Daniel Plebanek, a graduate student in psychology at Ohio State. Their results were just published in the journal Psychological Science.

The first study involved 35 adults and 34 children who were 4 to 5 years old.

The participants were shown a computer screen with two shapes, with one shape overlaying the other. One of the shapes was red, the other green. The participants were told to pay attention to a shape of a particular color (say, the red shape).

The shapes then disappeared briefly, and another screen with shapes appeared. The participants had to report whether the shapes in the new screen were the same as in the previous screen.

In some cases, the shapes were exactly the same. In other cases, the target shape (the one participants were told to pay attention to) was different. But there were also instances where the non-target shape changed, even though it was not the one participants were told to notice.

Adults performed slightly better than children at noticing when the target shape changed, noticing it 94 percent of the time compared to 86 percent of the time for children.

“But the children were much better than adults at noticing when the non-target shape changed,” Sloutsky said. Children noticed that change 77 percent of the time, compared to 63 percent of the time for adults.

“What we found is that children were paying attention to the shapes that they weren’t required to,” he said. “Adults, on the other hand, tended to focus only on what they were told was needed.”

A second experiment involved the same participants. In this case, participants were shown drawings of artificial creatures with several different features. They might have an “X” on their body, or an “O”; they might have a lightning bolt on the end of their tail or a fluffy ball.

Participants were asked to find one feature, such as the “X” on the body among the “Os.” They weren’t told anything about the other features. Thus, their attention was attracted to “X” and “O”, but not to the other features. Both children and adults found the “X” well, with adults being somewhat more accurate than children.

But when those features appeared on creatures in later screens, there was a big difference in what participants remembered. For features they were asked to attend to (i.e., “X” and “O”), adults and children were identical in remembering these features. But children were substantially more accurate than adults (72 percent versus 59 percent) at remembering features that they were not asked to attend to, such as the creatures’ tails.

“The point is that children don’t focus their attention as well as adults, even if you ask them to,” Sloutsky said. “They end up noticing and remembering more.”

Sloutsky said that adults would do well at noticing and remembering the ignored information in the studies, if they were told to pay attention to everything. But their ability to focus attention has a cost – they miss what they are not focused on.

The ability of adults to focus their attention – and children’s tendency to distribute their attention more widely – both have positives and negatives.

“The ability to focus attention is what allows adults to sit in two-hour meetings and maintain long conversations, while ignoring distractions,” Sloutsky said.

“But young children’s use of distributed attention allows them to learn more in new and unfamiliar settings by taking in a lot of information.”

The fact that children don’t always do as well at focusing attention also shows the importance of designing the right learning environment in classrooms, Sloutsky said.

“Children can’t handle a lot of distractions. They are always taking in information, even if it is not what you’re trying to teach them. We need to make sure that we are aware of that and design our classrooms, textbooks and educational materials to help students succeed.

“Perhaps a boring classroom or a simple black and white worksheet means less distraction and more successful learning,” Sloutsky added.

Abstract of the study:

One of the lawlike regularities of psychological science is that of developmental progression—an increase in sensorimotor, cognitive, and social functioning from childhood to adulthood. Here, we report a rare violation of this law, a developmental reversal in attention. In Experiment 1, 4- to 5-year-olds (n = 34) and adults (n = 35) performed a change-detection task that included externally cued and uncued shapes. Whereas the adults outperformed the children on the cued shapes, the children outperformed the adults on the uncued shapes. In Experiment 2, the same participants completed a visual search task, and their memory for search-relevant and search-irrelevant information was tested. The young children outperformed the adults with respect to search-irrelevant features. This demonstration of a paradoxical property of early attention deepens current understanding of the development of attention. It also has implications for understanding early learning and cognitive development more broadly.

2 Comments

Filed under At home, Education, Psychology, Research

Interesting: the famous Milgram experiment has been successfully replicated

It is a topic that has been fascinating me for quite a while now: are the insights from the famous Milgram experiment valid or not? I have been questioning this because there has been criticism lately:

…several scholars raised new criticisms of the research based on their analysis of the transcripts and audio from the original experiments, or on new simulations or partial replications of the experiments. These contemporary criticisms add to past critiques, profoundly undermining the credibility of the original research and the way it is usually interpreted. That Milgram’s studies had a mighty cultural and scholarly impact is not in dispute; the meaning of what he found most certainly is.

BPS Digest sums up the most important modern criticisms:

  • When a participant hesitated in applying electric shocks, the actor playing the role of experimenter was meant to stick to a script of four escalating verbal “prods”. In fact, he frequently improvised, inventing his own terms and means of persuasion. Gina Perry (author of Behind The Shock Machine) has said the experiment was more akin to an investigation of “bullying and coercion” than obedience.
  • A partial replication of the studies found that no participants actually gave in to the fourth and final prod, the only one that actually constituted a command. Analysis of Milgram’s transcripts similarly suggested that the experimenter prompts that were most like a command were rarely obeyed. A modern analogue of Milgram’s paradigm found that order-like prompts were ineffective compared with appeals to science, supporting the idea that people are not blindly obedient to authority but believe they are contributing to a worthy cause.
  • Milgram failed to fully debrief his participants immediately after they’d participated.
  • In an unpublished version of his paradigm, Milgram recruited pairs of people who knew each other to play the role of teacher and learner. In this case, disobedience rose to 85 per cent.
  • Many participants were sceptical about the reality of the supposed set-up. Restricting analysis to only those who truly believed the situation was real, disobedience rose to around 66 per cent.

But now there is a new – successful, but again partial – replication of the famous experiment; the research appears in the journal Social Psychological and Personality Science. Still, one could ask whether some of the criticisms don’t also apply to the new replication – in my humble opinion, they do.

From the press release:

“Our objective was to examine how high a level of obedience we would encounter among residents of Poland,” write the authors. “It should be emphasized that tests in the Milgram paradigm have never been conducted in Central Europe. The unique history of the countries in the region made the issue of obedience towards authority seem exceptionally interesting to us.”

For those unfamiliar with the Milgram experiment, it tested people’s willingness to deliver electric shocks to another person when encouraged by an experimenter. While no shocks were actually delivered in any of the experiments, the participants believed them to be real. The Milgram experiments demonstrated that under certain conditions of pressure from authority, people are willing to carry out commands even when it may harm someone else.

“Upon learning about Milgram’s experiments, a vast majority of people claim that ‘I would never behave in such a manner,’” says Tomasz Grzyb, a social psychologist involved in the research. “Our study has, yet again, illustrated the tremendous power of the situation the subjects are confronted with and how easily they can agree to things which they find unpleasant.”

While ethical considerations prevented a full replication of the experiments, researchers created a similar set-up with lower “shock” levels to test the level of obedience of participants.

The researchers recruited 80 participants (40 men and 40 women), with an age range from 18 to 69, for the study. Participants had up to 10 buttons to press, each a higher “shock” level. The results show that the level of participants’ obedience towards instructions is similarly high to that of the original Milgram studies.

They found that 90% of the people were willing to go to the highest level in the experiment. In terms of differences between people’s willingness to deliver shocks to a man versus a woman, “It is worth remarking,” write the authors, “that although the number of people refusing to carry out the commands of the experimenter was three times greater when the student [the person receiving the ‘shock’] was a woman, the small sample size does not allow us to draw strong conclusions.”

In terms of how society has changed, Grzyb notes, “half a century after Milgram’s original research into obedience to authority, a striking majority of subjects are still willing to electrocute a helpless individual.”

Abstract of the study:

In spite of the over 50 years which have passed since the original experiments conducted by Stanley Milgram on obedience, these experiments are still considered a turning point in our thinking about the role of the situation in human behavior. While ethical considerations prevent a full replication of the experiments from being prepared, a certain picture of the level of obedience of participants can be drawn using the procedure proposed by Burger. In our experiment, we have expanded it by controlling for the sex of participants and of the learner. The results achieved show a level of participants’ obedience toward instructions similarly high to that of the original Milgram studies. Results regarding the influence of the sex of participants and of the “learner,” as well as of personality characteristics, do not allow us to unequivocally accept or reject the hypotheses offered.

1 Comment

Filed under Psychology, Research