Category Archives: Research

Does successfully improving students’ achievement test scores lead to higher rates of national economic growth?

No, this isn’t an OECD- or PISA-bashing post, but I found a new study by Komatsu and Rappleye via @cbokhove that raises an important question: does successfully improving students’ achievement test scores lead to higher rates of national economic growth? That it does is a claim based on research by Hanushek and Woessmann, and it underpins a lot of policy-influencing research and policy advice by e.g. PISA and the World Bank. But Komatsu and Rappleye now argue that this claim may be based on flawed statistics, as the abstract makes clear:

Several recent, highly influential comparative studies have made strong statistical claims that improvements on global learning assessments such as PISA will lead to higher GDP growth rates. These claims have provided the primary source of legitimation for policy reforms championed by leading international organisations, most notably the World Bank and OECD. To date there have been several critiques but these have been too limited to challenge the validity of the claims. The consequence is continued utilisation and citation of these strong claims, resulting in a growing aura of scientific truth and concrete policy reforms. In this piece we report findings from two original studies that invalidate these statistical claims. Our intent is to contribute to a more rigorous global discussion on education policy, as well as call attention to the fact that the new global policy regime is founded on flawed statistics.

They performed a replication on the same data Hanushek and Woessmann used and their conclusion sounds a bit damning:

Our primary purpose has been to report findings from two studies where our results invalidate H&W’s strong statistical claims that attempt to link student test scores and economic growth. In Study 1, we observed that the explanatory power of test scores was weak in subsequent periods: the relationship between scores and growth has been neither consistently strong nor strongly consistent. In Study 2, we observed that the relationship between changes in test scores in one period and changes in economic growth for subsequent periods were unclear at best, doubtful at worst. Combined, these two original studies do not simply challenge the key statistical claims advanced by H&W but invalidate them because they utilise the same sample, dataset and methods.

But I have to agree with Christian Bokhove, in our tweet conversation about this article, that both scientists are quite firm in their statements. Still, I think the topic is too important not to debate further, as it is central to present-day policy.
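For those who want a concrete picture of what is being disputed: both Hanushek & Woessmann and Komatsu & Rappleye work with cross-country regressions of long-run GDP growth on average test scores. Below is a quick, made-up sketch of mine of that kind of analysis, including the kind of stability check across subsequent periods that Study 1 is about. It is not the authors’ code, and all numbers and variable names are invented for illustration.

```python
# Sketch (not the authors' code) of a cross-country growth regression:
# long-run GDP growth regressed on mean test scores, estimated separately
# for two periods to see whether the relationship is stable.
# All data and column names here are hypothetical.
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per country.
df = pd.DataFrame({
    "test_score":   [400, 450, 500, 520, 560, 610],
    "initial_gdp":  [3.2, 5.1, 9.8, 12.4, 20.0, 28.5],   # thousands of dollars
    "growth_60_90": [1.1, 1.8, 2.4, 2.6, 3.0, 3.9],      # % per year, period 1
    "growth_90_10": [2.0, 1.2, 2.8, 1.9, 2.5, 2.2],      # % per year, period 2
})

X = sm.add_constant(df[["test_score", "initial_gdp"]])

# The claim at issue: test scores explain growth. The replication question:
# does a relationship estimated in one period also hold in the next one?
for period in ["growth_60_90", "growth_90_10"]:
    fit = sm.OLS(df[period], X).fit()
    print(period,
          "coef on test_score:", round(fit.params["test_score"], 4),
          "R^2:", round(fit.rsquared, 3))
```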


Filed under Education, Research, Review

A bit skeptical: Brain activity can be used to predict reading success up to 2 years in advance

This is a study that is both interesting and a bit… well, scary and/or making me skeptical: by measuring brainwaves, it is possible to predict what a child’s reading level will be years in advance, according to research from Binghamton University, State University of New York. The longitudinal study of 75 children did try to measure a lot. Why am I skeptical? Well, the evidence is correlational rather than clearly causal. Also, the prediction is one thing, but what can we do with it? The researchers state that it could help to detect pupils who need extra help early, but again: isn’t that assuming we can actually alter something? There is also the question of how absolute the prediction is and whether this is the cheapest solution in the long run. Don’t get me wrong, my goal isn’t to undermine this study; these are just some thoughts.

From the press release:

Binghamton University researchers Sarah Laszlo and Mallory Stites measured the brain activity of children and then compared it to their report cards, their vocabulary and other signs of reading success two years later, as part of the National Science Foundation-funded Reading Brain Project. Laszlo and Stites used event-related potentials (ERPs) to determine that brain activity was different in children who showed reading success in later years than in children that did not.

“Your brain is what allows you to do everything, from math to designing buildings to making art,” said Laszlo, associate professor of psychology at Binghamton University. “If we look at what the brain is doing during reading, it is a really good predictor of how reading will develop.”

The children read a list of words silently to themselves. Every so often they would come across their own name to make sure they were understanding the text and paying attention. Children who had better report cards tended to show different patterns of activity during both phonological (sound) and semantic (meaning) processing.

“Phonological processing is the ability to sound things out and semantic processing is knowing what words mean,” said Laszlo. “Like being able to link the word fish with a slimy creature that swims underwater.”

Other factors were included when measuring the reading success of students, such as their teachers, their parents’ encouragement, their age and the amount that they read at home.

“The thing that is really valuable about this is that once kids start having trouble with reading, they start needing extra help, which can be hard and stigmatizing for the child and often not effective,” said Laszlo. “By using long-range predictions about success, we can give them the extra help they need before they fall behind.”

The team is currently working on a paper to record the findings from the first four years of this research. At the end of the fifth year, Laszlo and her team will look back to see what predictions can be made regarding ERPs and reading progress.

The paper, “Time will tell: A longitudinal investigation of brain-behavior relationships during reading development,” was published in Psychophysiology.

Abstract of the study:

ERPs are a powerful tool for the study of reading, as they are both temporally precise and functionally specific. These are essential characteristics for studying a process that unfolds rapidly and consists of multiple, interactive subprocesses. In work with adults, clear, specific models exist linking components of the ERP with individual subprocesses of reading including orthographic decoding, phonological processing, and semantic access (e.g., Grainger & Holcomb, 2009). The relationships between ERP components and reading subprocesses are less clear in development; here, we address two questions regarding these relationships. First, we ask whether there are ERP markers that predict future reading behaviors across a longitudinal year. Second, we ask whether any relationships observed between ERP components and reading behavior across time map onto the better-established relationships between ERPs and reading subprocesses in adults. To address these questions, we acquired ERPs from children engaging in a silent reading task and then, a year later, collected behavioral assessments of their reading ability. We find that ERPs collected in Year 1 do predict reading behaviors a year later. Further, we find that these relationships do conform, at least to some extent, to relationships between ERP components and reading subprocesses observed in adults, with, for example, N250 amplitude in Year 1 predicting phonological awareness in Year 2, and N400 amplitude in Year 1 predicting vocabulary in Year 2.
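To make concrete what “predicting” means here, this is a toy sketch of mine of the kind of longitudinal brain–behaviour regression the abstract describes (an ERP amplitude in Year 1 related to a reading measure in Year 2). The data are simulated and the numbers invented; this is not the authors’ analysis.

```python
# Toy illustration of the kind of brain-behaviour prediction in the abstract:
# an ERP amplitude measured in Year 1 used to predict a reading measure in
# Year 2. Data are simulated; this is not the authors' analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_children = 75
n250_year1 = rng.normal(-4.0, 1.5, n_children)                       # microvolts (simulated)
phon_year2 = 20 - 1.2 * n250_year1 + rng.normal(0, 3, n_children)    # simulated phonological awareness score

slope, intercept, r, p, se = stats.linregress(n250_year1, phon_year2)
print(f"r = {r:.2f}, p = {p:.3g}")   # strength of the Year 1 -> Year 2 association
print(f"predicted Year 2 score for a child with a -5 uV N250: {intercept + slope * -5:.1f}")
```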


Filed under Education, Research

A downside of expertise? Skilled workers more prone to mistakes when interrupted

Expertise is great; it’s what makes the difference between a novice and an expert. But every wish comes with a curse, and even expertise may have a downside: according to this new study, highly trained workers in some occupations could actually be at risk of making errors when interrupted.

From the press release:

The reason: Experienced workers are generally faster at performing procedural tasks, meaning their actions are more closely spaced in time and thus more confusable when they attempt to recall where to resume a task after being interrupted.

“Suppose a nurse is interrupted while preparing to give a dose of medication and then must remember whether he or she administered the dose,” said Erik Altmann, lead investigator on the project. “The more experienced nurse will remember less accurately than a less-practiced nurse, other things being equal, if the more experienced nurse performs the steps involved in administering medication more quickly.”

That’s not to say skilled nurses should avoid giving medication, but only that high skill levels could be a risk factor for increased errors after interruptions and that experts who perform a task quickly and accurately have probably figured out strategies for keeping their place in a task, said Altmann, who collaborated with fellow professor Zach Hambrick.

Their study, funded by the Office of Naval Research, is published online in the Journal of Experimental Psychology: General.

For the experiment, 224 people performed two sessions of a computer-based procedural task on separate days. Participants were interrupted randomly by a simple typing task, after which they had to remember the last step they performed to select the correct step to perform next.

In the second session, people became faster, and on most measures, more accurate, Altmann said. After interruptions, however, they became less accurate, making more errors by resuming the task at the wrong spot.

“The faster things happen, the worse we remember them,” Altmann said, adding that when workers are interrupted in the middle of critical procedures, as in emergency rooms or intensive care units, they may benefit from training and equipment design that helps them remember where they left off.

Abstract of the study:

Positive effects of practice are ubiquitous in human performance, but a finding from memory research suggests that negative effects are possible also. The finding is that memory for items on a list depends on the time interval between item presentations. This finding predicts a negative effect of practice on procedural performance under conditions of task interruption. As steps of a procedure are performed more quickly, memory for past performance should become less accurate, increasing the rate of skipped or repeated steps after an interruption. We found this effect, with practice generally improving speed and accuracy, but impairing accuracy after interruptions. The results show that positive effects of practice can interact with architectural constraints on episodic memory to have negative effects on performance. In practical terms, the results suggest that practice can be a risk factor for procedural errors in task environments with a high incidence of task interruption.


Filed under At work, Research

Ready for some debate? Study suggests too much structured knowledge can hurt creativity

Ok, this is a study that can stir some debate, imho. This study from the University of Toronto’s Rotman School of Management suggests that while structure organizes human activities and helps us understand the world with less effort, it can also be a killer of creativity. But there are some elements you have to take into account before jumping to conclusions: it’s not really about learning or creativity in education as such.

The study consists of three different experiments – one of which uses Lego bricks – and reaches this conclusion:

In three studies, the current research showed that individuals presented with a flat information structure were more creative compared to those presented with a hierarchical information structure. We define a flat information structure as a set of information that is presented without higher-order categories, and a hierarchical information structure as a set of information organized by higher-order categories. We found that the increased creativity in a flat information structure condition is due to an elevated level of cognitive flexibility. Additionally, exploratory analyses showed that participants in the flat information structure condition spent longer time on their tasks than those in the hierarchical information structure condition. Given that time spent could be a proxy for how cognitively persistent participants were on the task, this suggests that the absence of structure might also increase creativity through cognitive persistence.

If you want to know what hierarchical information structure versus flat information structure means, just check this picture from the original study:

So people were more creative when the Lego bricks were presented in a flat information structure. Interesting, but to be frank, I think this press release makes a big jump from an experimental setting to real-life situations such as a company setting:

While most management research has supported the idea that giving structure to information makes it easier to cope with its complexity and boosts efficiency, the paper says that comes as a double-edged sword.

“A hierarchically organized information structure may also have a dark side,” warns Yeun Joon Kim, a PhD student who co-authored the paper with Chen-Bo Zhong, an associate professor of organizational behaviour and human resource management at the Rotman School.

The researchers showed in a series of experiments that participants displayed less creativity and cognitive flexibility when asked to complete tasks using categorized sets of information, compared to those asked to work with items that were not ordered in any special way. Those in the organized information group also spent less time on their tasks, suggesting reduced persistence, a key ingredient for creativity.

The researchers ran three experiments. In two, study participants were presented with a group of nouns that were either organized into neat categories or not, and then told to make as many sentences as they could with them.

The third experiment used LEGO® bricks. Participants were asked to make an alien out of a box of bricks organized by colour and shape or, in a scenario familiar to many parents, out of a box of unorganized bricks. Participants in the organized category were prohibited from dumping the bricks out onto a table.

The findings may have application for leaders of multi-disciplinary teams, which tend to show inconsistent rates of innovation, perhaps because team members may continue to organize their ideas according to functional similarity, area of their expertise, or discipline.

“We suggest people put their ideas randomly on a white board and then think about some of their connections,” says Kim. Our tendency to categorize information rather than efficiency itself is what those working in creative industries need to be most on guard about, the researchers say.

Ehm, ok, in that last paragraph it’s almost as if the researchers are suggesting brainstorming, but we’ve long known that brainstorming in itself is not the best way to get creative results. I do think that this study is interesting in its own right, but we should be careful with the possible implications.

Abstract of the study:

Is structure good or bad for creativity? When it comes to organizing information, management scholars have long advocated for a hierarchical information structure (information organized around higher-order categories as opposed to a flat information structure where there is no higher-order category) to reduce complexity of information processing and increase efficiency of work. However, a hierarchical information structure can be a double-edged sword that may reduce creativity, defined as novel and useful combination of existing information. This is because a hierarchical information structure might obstruct combining information from distal conceptual categories. Thus, the current research investigates whether information structure influences creativity. We theorize that a hierarchical information structure, compared to a flat information structure, will reduce creativity because it reduces cognitive flexibility. Three experiments using a sentence construction task and a LEGO task supported our prediction.


Filed under Education, Research

Important new meta-analysis on the testing effect – with some surprises…

There is a new Best Evidence in Brief and one of the studies the newsletter discusses is all about the effect of testing:

Olusola O. Adesope and colleagues conducted a meta-analysis to summarize the learning benefits of taking a practice test versus other forms of non-testing learning conditions, such as re-studying, practice, filler activities, or no presentation of the material.

Results from 272 independent effects from 188 separate experiments demonstrated that the use of practice tests is associated with a moderate, statistically significant weighted mean effect size compared to re-studying (+0.51) and a much larger weighted mean effect size (+ 0.93) when compared to filler or no activities.

In addition, the format, number, and frequency of practice tests make a difference for the learning benefits on a final test. Practice tests with a multiple-choice option have a larger weighted mean effect size (+0.70) than short-answer tests (+0.48). A single practice test prior to the final test is more effective than when students take several practice tests. However, the timing should be carefully considered. A gap of less than a day between the practice and final tests showed a smaller weighted effect size than when there is a gap of one to six days (+0.56 and + 0.82, respectively).  

Are you as baffled by these results as I am? I checked in the original article to find out more.

E.g., multiple-choice questions having a bigger effect size – while remembering is often harder than recognizing? Well, there are some studies that actually support the latter view:

For instance, Kang et al. (2007) revealed that students who took a short-answer practice test outperformed students who took a multiple-choice practice test on the final test, regardless of whether the final test was short-answer or multiple-choice.

But:

On the other hand, C. D. Morris, Bransford, and Franks’s (1977) research on levels of processing suggests that retention is strongest when processing demands are less demanding. They reason that this is because less demanding retrieval practice activities allow participants to focus all of their cognitive energy on a simple task at hand, whereas deeper levels of processing require more cognitive energy and can distract participants from relevant aspects (C. D. Morris et al., 1977).

And looking at the meta-analysis, the second theory seems to be winning, as “the differences between multiple-choice and short-answer practice test formats did emerge: g = 0.70 and g = 0.48, respectively”. But it’s worth noting that the researchers do warn it’s not that simple:

…we found that multiple-choice testing was the most effective format; however, this should be interpreted with caution, since an educator’s decision to use any given format should be based on the content of the learning material and the expected learning outcomes. For example, multiple- choice tests may be especially useful for memorization and fact retention, while short-answer testing may require more higher order thinking skills that are useful for more conceptual and abstract learning content
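For readers who, like me, want to see what a “weighted mean effect size” such as g = 0.70 actually is: below is a quick, made-up sketch of the standard calculation (Hedges’ g per study, then an inverse-variance weighted average). The study values are invented for illustration, and I don’t know the exact weighting model Adesope et al. used, so treat this purely as an illustration of the idea.

```python
# Illustration of Hedges' g and an inverse-variance weighted mean effect size.
# The study values below are invented; they are not from the meta-analysis.
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with small-sample correction."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # Hedges' small-sample correction factor
    g = j * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var_g

# (mean_test, mean_control, sd_test, sd_control, n_test, n_control) per study
studies = [
    (78, 70, 12, 13, 40, 40),
    (65, 61, 10, 11, 25, 25),
    (82, 71, 15, 14, 60, 60),
]

gs_vars = [hedges_g(*s) for s in studies]
weights = [1 / var for _, var in gs_vars]                       # inverse-variance weights
weighted_mean_g = sum(w * g for (g, _), w in zip(gs_vars, weights)) / sum(weights)
print(round(weighted_mean_g, 2))   # one overall effect size across the studies
```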

And what about a single test being more effective than taking several practice tests? The meta-analysis does support this, but Adesope et al. can only guess why:

Thus, our findings suggest that although a single test prior to a final test may result in better performance, the timing of the test should be carefully considered. One plausible explanation is more time between the practice and final tests allows students to mentally recall and process information, leading to deeper learning. An alternative hypothesis is that multiple tests within a short time may result in test fatigue that affects performance, while retrieval practice over a distributed time period enables long-term storage.

I do think that this meta-analysis will invite other researchers to join the debate…

Abstract of the study:

The testing effect is a well-known concept referring to gains in learning and retention that can occur when students take a practice test on studied material before taking a final test on the same material. Research demonstrates that students who take practice tests often outperform students in nontesting learning conditions such as restudying, practice, filler activities, or no presentation of the material. However, evidence-based meta-analysis is needed to develop a comprehensive understanding of the conditions under which practice tests enhance or inhibit learning. This meta-analysis fills this gap by examining the effects of practice tests versus nontesting learning conditions. Results reveal that practice tests are more beneficial for learning than restudying and all other comparison conditions. Mean effect sizes were moderated by the features of practice tests, participant and study characteristics, outcome constructs, and methodological features of the studies. Findings may guide the use of practice tests to advance student learning, and inform students, teachers, researchers, and policymakers. This article concludes with the theoretical and practical implications of the meta-analysis.


Filed under Education, Research, Review

Some good news? “Critical thinking instruction in humanities reduces belief in pseudoscience”

Sometimes it’s quite difficult for (educational) mythbusters: what if you only make things worse? There are some studies – also mentioned in our book – that say so. But this new study – although with a very specific group and a rather small number of participants – at least suggests there are opportunities to battle pseudoscience…

From the press release:

A recent study by North Carolina State University researchers finds that teaching critical thinking skills in a humanities course significantly reduces student beliefs in “pseudoscience” that is unsupported by facts.

“Given the national discussion of ‘fake news,’ it’s clear that critical thinking – and classes that teach critical thinking – are more important than ever,” says Anne McLaughlin, an associate professor of psychology at NC State and co-author of a paper describing the work.

“Fundamentally, we wanted to assess how intentional you have to be when teaching students critical thinking,” says Alicia McGill, an assistant professor of history at NC State and co-author of the paper. “We also wanted to explore how humanities classes can play a role and whether one can assess the extent to which critical thinking instruction actually results in improved critical thinking by students.

“This may be especially timely, because humanities courses give students tools they can use to assess qualitative data and sort through political rhetoric,” McGill says. “Humanities also offer us historical and cultural perspective that allow us to put current events into context.”

For this study, the researchers worked with 117 students in three different classes. Fifty-nine students were enrolled in a psychology research methods course, which taught statistics and study design, but did not specifically address critical thinking. The other 58 students were enrolled in one of two courses on historical frauds and mysteries – one of which included honors students, many of whom were majors in science, engineering and mathematics disciplines.

The psychology class served as a control group. The two history courses incorporated instruction explicitly designed to cultivate critical thinking skills. For example, students in the history courses were taught how to identify logical fallacies – statements that violate logical arguments, such as non sequiturs.

At the beginning of the semester, students in all three courses took a baseline assessment of their beliefs in pseudoscientific claims. The assessment used a scale from 1 (“I don’t believe at all.”) to 7 (“I strongly believe.”).

Some of the topics in the assessment, such as belief in Atlantis, were later addressed in the “historical frauds” course. Other topics, such as the belief that 9/11 was an “inside job,” were never addressed in the course. This allowed the researchers to determine the extent to which changes in student beliefs stemmed from specific facts discussed in class, versus changes in a student’s critical thinking skills.

At the end of the semester, students took the pseudoscience assessment again.

The control group students did not change their beliefs – but students in both history courses had lower beliefs in pseudoscience by the end of the semester.

Students in the history course for honors students decreased the most in their pseudoscientific beliefs; on average, student beliefs dropped an entire point on the belief scale for topics covered in class, and by 0.5 points on topics not covered in class. There were similar, but less pronounced, changes in the non-honors course.

“The change we see in these students is important, because beliefs are notoriously hard to change,” says McLaughlin. “And seeing students apply critical thinking skills to areas not covered in class is particularly significant and heartening.”

“It’s also important to note that these results stem from taking only one class,” McGill says. “Consistent efforts to teach critical thinking across multiple classes may well have more pronounced effects.

“This drives home the importance of teaching critical thinking, and the essential role that humanities can play in that process,” McGill says. “This is something that NC State is actively promoting as part of a universitywide focus on critical thinking development.”

The paper, “Explicitly teaching critical thinking skills in a history course,” was published March 20 in the journal Science & Education.

Abstract of the study:

Critical thinking skills are often assessed via student beliefs in non-scientific ways of thinking, (e.g, pseudoscience). Courses aimed at reducing such beliefs have been studied in the STEM fields with the most successful focusing on skeptical thinking. However, critical thinking is not unique to the sciences; it is crucial in the humanities and to historical thinking and analysis. We investigated the effects of a history course on epistemically unwarranted beliefs in two class sections. Beliefs were measured pre- and post-semester. Beliefs declined for history students compared to a control class and the effect was strongest for the honors section. This study provides evidence that a humanities education engenders critical thinking. Further, there may be individual differences in ability or preparedness in developing such skills, suggesting different foci for critical thinking coursework.


Filed under Education, Media literacy, Myths, Research

Interesting: the famous Milgram experiment has been successfully replicated

It’s a topic that has been fascinating me for quite a while now: are the insights from the famous Milgram experiment valid or not? The reason I have been questioning this is that there has been criticism lately:

…several scholars raised new criticisms of the research based on their analysis of the transcripts and audio from the original experiments, or on new simulations or partial replications of the experiments. These contemporary criticisms add to past critiques, profoundly undermining the credibility of the original research and the way it is usually interpreted. That Milgram’s studies had a mighty cultural and scholarly impact is not in dispute; the meaning of what he found most certainly is.

BPS Digest sums up the most important modern criticisms:

  • When a participant hesitated in applying electric shocks, the actor playing the role of experimenter was meant to stick to a script of four escalating verbal “prods”. In fact, he frequently improvised, inventing his own terms and means of persuasion. Gina Perry (author of Behind The Shock Machine) has said the experiment was more akin to an investigation of “bullying and coercion” than obedience.
  • A partial replication of the studies found that no participants actually gave in to the fourth and final prod, the only one that actually constituted a command. Analysis of Milgram’s transcripts similarly suggested that the experimenter prompts that were most like a command were rarely obeyed. A modern analogue of Milgram’s paradigm found that order-like prompts were ineffective compared with appeals to science, supporting the idea that people are not blindly obedient to authority but believe they are contributing to a worthy cause.
  • Milgram failed to fully debrief his participants immediately after they’d participated.
  • In an unpublished version of his paradigm, Milgram recruited pairs of people who knew each other to play the role of teacher and learner. In this case, disobedience rose to 85 per cent.
  • Many participants were sceptical about the reality of the supposed set-up. Restricting analysis to only those who truly believed the situation was real, disobedience rose to around 66 per cent.

But now there is a new – successful, but again partial – replication of the famous experiment; the research appears in the journal Social Psychological and Personality Science. Still, one could ask whether some of the criticisms aren’t also valid for the new replication – imho, they are.

From the press release:

“Our objective was to examine how high a level of obedience we would encounter among residents of Poland,” write the authors. “It should be emphasized that tests in the Milgram paradigm have never been conducted in Central Europe. The unique history of the countries in the region made the issue of obedience towards authority seem exceptionally interesting to us.”

For those unfamiliar with the Milgram experiment, it tested people’s willingness to deliver electric shocks to another person when encouraged by an experimenter. While no shocks were actually delivered in any of the experiments, the participants believed them to be real. The Milgram experiments demonstrated that under certain conditions of pressure from authority, people are willing to carry out commands even when it may harm someone else.

“Upon learning about Milgram’s experiments, a vast majority of people claim that ‘I would never behave in such a manner,’” says Tomasz Grzyb, a social psychologist involved in the research. “Our study has, yet again, illustrated the tremendous power of the situation the subjects are confronted with and how easily they can agree to things which they find unpleasant.”

While ethical considerations prevented a full replication of the experiments, researchers created a similar set-up with lower “shock” levels to test the level of obedience of participants.

The researchers recruited 80 participants (40 men and 40 women), with an age range from 18 to 69, for the study. Participants had up to 10 buttons to press, each a higher “shock” level. The results show that the level of participants’ obedience towards instructions is similarly high to that of the original Milgram studies.

They found that 90% of the people were willing to go to the highest level in the experiment. In terms of differences in people’s willingness to deliver a shock to a man versus a woman, “It is worth remarking,” write the authors, “that although the number of people refusing to carry out the commands of the experimenter was three times greater when the student [the person receiving the “shock”] was a woman, the small sample size does not allow us to draw strong conclusions.”

In terms of how society has changed, Grzyb notes, “half a century after Milgram’s original research into obedience to authority, a striking majority of subjects are still willing to electrocute a helpless individual.”

Abstract of the study:

In spite of the over 50 years which have passed since the original experiments conducted by Stanley Milgram on obedience, these experiments are still considered a turning point in our thinking about the role of the situation in human behavior. While ethical considerations prevent a full replication of the experiments from being prepared, a certain picture of the level of obedience of participants can be drawn using the procedure proposed by Burger. In our experiment, we have expanded it by controlling for the sex of participants and of the learner. The results achieved show a level of participants’ obedience toward instructions similarly high to that of the original Milgram studies. Results regarding the influence of the sex of participants and of the “learner,” as well as of personality characteristics, do not allow us to unequivocally accept or reject the hypotheses offered.


Filed under Psychology, Research

How to get kids to eat more healthily in school?

When I went to school, there were still nuns who cooked dinner for us, and I remember one nun in particular who could look at you in a way that made sure you would eat all of your vegetables. Well, that was one way of making sure we all ate more healthily at school. A Brigham Young University researcher is now trying to nail down how to get kids to eat more salad without that special look.

From the press release:

Thanks to a national initiative, salad bars are showing up in public schools across the country. Now a Brigham Young University researcher is trying to nail down how to get kids to eat from them.

BYU health sciences professor Lori Spruance studies the impact of salad bars in public schools and has found one helpful tip: teens are more likely to use salad bars if they’re exposed to good, old-fashioned marketing. Students at schools with higher salad bar marketing are nearly three times as likely to use them.

“Children and adolescents in the United States do not consume the nationally recommended levels of fruits and vegetables,” Spruance said. “Evidence suggests that salad bars in schools can make a big difference. Our goal is to get kids to use them.”

Some 4,800 salad bars have popped up in public schools around the country according to the Let’s Move Salad Bars to Schools initiative. About 50 percent of high school students have access to salad bars at schools, 39 percent of middle school kids and 31 percent of elementary school children.

Spruance’s study, published in Health Education and Behavior, followed the salad bar usage of students in 12 public schools in New Orleans. Spruance and coauthors from Tulane University administered surveys to the students and tracked the school environment through personal visits.

Not only did they find better marketing improved salad bar usage among secondary school students, but they also found female students use salad bars more often than male students, and children who prefer healthy foods use them more frequently.

“The value of a salad bar program depends on whether students actually use the salad bar,” Spruance said. “But few studies have examined how to make that happen more effectively.”

Some examples of successful salad bar marketing efforts included signage throughout the school promoting the salad bar, information in school publications and newsletters, and plugs for the salad bar on a school’s digital presence.

Spruance suggests that schools engage parents in their efforts to improve the school food environment–such as reaching out to parents through newsletters or parent teacher conferences. Of course, she says, offering healthy options at home makes the biggest difference.

“It takes a lot of effort and time, but most children and adolescents require repeated exposures to food before they will eat them on their own,” Spruance said. “If a child is being exposed to foods at home that are served at school, the child may be more likely to eat those fruits or vegetables at school.”

Spruance’s research builds off of previous studies that show students are more likely to use salad bars if they are included in the normal serving line.

I guess the nun and that special look are maybe still the best option…

Abstract of the study:

Background. Consumption levels of fruits and vegetables (F/V) among children/adolescents are low. Programs like school-based salad bars (SB) provide children/adolescents increased F/V access.

Aims. The purpose of this study was to examine the relationship between SB use and individual and school-level factors among elementary and secondary school students in New Orleans public schools.

Method. Twelve schools receiving SB units from the Let’s Move Salad Bars to Schools Campaign participated in this study. Self-reported data were collected from students (n = 1,012), administrators (n = 12), and food service staff (n = 37). School environmental data were obtained through direct observation. Generalized estimating equation regression methods were used to develop a multilevel model including both school-level (e.g., length of lunchtime, SB marketing, vending machines) and individual-level (e.g., sex, food preferences, nutrition knowledge) effects.

Results. Female students had higher odds of using the SB compared to males. Students with healthier food preferences had higher odds of using the SB than those who reported less healthy food preferences. Within the multilevel model for all students, only sex and healthy food preferences remained significant. In a multilevel model assessing secondary students only, student encouragement toward others for healthy eating and school-based SB marketing were significantly related to SB use.

Conclusions. Little research has examined factors related to school-based SB use. These findings suggest recommendations that may help improve student use of SBs. For example, increasing the promotion of SB, particularly in secondary schools, might encourage their use among students.
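For the statistically curious: the Method section mentions generalized estimating equations combining school-level and individual-level predictors. This is a rough sketch of mine of what such a model can look like; the variable names and the simulated data are hypothetical, not from the actual study.

```python
# Sketch of a GEE logistic model of salad-bar use with students clustered
# within schools, in the spirit of the Method section above.
# Variable names and data are hypothetical, not from the actual study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per_school = 12, 80
df = pd.DataFrame({
    "school_id":    np.repeat(np.arange(n_schools), n_per_school),
    "female":       rng.integers(0, 2, n_schools * n_per_school),            # individual-level
    "healthy_pref": rng.normal(0, 1, n_schools * n_per_school),              # individual-level
    "sb_marketing": np.repeat(rng.integers(0, 4, n_schools), n_per_school),  # school-level
})
# Simulate salad-bar use with made-up effects, just to have an outcome.
logit = -0.5 + 0.4 * df["female"] + 0.6 * df["healthy_pref"] + 0.3 * df["sb_marketing"]
df["used_sb"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee(
    "used_sb ~ female + healthy_pref + sb_marketing",
    groups="school_id",                       # students clustered within schools
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```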


Filed under Education, Research

Children prefer reading on paper, more technology means less reading.

This new study, which I found via Casper Hulshof, examined how access to eReaders, computers and mobile phones influenced children’s book reading frequency.

In short:

  • Children underutilised devices for recreational book reading, even when they were daily book readers, which suggests that children prefer paper over screens for reading.
  • Reading frequency was lower when children had access to mobile phones.
  • Reading in general was lower when children had access to a greater range of digital devices.

Abstract of the study:

Regular recreational book reading is a practice that confers substantial educative benefit. However, not all book types may be equally beneficial, with paper book reading more strongly associated with literacy benefit than screen-based reading at this stage, and a paucity of research in this area. While children in developed countries are gaining ever-increasing levels of access to devices at home, relatively little is known about the influence of access to devices with eReading capability, such as Kindles, iPads, computers and mobile phones, on young children’s reading behaviours, and the extent to which these devices are used for reading purposes when access is available. Young people are gaining increasing access to devices through school-promoted programs; parents face aggressive marketing to stay abreast of educational technologies at home; and schools and libraries are increasingly expanding their eBook collections, often at the expense of paper book collections. Data from the 997 children who participated in the 2016 Western Australian Study in Children’s Book Reading were analysed to determine children’s level of access to devices with eReading capability, and their frequency of use of these devices in relation to their recreational book reading frequency. Respondents were found to generally underutilise devices for reading purposes, even when they were daily book readers. In addition, access to mobile phones was associated with reading infrequency. It was also found that reading frequency was less when children had access to a greater range of these devices.


Filed under At home, Education, Research

New study confirms: cyberbullying rarely occurs in isolation

It’s something I’ve known from other studies, but this new research from the University of Warwick, published in European Child and Adolescent Psychiatry, confirms it: cyberbullying is mostly an extension of playground bullying — and doesn’t create large numbers of new victims.

In short:

  • Cyberbullying doesn’t create large numbers of new victims
  • Most bullying is face-to-face – with cyberbullying used as a modern tool to supplement traditional forms
  • 29% of UK teenagers reported being bullied – only 1% were victims of cyberbullying alone
  • Bullying intervention strategies should focus on traditional bullying as well as cyberbullying

From the press release:

Professor Dieter Wolke in the Department of Psychology finds that although cyberbullying is prevalent and harmful, it is a modern tool used to harm victims already bullied by traditional, face-to-face means.

In a study of almost 3000 pupils aged 11-16 from UK secondary schools, twenty-nine percent reported being bullied, but one percent of adolescents were victims of cyberbullying alone.

During the survey, pupils completed the Bullying and Friendship Interview, which has been used in numerous studies to assess bullying and victimization.

They were asked about direct victimisation (e.g., “been hit/beaten up” or “called bad/nasty names”); relational victimization (e.g., “had nasty lies/rumours spread about you”); and cyber-victimization (e.g., “had rumours spread about you online”, “had embarrassing pictures posted online without permission”, or “got threatening or aggressive emails, instant messages, text messages or tweets”).

All the teenagers who reported being bullied in any form had lower self-esteem, and more behavioural difficulties than non-victims.

However, those who were bullied by multiple means – direct victimisation, relational victimisation and cyber-victimisation combined – demonstrated the lowest self-esteem and the most emotional and behavioural problems.

The study finds that cyberbullying is “another tool in the toolbox” for traditional bullying, but doesn’t create many unique online victims.

As a result, Professor Wolke argues that public health strategies to prevent bullying overall should still mainly focus on combatting traditional, face-to-face bullying – as that is the root cause of the vast majority of cyberbullying.

Professor Wolke comments:

“Bullying is a way to gain power and peer acceptance, being the ‘cool’ kid in class. Thus, cyber bullying is another tool that is directed towards peers that the bully knows, and bullies, at school.

“Any bullying prevention and intervention still needs to be primarily directed at combatting traditional bullying while considering cyberbullying as an extension that reaches victims outside the school gate and 24/7.”

Abstract of the study:

Cyberbullying has been portrayed as a rising ‘epidemic’ amongst children and adolescents. But does it create many new victims beyond those already bullied with traditional means (physical, relational)? Our aim was to determine whether cyberbullying creates uniquely new victims, and whether it has similar impact upon psychological and behavioral outcomes for adolescents, beyond those experienced by traditional victims. This study assessed 2745 pupils, aged 11–16, from UK secondary schools. Pupils completed an electronic survey that measured bullying involvement, self-esteem and behavioral problems. Twenty-nine percent reported being bullied but only 1% of adolescents were pure cyber-victims (i.e., not also bullied traditionally). Compared to direct or relational victims, cyber-victimization had similar negative effects on behavior (z = −0.41) and self-esteem (z = −0.22) compared to those not involved in bullying. However, those bullied by multiple means (poly-victims) had the most difficulties with behavior (z = −0.94) and lowest self-esteem (z = −0.78). Cyberbullying creates few new victims, but is mainly a new tool to harm victims already bullied by traditional means. Cyberbullying extends the reach of bullying beyond the school gate. Intervention strategies against cyberbullying may need to include approaches against traditional bullying and its root causes to be successful.


Filed under At home, Education, Psychology, Research, Social Media, Technology