My new book “The Ingredients for Great Teaching” is out now in Europe!


1 Comment

Filed under Book

“Personalized Learning”: The Difference between a Policy and a Strategy

Larry Cuban on School Reform and Classroom Practice

“Personalized learning”–and whatever it means–has been the mantra for policymakers, technology entrepreneurs, and engaged practitioners for the past few years. Mention the phrase and those whose bent is to alter schooling nod in assent as to its apparent value in teaching and learning. Mentions of it cascade through media and research reports as if it is the epitome of the finest policy to install in classrooms.

But it is not a policy; “personalized learning” is a strategy.

What’s the difference?

Read what Yale University historian Beverly Gage writes about the crucial distinction between the two concepts:

A strategy, in politics, can be confused with a policy or a vision, but they’re not quite the same thing. Policies address the “what”; they’re prescriptions for the way things might operate in an ideal world. Strategy is about the “how.” How do you move toward a desired end, despite limited means and huge…


Leave a comment

Filed under Education

A small but positive effect size for an enquiry-based STEM program (Best Evidence in Brief)

There is a new Best Evidence in Brief and this study may surprise some:

With the increasing interest in STEM (science, technology, engineering, and math) curricula comes the need for evidence backing these programs. One such science program is The BSCS Inquiry Approach, a comprehensive high school science approach based on three key concepts: constructivism, coherence, and cohesiveness. The materials are built around the 5E process (engage, explore, explain, elaborate, and evaluate). Teaching focuses on evaluating students’ current understanding and using inquiry methods to move them to higher understandings. Each of the science disciplines (physical science, life science, earth science, and science and society) is composed of four chapters that repeat common themes, which advance over a three-year period. Designing and carrying out experiments in small groups is important in all topics. Teachers receive seven days of professional development each year, including a three-day summer institute and four one-day sessions, enabling sharing of experiences and introducing new content over time.
To determine the effects of The BSCS Inquiry Approach on student achievement, BSCS conducted a two-year cluster-randomized study of the intervention that compared students in grades 10-11 in nine experimental (n=1,509 students) and nine control high schools (n=1,543 students) in Washington State. A total of 45% of students qualified for free or reduced-price lunches. At the end of two years, the BSCS students scored higher than controls (effect size=+0.09, p<.05) on the Washington State Science Assessments.
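
As an aside on how to read an effect size of +0.09: assuming roughly normally distributed outcomes (my own back-of-the-envelope illustration, not a calculation from the report), it corresponds to moving the average student from the 50th to about the 54th percentile of the control group:

```python
from statistics import NormalDist

# Translate a standardized effect size into a percentile shift,
# assuming roughly normal outcome distributions (an illustrative simplification).
effect_size = 0.09
percentile = NormalDist().cdf(effect_size) * 100
print(f"Average BSCS student lands at about the {percentile:.0f}th percentile of the control group")
```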
I checked the actual approach and its three underpinnings:
  • students come to the classroom with preconceptions that shape their learning,
  • student competence requires a deep foundation of knowledge, as well as an understanding of how this knowledge relates to a framework,
  • students benefit from explicitly monitoring and taking control of their own learning.

And if you look closer, it is anything but enquiry with minimal guidance:

This study does show that much more is possible between the sometimes extreme poles in educational discussions.

Leave a comment

Filed under Education, Research

The fewer women in an entering class, the less likely they’ll stay in doctoral STEM programs

Another study on women in STEM with some interesting insights. While much of it suggests a causal relation, it’s still correlation.

From the press release:

Many women in doctoral degree programs in fields like engineering and physics are in a class of their own – and that’s not a good thing.

A new study found that the fewer females who enter a doctoral program at the same time, the less likely any one of them will graduate within six years.

In the worst-case scenario – where there’s just one woman in a new class – she is 12 percentage points less likely to graduate within six years than her male classmates, the study found.

However, for each additional 10 percent of women in a new class, that gender gap in on-time graduation rates closes by more than 2 percentage points.

The findings suggest that the “female-friendliness” of doctoral programs may play a key role in the gender gap in STEM (Science, Technology, Engineering and Mathematics) fields.

“It has been nearly impossible to quantify the climate for women in male-dominated STEM fields,” said Valerie Bostwick, co-author of the study and a post-doctoral researcher in economics at The Ohio State University.

“But our data gave us a unique opportunity to try to measure what it is like for women in STEM. What we found suggests that if there are few or no other women in your incoming class, it can make it more difficult to complete your degree.”

Bostwick conducted the research with Bruce Weinberg, professor of economics at Ohio State. Their results will be published Monday, Sept. 17 on the website of the National Bureau of Economic Research.

They used a new data set that previously had not been available to researchers. They linked transcript records from all public universities in Ohio to data from the UMETRICS project, which provides information on students supported by federal research grants.

A key advantage of this data is that it shows when and if students drop out – something that most data sets on graduate students don’t show.

“Most datasets are based on students who graduate – they don’t see you if you don’t get your degree,” Bostwick said. “That makes it impossible to find out why students drop out.”

This study examined all 2,541 students who enrolled in 33 graduate programs at six Ohio public universities between 2005 and 2016.

Overall, the average incoming class of doctoral programs included about 17 students and was about 38 percent female. But there was wide variation in class sizes and the percentage of female students.

The researchers separated the programs into those that were typically male and typically female. Typically male programs (including chemical engineering, computer science and physics) were those that were less than 38.5 percent female.

In typically male programs, the average number of women who joined a class in any particular year was less than five.

The study shows the importance for women of having a support system of other women in their entering class, Weinberg said.

A woman joining a class that was more male than typical for her doctoral program was about 7 percent less likely to graduate within six years than were her male peers.

“But if there were more women than average in the program, that graduation gap goes away,” Weinberg said.

Findings showed that when women dropped out of male-dominated programs, they usually did it in the first year. Women who joined a doctoral class with no other females were 10 percentage points more likely to drop out in that first year.

The researchers looked at two potential reasons why women may be dropping out: research funding and grades.

If female students were less likely to obtain research funding than their male peers, that could be an important reason why they’re failing to finish. But the study found no real differences in funding for men and women.

Results did show that women had slightly lower grades than men when they were in male-dominated classes. Women who joined a class with no other females had first-term GPAs that were 0.11 grade points lower than their male peers.

“That’s not enough to make a big difference,” Bostwick said. “We estimate that grades could not explain more than a quarter of the difference between the number of women and men who graduate within six years.”

Bostwick said that if grades or research funding are not the main reason for why women are not completing their STEM degrees, that suggests the reason must be something that can’t be directly measured: the academic climate for women.

“We can only speculate about what it is in the climate that is making it more difficult for women,” Bostwick said.

“It may be hard to feel like you belong when you don’t see other women around you. There may be subtle discrimination. We don’t know. But it highlights the fact that women need support, particularly if they are the only ones entering a doctoral class. They need to know about resources that could help them, particularly in that first key year.”

Abstract of the study:

We study the effects of peer gender composition, a proxy for female-friendliness of environment, in STEM doctoral programs on persistence and degree completion. Leveraging unique new data and quasi-random variation in gender composition across cohorts within programs, we show that women entering cohorts with no female peers are 11.9pp less likely to graduate within 6 years than their male counterparts. A 1 sd increase in the percentage of female students differentially increases the probability of on-time graduation for women by 4.6pp. These gender peer effects function primarily through changes in the probability of dropping out in the first year of a Ph.D. program and are largest in programs that are typically male-dominated.
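
To get a feel for these numbers, here is a rough sketch that treats the press-release figures (a gap of about 12 percentage points when a woman has no female peers, closing by a bit more than 2 points for every additional 10% of women) as a simple linear relationship. The linear form and the cut-off at zero are my own simplifications, not the authors’ regression model:

```python
# Back-of-the-envelope reading of the press-release numbers (not the authors' model).
def approx_graduation_gap(pct_women_in_class: float) -> float:
    """Rough gap (percentage points) in six-year graduation rates, women vs. men."""
    gap = 12.0 - 0.2 * pct_women_in_class   # ~2 points closed per additional 10% of women
    return max(gap, 0.0)                    # per the article, the gap disappears rather than reverses

for pct in (0, 20, 40, 60):
    print(f"{pct:>2}% women in the entering class -> gap of roughly {approx_graduation_gap(pct):.0f} pp")
```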

Leave a comment

Filed under Education, Research

Funny on Sunday: Storks and Babies

Leave a comment

Filed under Funny

The 2 presentations I gave at #rEDPret

Today I gave two presentations in Pretoria, South Africa, for researchED.

The first one was about Urban Myths about Learning and Education:

And I also did one on my new book The Ingredients for Great Teaching:

Leave a comment

Filed under Book, On the road

What if this study is correct and believing in neuromyths doesn’t matter?

There is an interesting new study published in Frontiers suggesting that belief in neuromyths doesn’t seem to matter, as the best teachers believe in neuromyths as much as regular teachers do. You can check the study here and read a good analysis by Christian Jarrett at BPS Digest here. OK, I want to add maybe just one thing to the analysis. The researchers picked teachers who had been selected as winners of best-teacher elections. The authors acknowledge this is a weak spot, as we don’t know how those teachers were selected. If you read the new book by Dylan Wiliam, you will discover how it’s almost impossible to find out which teachers are actually really good and which ones are doing a bad job. It’s hard to tell the difference between a bad teacher having a good day and a great teacher having a bad day.

It may surprise you that at first I really hoped this study was correct, and for several reasons, such as:

  • it would make my life much easier, as I could stop writing about myths and move on,
  • our children would have great teachers even if those teachers believe in nonsense.

But then I remembered that previous research has shown over and over again that people who are really interested in the brain are more easily caught by neuromyths. So it seems not implausible that really good teachers simply look for a lot of things that may help them become even better teachers. Which is nice, and I think actually the case.

But then I suddenly realized how dangerous this result could potentially be. If it were correct, it could also mean that whatever we teach our teachers has little impact. In that case, what about teacher training? The sad thing is, if you look at the work by John Hattie, there is sometimes a case to be made for that. But it could maybe also mean that some people can teach and others just can’t… by nature. Because their knowledge doesn’t make much of a difference.

Of course it’s all a bit more complicated than that. There are probably often big differences between what people think and how they act, and even more: sometimes a teacher will act the same way whether or not they believe a myth, because the action is the same but the reasoning behind it differs.

But I do want to argue that the authors of the study have overlooked a potential danger of neuromyths. Teaching those myths often takes up valuable professional development and teacher training time, time that isn’t spent on effective methods. Another possible explanation of the results could well be that even the best teachers don’t know these effective techniques. In that case there is still a lot to gain. Which again is good news. Well, kind of.

In the meantime I need to get back to writing our second book on myths about learning and education.

2 Comments

Filed under Book, Myths, Research, Review

What often makes discussions about education so difficult is what we all have in common

There are always fierce discussions about education on Twitter, but this also happens outside the social media bubble. Some of those discussions can end up in real-life cold wars. At a conference a few years back I experienced this strange situation: I was talking with person A when a person B joined us. The three of us were all new to each other. Person B introduced himself and named the institute he works for. Person A immediately stated that he didn’t want to talk to person B because of their views on education. I was left flabbergasted.

The strange thing is that most people discussing education often have more in common than they themselves might think.

  • we are all progressive, as in: we all want our children to progress,
  • we all hate the idea that social background determines your future,
  • we all want the best for children with disabilities.

Where it often goes wrong is when we start discussing how those common goals should be achieved, with two or more sides claiming that the other side(s) don’t want the best for our children, which in turn infuriates the other side(s), fueling further debate that will often miss the point. Oh, and if you try to be neutral, nuanced or critical towards everything, you end up walking a very thin line.

1 Comment

Filed under Education

A study prone to be misunderstood: a possible link between social media groups and academic performance

Show me which groups you take part in on social media or which pages you like, and I’ll tell you how well you’ll do in school. Well, that could be the interpretation of this study, but a couple of things:

  • correlation doesn’t mean causation (!)
  • being part of certain social media groups means a higher risk of performing worse, but not necessarily,
  • nor does the reverse necessarily hold!
  • And no: it’s not a good idea to check the social media profiles of your students – privacy!!!

Now that you know this, do read on. From the press release:

High school students’ membership in certain social media groups can be used to predict their academic performance, as demonstrated by Ivan Smirnov, junior research fellow at HSE’s Institute of Education. The analysis of school students’ membership in groups and communities was used to detect low-performing and high-performing students. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/viewFile/17798/17027

Teenagers’ performance at school can be evaluated based on their digital footprint and their activities in social media, according to the researcher. High-achieving children usually subscribe to pages with scientific and cultural content. Lower-achieving students are more interested in online humour and horoscopes.

The researcher found the students’ academic performance to correlate with their social media interests on the basis of the Russian national longitudinal study ‘Trajectories in Education and Careers’. The sample included 4,400 students. Their knowledge was measured using PISA, a widely recognized international study. Social media activity was studied using VK, the most popular social network among young people in Russia. About 4,500 groups were selected. The study participants had 54 subscriptions on average.

The PISA study, which is conducted by the Organisation for Economic Co-operation and Development (OECD), allows researchers to assess the level of reading and mathematical and natural science skills and knowledge among 15-year-olds.

The list of social media subscriptions reflects the real interests of the school children. Obviously, it does not reflect the complete range of their interests, since it is possible to enter the groups as an ‘invisible’ user, without leaving any digital footprint. Nevertheless, social media profiles are a good indicator, and this source of information has great potential for educational studies. Personal pages, posts, photos and comments made by social media users provide a lot of information. Researchers can analyse a person’s lifestyle and psychological profile by using this digital footprint. For example, researchers found out that a person’s demographics (ethnic identity, gender and income level) can be forecasted following the analysis of their tweets, visual information from their profile, posts, photos of the neighbourhood, etc. According to research, behavior on social media says a lot about an individual’s character and intelligence.

Social media can also be used to ‘test’ academic performance. ‘We have created a simple model that predicts PISA results based on students’ subscriptions to certain groups,’ Ivan Smirnov explained. The results show that there is a strong academic, knowledge-related aspect of teenagers’ online preferences.

Ivan Smirnov’s study revealed segregation among teenagers in terms of their interests and in correlation with academic performance. Lower-achieving students are usually interested in horoscopes and jokes. Higher-achieving students visit pages on science, technology, books, and films more often. One could assume that this influences their performance at school and general knowledge. But there is no such data. ‘We do not know whether subscriptions impact school performance, but I tend to think that they do not,’ the researcher commented. ‘There are much stronger factors, including predisposition, family resources, level of the school, and so on.’ The author mentioned two effects.

Firstly, ‘those who perform better tend to choose something educational, rather than entertaining, online.’ Sadly, the low achievers, who particularly ‘need support, are surrounded by horoscopes and jokes’. Secondly, if we take a closer look at the groups, it’s clear that ‘their content is not educational at all’. This means that the difference is due to ‘self-identity, and that it probably serves as a signal, since the groups in question are public,’ the researcher emphasized.

The researcher compared the real PISA results and the grades calculated on the basis of the digital footprint. The model was found to predict the teenagers’ achievements very accurately. ‘Its ability to correctly identify stronger and weaker students is 90% for mathematics, 92% for natural sciences, and 94% for reading,’ the author clarified. Two groups of students were investigated: those who don’t even have a ‘basic second level, which is necessary for survival in the contemporary world, according to the OECD’, and those who achieve one of the two highest levels (fifth or sixth).
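
Out of curiosity, this is roughly what such a model could look like in practice: a minimal sketch (not the author’s code) of a logistic regression that separates low- from high-performing students using only binary “subscribed to this public page” features. All data below is synthetic, and the group counts and subscription probabilities are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_students, n_groups = 4400, 4500                        # orders of magnitude from the press release

y = rng.binomial(1, 0.5, n_students)                     # 1 = high performer, 0 = low performer
X = rng.binomial(1, 0.012, size=(n_students, n_groups))  # roughly a few dozen subscriptions each

# Pretend the first 100 pages are science/art (more popular with high performers)
# and the next 100 are humor/horoscopes (more popular with low performers).
p_science = np.where(y == 1, 0.05, 0.01)
p_fun = np.where(y == 1, 0.01, 0.05)
X[:, :100] = rng.random((n_students, 100)) < p_science[:, None]
X[:, 100:200] = rng.random((n_students, 100)) < p_fun[:, None]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC on held-out students: {auc:.2f}")            # the paper itself reports AUC = 0.9 on real data
```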

Abstract of the study:

The Programme for International Student Assessment (PISA) is an influential worldwide study that tests the skills and knowledge in mathematics, reading, and science of 15-year-old students. In this paper, we show that PISA scores of individual students can be predicted from their digital traces. We use data from the nationwide Russian panel study that tracks 4,400 participants of PISA and includes information about their activity on a popular social networking site. We build a simple model that predicts PISA scores based on students’ subscriptions to various public pages on the social network. The resulting model can successfully discriminate between low- and high-performing students (AUC = 0.9). We find that top-performing students are interested in pages related to science and art, while pages preferred by low-performing students typically concern humor and horoscopes. The difference in academic performance between subscribers to such public pages could be equivalent to several years of formal schooling, indicating the presence of a strong digital divide. The ability to predict academic outcomes of students from their digital traces might unlock the potential of social media data for large-scale education research.

1 Comment

Filed under Education, Research, Social Media

New meta-analysis begs: Don’t throw away your printed books in education

I just found a new meta-analysis soon to be published in which Pablo Delgado, Cristina Vargas, Rakefet Ackerman & Ladislao Salmerón examine the effects of reading media on reading comprehension. Well, the title gives away the conclusion, I guess.

But this is the longer version:

The results of the two meta-analyses in the present study yield a clear picture of screen inferiority, with lower reading comprehension outcomes for digital texts compared to printed texts, which corroborates and extends previous research (Kong et al., 2018; Singer & Alexander, 2017b; Wang et al. 2007). These results were consistent across methodologies and theoretical frameworks.

And while the effects are relatively low, the researchers do warn:

Although the effect sizes found for media (-.21) are small according to Cohen’s guidelines (1988), it is important to interpret this effect size in the context of reading comprehension studies. During elementary school, it is estimated that yearly growth in reading comprehension is .32 (ranging from .55 in grade 1, to .08 in grade 6) (Luyten, Merrel & Tymms, 2017). Intervention studies on reading comprehension yield a mean effect of .45 (Scammacca et al., 2015). Thus, the effects of media are relevant in the educational context because they represent approximately 2/3 of the yearly growth in comprehension in elementary school, and 1/2 of the effect of remedial interventions.
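
The arithmetic behind those fractions is easy to check with the effect sizes quoted above:

```python
# Screen-inferiority effect set against yearly comprehension growth and typical
# intervention effects (numbers taken from the quote above).
media_effect = 0.21
yearly_growth = 0.32
intervention_effect = 0.45

print(f"Share of a year's growth in comprehension: {media_effect / yearly_growth:.2f}")       # ~0.66, about 2/3
print(f"Share of a typical remedial intervention:  {media_effect / intervention_effect:.2f}")  # ~0.47, about 1/2
```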

The analysis also has some clear practical consequences:

A relevant moderator found for the screen inferiority effect was time frame. This finding sheds new light on the mixed results in the existing literature. Consistent with the findings by Ackerman and Lauterman (2012) with lengthy texts, mentioned above, Sidi et al. (2017) found that even when performing tasks involving reading only brief texts and no scrolling (solving challenging logic problems presented in an average of 77 words), digital-based environments harm performance under time pressure conditions, but not under a loose time frame. In addition, they found a similar screen inferiority when solving problems under time pressure and under free time allocation, but framing the task as preliminary rather than central. Thus, the harmful effect of limited time on digital-based work is not limited to reading lengthy texts. Moreover, consistently across studies, Ackerman and colleagues found that people suffer from greater overconfidence in digital-based reading than in paper-based reading under these conditions that warrant shallow processing.

Our findings call to extend existing theories about self-regulated learning (see Boekaerts, 2017, for a review). Effects of time frames on self-regulated learning have been discussed from various theoretical approaches. First, a metacognitive explanation suggests that time pressure encourages compromise in reaching learning objectives (Thiede & Dunlosky, 1999). Second, time pressure has been associated with cognitive load. Some studies found that time pressure increased cognitive load and harmed performance (Barrouillet, Bernardin, Portrat, Vergauwe, & Camos, 2007). However, others suggested that it can generate a germane (“good”) cognitive load by increasing task engagement (Gerjets & Scheiter, 2003). In these theoretical discussions, the potential effect of the medium in which the study is conducted has been overlooked. We see the robust finding in the present meta-analyses about the interaction between the time frame and the medium as a call to theorists to integrate the processing style adapted by learners in specific study environments into their theories

What I really appreciate is that the researchers also checked for publication bias, and, good news, the different indicators that were used suggested no risk of publication bias.

There is only a small bit of irony… I read the study online, and you are reading this online too.

Abstract of the meta-analysis:

With the increasing dominance of digital reading over paper reading, gaining understanding of the effects of the medium on reading comprehension has become critical. However, results from research comparing learning outcomes across printed and digital media are mixed, making conclusions difficult to reach. In the current meta-analysis, we examined research in recent years (2000-2017), comparing the reading of comparable texts on paper and on digital devices. We included studies with between-participant (n = 38) and within-participant designs (n = 16) involving 171,055 participants. Both designs yielded the same advantage of paper over digital reading (Hedges’ g = -.21; dc = -.21). Analyses revealed three significant moderators: (1) time frame: the paper-based reading advantage increased in time-constrained reading compared to self-paced reading; (2) text genre: the paper-based reading advantage was consistent across studies using informational texts, or a mix of informational and narrative texts, but not on those using only narrative texts; (3) publication year: the advantage of paper-based reading increased over the years. Theoretical and educational implications are discussed.
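
For readers unfamiliar with the metric in the abstract: Hedges’ g is a standardized mean difference with a small-sample correction. A minimal sketch of the textbook formula, using made-up scores rather than data from the meta-analysis:

```python
import math

def hedges_g(mean_digital, sd_digital, n_digital, mean_paper, sd_paper, n_paper):
    """Standardized difference (digital minus paper), corrected for small samples."""
    pooled_sd = math.sqrt(((n_digital - 1) * sd_digital**2 + (n_paper - 1) * sd_paper**2)
                          / (n_digital + n_paper - 2))
    d = (mean_digital - mean_paper) / pooled_sd            # Cohen's d with a pooled SD
    correction = 1 - 3 / (4 * (n_digital + n_paper) - 9)   # Hedges' small-sample correction
    return d * correction

# Hypothetical comprehension scores from a single study (made-up numbers):
print(round(hedges_g(72.0, 10.0, 40, 74.5, 10.0, 40), 2))  # negative = screens scored lower
```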

Leave a comment

Filed under Education, Research, Review, Technology

How to better report on effect sizes in meta-analyses?

Yesterday I had to miss the debate on meta-analyses at #rED18, but I did read the post by Robert Coe.

It’s true there has been quite a stir about Hattie and meta-analyses lately, and to me there are different aspects to the discussion.

I did notice that when effect sizes are shown in a different way, people can spot the complexity that’s often being obscured.

Compare the way Hattie notes effect sizes in his infamous lists of effects, e.g.:

And compare that with this graph, taken from Dietrichson et al. (2017):

In this second example you can see the range of effects hidden behind the average effect size. It’s still an abstraction of a more complex reality, but it invites interested readers to check what makes the differences between the effect sizes reported for small-group instruction so big, and it shows that while coaching and mentoring students can have a positive effect, there is also a danger of the opposite.
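
To make the contrast concrete, here is a minimal plotting sketch (with made-up numbers, not the actual data from Dietrichson et al., 2017) of the forest-plot style that shows the interval around each effect rather than a single ranked average:

```python
import matplotlib.pyplot as plt

# Hypothetical effect sizes with 95% confidence intervals, for illustration only.
interventions = ["Small-group instruction", "Tutoring", "Coaching/mentoring", "Feedback"]
effect = [0.35, 0.30, 0.10, 0.25]
ci_low = [0.05, 0.15, -0.20, 0.05]
ci_high = [0.65, 0.45, 0.40, 0.45]

fig, ax = plt.subplots(figsize=(6, 3))
y = range(len(interventions))
ax.hlines(y, ci_low, ci_high, colors="grey")      # the range, not just the mean
ax.plot(effect, y, "o", color="black")            # the point estimate
ax.axvline(0, linestyle="--", color="red")        # anything left of this line is a negative effect
ax.set_yticks(list(y))
ax.set_yticklabels(interventions)
ax.set_xlabel("Effect size (standardized mean difference)")
plt.tight_layout()
plt.show()
```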

Reference:

  • Dietrichson, J., Bøg, M., Filges, T., & Klint Jørgensen, A. M. (2017). Academic interventions for elementary and middle school students with low socioeconomic status: A systematic review and meta-analysis. Review of Educational Research, 87(2), 243-282.

 

Leave a comment

Filed under Research, Review