Category Archives: Review

How can schools optimize support for children with ADHD? A new review

With all the craze about meta-analyses, people sometimes forget that a good systematic review can be even more helpful. This new systematic review, led by the University of Exeter and even including some meta-analysis, seems no exception, as it aims to give guidance on how schools can best support children with ADHD to improve symptoms and maximise their academic outcomes.

From the press release:

The study, led by the University of Exeter and involving researchers at the EPPI-Centre (University College London), undertook a systematic review which analysed all available research into non-medication measures to support children with ADHD in schools. Published in Review of Education, the paper found that interventions which include one-to-one support and a focus on self-regulation improved academic outcomes.

Around five per cent of children have ADHD, meaning most classrooms will include at least one child with the condition. They struggle to sit still, focus their attention and to control impulses much more than ordinary children of the same age. Schools can be a particularly challenging setting for these children, and their difficulty in waiting their turn or staying in their seat impacts peers and teachers. Research shows that medication is effective, but does not work for all children, and is not acceptable to some families.

The research was funded by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care (CLAHRC) South West Peninsula – or PenCLAHRC. The team found 28 randomised control trials on non-drug measures to support children with ADHD in schools. In a meta-analysis, they analysed the different components of the measures being carried out to assess the evidence for what was most effective.

The studies varied in quality, which limits the confidence the team can have in their results. They found that important aspects of successful interventions for improving the academic outcomes of children are when they focus on self-regulation and are delivered in one-to-one sessions.

Self-regulation is hard for children who are very impulsive and struggle to focus attention. Children need to learn to spot how they are feeling inside, to notice triggers and avoid them if possible, and to stop and think before responding. This is much harder for children with ADHD than for most other children, but these are skills that can be taught and learned.

The team also found some promising evidence for daily report cards. Children are set daily targets which are reviewed via a card that the child carries between home and school and between lessons in school. Rewards are given for meeting targets. The number of studies looking at this was lower, and their findings did not always agree. But using a daily report card is relatively cheap and easy to implement. It can encourage home-school collaboration and offers the flexibility to respond to a child’s individual needs.

Tamsin Ford, Professor of Child Psychiatry at the University of Exeter Medical School, said: “Children with ADHD are of course all unique. It’s a complex issue and there is no one-size-fits-all approach. However, our research gives the strongest evidence to date that non-drug interventions in schools can support children to meet their potential in terms of academic and other outcomes. More and better quality research is needed but in the meantime, schools should try daily report cards and try to increase children’s ability to regulate their emotions. These approaches may work best for children with ADHD by one-to-one delivery.”

Abstract of the full paper:

Non-pharmacological interventions for attention-deficit/hyperactivity disorder are useful treatments, but it is unclear how effective school-based interventions are for a range of outcomes and which features of interventions are most effective. This paper systematically reviews randomized controlled trial evidence of the effectiveness of interventions for children with ADHD in school settings. Three methods of synthesis were used to explore the effectiveness of interventions, whether certain types of interventions are more effective than others and which components of interventions lead to effective academic outcomes. Twenty-eight studies (n=1,807) were included in the review. Eight types of interventions were evaluated and a range of different ADHD symptoms, difficulties and school outcomes were assessed across studies. Meta-analysis demonstrated beneficial effects for interventions that combine multiple features (median effect size g=0.37, interquartile range 0.32, range 0.09 to 1.13) and suggest some promise for daily report card interventions (median g=0.62, IQR=0.25, range 0.13 to 1.62). Meta-regression analyses did not give a consistent message regarding which types of interventions were more effective than others. Finally, qualitative comparative analysis demonstrated that self-regulation and one-to-one intervention delivery were important components of interventions that were effective for academic outcomes. These two components were not sufficient though; when they appeared with personalisation for individual recipients and delivery in the classroom, or when interventions did not aim to improve child relationships, interventions were effective. This review provides updated information about the effectiveness of non-pharmacological interventions specific to school settings and gives tentative messages about important features of these interventions for academic outcomes.

 

Leave a comment

Filed under Education, Research, Review

Best Evidence in Brief: Effective programs in elementary math

There is a new Best Evidence in Brief and this time I pick this study:

Marta Pellegrini from the University of Florence and Cynthia Lake, Amanda Inns, and Robert E. Slavin from our own Johns Hopkins Center for Research and Reform in Education have released  a new report on effective programs in elementary math. The report reviews research on the mathematics achievement outcomes of all programs with at least one study meeting the inclusion criteria of the review. A total of 78 studies were identified that evaluated 61 programs in grades K-5.
The studies were very high in quality, with 65 (83%) randomized and 13 (17%) quasi-experimental evaluations. Key findings were as follows:
  • Particularly positive outcomes were found for tutoring programs.
  • One-to-one and one-to-small group models had equal impacts, as did teachers and paraprofessionals as tutors.
  • Technology programs showed modest positive impacts.
  • Professional development approaches focused on helping teachers gain in understanding of math content and pedagogy had no impact on student achievement, but more promising outcomes were seen in studies focused on instructional processes, such as cooperative learning.
  • Whole-school reform, social-emotional approaches, math curricula, and benchmark assessment programs found few positive effects, although there were one or more effective individual approaches in most categories.
The findings suggest that programs emphasizing personalization, engagement, and motivation are most impactful in elementary mathematics instruction, while strategies focused on textbooks, professional development for math knowledge or pedagogy, and other strategies that do not substantially impact students’ daily experiences have little impact.

Leave a comment

Filed under Education, Research, Review

Yes, retrieval and testing do work, but what limits the effect? Insights from a new meta-analysis

A new meta-analysis confirms that memory retrieval can be beneficial for learning, but it also shows that there are limits:

  • The effect depends on the frequency and difficulty of the questions.
  • Simply asking a question is not enough; students must respond to see a positive effect on learning.

You probably now want to know how much is too much. Well, you’re in for a bit of a disappointment I’m afraid, as you can learn from the press release, which imho explains it better than the original paper:

“Frequency is a critical factor. There appears to be a trade-off in how often you test students,” Chan said. “If I lecture nonstop throughout class, this lessens their ability to learn the material. However, too many questions, too often, can have a detrimental effect, but we don’t yet know exactly why that happens or how many questions is too many.”

The answer to that question may depend on the length of the lecture and the type or difficulty of the material, Chan said. Given the different dynamics of a class lecture, it may not be possible to develop a universal lecture-to-question ratio. Regardless, Chan says testing students throughout the lecture is a simple step instructors at any level and in any environment can apply to help students learn.

“This is a cheap, effective method and anyone can implement it in their class,” he said. “You don’t need to give every student an iPad or buy some fancy software – you just need to ask questions and have students answer them in class.”

Chan, Christian Meissner, a professor of psychology at Iowa State; and Sara Davis, a postdoctoral fellow at Skidmore College and former ISU graduate student, examined journal articles from the 1970s to 2016 detailing more than 150 different experiments for their analysis. The researchers looked at what factors influenced the magnitude of this effect, when it happens and when the effect is reversed.

Why testing helps

There are several explanations as to why testing students is beneficial for new learning. The researchers evaluated four main theories in the meta-analysis to examine the strengths and weaknesses of these explanations based on the existing research. The data strongly supported what the researchers call the integration theory.

“This theory claims that testing enhances future learning by facilitating the association between information on the test and new, especially related, information that is subsequently studied, leading to spontaneous recall of the previously tested information when they learn related information,” Meissner said. “When this testing occurs, people can better tie new information with what they have learned previously, leading them to integrate the old and the new.”

Learning new information requires an encoding process, which is different from the process needed to retrieve that information, the researchers explained. Students are forced to switch between the two when responding to a question. Changing the modes of operation appears to refocus attention and free the brain to do something different.

A majority of the studies in the analysis focused on college students, but some also included older adults, children and people with traumatic brain injuries. The researchers were encouraged to find that testing could effectively enhance learning across all these groups.

“Memory retrieval can optimize learning in situations that require people to maintain attention for an extended period of time. It can be used in class lectures as well as employee training sessions or online webinars,” Davis said. “Future research could examine factors that can maximize this potential.”

Abstract of the meta-analysis:

A growing body of research has shown that retrieval can enhance future learning of new materials. In the present report, we provide a comprehensive review of the literature on this finding, which we term test-potentiated new learning. Our primary objectives were to (a) produce an integrative review of the existing theoretical explanations, (b) summarize the extant empirical data with a meta-analysis, (c) evaluate the existing accounts with the meta-analytic results, and (d) highlight areas that deserve further investigations. Here, we identified four nonexclusive classes of theoretical accounts, including resource accounts, metacognitive accounts, context accounts, and integration accounts. Our quantitative review of the literature showed that testing reliably potentiates the future learning of new materials by increasing correct recall or by reducing erroneous intrusions, and several factors have a powerful impact on whether testing potentiates or impairs new learning. Results of a metaregression analysis provide considerable support for the integration account. Lastly, we discuss areas of under-investigation and possible directions for future research.

1 Comment

Filed under Education, Research, Review

Nothing new: personalized education (but does the article add something new?)

Nihil sub sole novum: we may think the idea of personalized education is new, although defenders of the idea such as Zuckerberg and Gates often refer to a study by Benjamin Bloom from decades ago. But in a new paper published in Nature, David Dockterman argues that the idea is much older than that. If that’s the case, why didn’t it catch on, and more importantly: why would it now?

The article pleads for a new kind of pedagogy – and of course that got me triggered – but then seems to make many of the mistakes others thinking about reform in education have made before, by not being critical enough about both the need for personalization and its possible consequences. Biesta describes three tasks of education: personal development, qualification and socialization. The author does mention something similar by stating:

It isn’t enough to scale an instructional system around a single aspect of learner need, like content competence or social acceptance. A robust personalized learning model must respond to whatever needs matter for each individual learner.

But the starting point is the individual. That hides a world view. Nothing wrong with that, but when discussing this one needs to recognise and acknowledge it. It might also partly explain why some reforms have been failing over and over again…

Abstract of the paper:

Current initiatives to personalize learning in schools, while seen as a contemporary reform, actually continue a 200+ year struggle to provide scalable, mass, public education that also addresses the variable needs of individual learners. Indeed, some of the rhetoric and approaches reformers are touting today sound very familiar in this historical context. What, if anything, is different this time? In this paper I provide a brief overview of historical efforts to create a scaled system of education for all children that also acknowledged individual learner variability. Through this overview I seek patterns and insights to inform and guide contemporary efforts in personalized learning.

1 Comment

Filed under Education, Review

What if this study is correct and believing in neuromyths doesn’t matter?

There is an interesting new study published in Frontiers on how belief in neuromyths doesn’t seem to matter, as the best teachers believe in neuromyths as much as regular teachers do. You can check the study here and read a good analysis by Christian Jarrett at BPS Digest here. Ok, I want to add maybe just one thing to the analysis. The researchers picked teachers who had been selected as winners of best-teacher elections. The authors acknowledge this is a weak spot, as we don’t know how those teachers were selected. If you read the new book by Dylan Wiliam, you will discover how it’s almost impossible to find out which teachers are actually really good and which ones are doing a bad job. It’s hard to observe the difference between a bad teacher having a good day and a great teacher having a bad day.

It may surprise you that at first I really hoped this study was correct, for several reasons, such as:

  • it would make my life much easier, as I could stop writing about myths and move on,
  • our children would have great teachers even if they believe in nonsense.

But then I remembered that previous research has shown over and over again that people who are really interested in the brain are more easily taken in by neuromyths. So it seems not implausible that really good teachers simply look for a lot of material that may help them become even better teachers. Which is nice, and I think actually the case.

But then I suddenly realized how dangerous this result could potentially be. Imagine it is correct: it could also mean that whatever we teach our teachers has little impact. In that case, what about teacher training? The sad thing is, if you look at the work by John Hattie, there is sometimes a case to be made for that. But it could maybe also mean that some people can teach and others just can’t… by nature. Because their knowledge doesn’t make much of a difference.

Of course it’s all a bit more complicated than that. There is probably often a big difference between what people think and how they act, and more than that: sometimes a teacher will act the same way whether or not they believe a myth, because the action is identical but the reasoning behind it differs.

But I do want to argue that the authors of the study have overlooked a potential danger of neuromyths. Teaching those myths often takes up valuable time in professional development and teacher training, time that isn’t spent on effective methods. Another possible explanation of the results could well be that even the best teachers don’t know these excellent techniques. In that case there is still a lot to gain. Which again is good news. Well, kind of.

In the meantime I need to get back to writing our second book on myths about learning and education.

2 Comments

Filed under Book, Myths, Research, Review

New meta-analysis begs: Don’t throw away your printed books in education

I just found a new meta-analysis, soon to be published, in which Pablo Delgado, Cristina Vargas, Rakefet Ackerman and Ladislao Salmerón examine the effects of reading media on reading comprehension. Well, the title gives away the conclusion, I guess.

But this is the longer version:

The results of the two meta-analyses in the present study yield a clear picture of screen inferiority, with lower reading comprehension outcomes for digital texts compared to printed texts, which corroborates and extends previous research (Kong et al., 2018; Singer & Alexander, 2017b; Wang et al. 2007). These results were consistent across methodologies and theoretical frameworks.

And while the effects are relatively small, the researchers do warn:

Although the effect sizes found for media (-.21) are small according to Cohen’s guidelines (1988), it is important to interpret this effect size in the context of reading comprehension studies. During elementary school, it is estimated that yearly growth in reading comprehension is .32 (ranging from .55 in grade 1, to .08 in grade 6) (Luyten, Merrel & Tymms, 2017). Intervention studies on reading comprehension yield a mean effect of .45 (Scammacca et al., 2015). Thus, the effects of media are relevant in the educational context because they represent approximately 2/3 of the yearly growth in comprehension in elementary school, and 1/2 of the effect of remedial interventions.
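To make the quoted comparison concrete, here is a minimal back-of-the-envelope check of those ratios (a sketch in Python, using only the standardized effect sizes quoted above):

```python
# Standardized effect sizes quoted above (absolute values):
media_effect = 0.21         # screen vs. paper reading comprehension
yearly_growth = 0.32        # estimated yearly growth in elementary school
intervention_effect = 0.45  # mean effect of reading-comprehension interventions

print(round(media_effect / yearly_growth, 2))        # ~0.66 -> roughly two thirds of a year's growth
print(round(media_effect / intervention_effect, 2))  # ~0.47 -> roughly half a remedial intervention's effect
```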

The analysis also has some clear practical consequences:

A relevant moderator found for the screen inferiority effect was time frame. This finding sheds new light on the mixed results in the existing literature. Consistent with the findings by Ackerman and Lauterman (2012) with lengthy texts, mentioned above, Sidi et al. (2017) found that even when performing tasks involving reading only brief texts and no scrolling (solving challenging logic problems presented in an average of 77 words), digital-based environments harm performance under time pressure conditions, but not under a loose time frame. In addition, they found a similar screen inferiority when solving problems under time pressure and under free time allocation, but framing the task as preliminary rather than central. Thus, the harmful effect of limited time on digital-based work is not limited to reading lengthy texts. Moreover, consistently across studies, Ackerman and colleagues found that people suffer from greater overconfidence in digital-based reading than in paper-based reading under these conditions that warrant shallow processing.

Our findings call to extend existing theories about self-regulated learning (see Boekaerts, 2017, for a review). Effects of time frames on self-regulated learning have been discussed from various theoretical approaches. First, a metacognitive explanation suggests that time pressure encourages compromise in reaching learning objectives (Thiede & Dunlosky, 1999). Second, time pressure has been associated with cognitive load. Some studies found that time pressure increased cognitive load and harmed performance (Barrouillet, Bernardin, Portrat, Vergauwe, & Camos, 2007). However, others suggested that it can generate a germane (“good”) cognitive load by increasing task engagement (Gerjets & Scheiter, 2003). In these theoretical discussions, the potential effect of the medium in which the study is conducted has been overlooked. We see the robust finding in the present meta-analyses about the interaction between the time frame and the medium as a call to theorists to integrate the processing style adapted by learners in specific study environments into their theories

What I really appreciate is that the researchers also checked for publication bias, and, good news, the different indicators they used suggested no risk of publication bias.

There is only a small bit of irony… I read the study online, and you are reading this online too.

Abstract of the meta-analysis:

With the increasing dominance of digital reading over paper reading, gaining understanding of the effects of the medium on reading comprehension has become critical. However, results from research comparing learning outcomes across printed and digital media are mixed, making conclusions difficult to reach. In the current meta-analysis, we examined research in recent years (2000-2017), comparing the reading of comparable texts on paper and on digital devices. We included studies with between-participant (n = 38) and within-participant designs (n = 16) involving 171,055 participants. Both designs yielded the same advantage of paper over digital reading (Hedges’ g = -.21; dc = -.21). Analyses revealed three significant moderators: (1) time frame: the paper-based reading advantage increased in time-constrained reading compared to self-paced reading; (2) text genre: the paper-based reading advantage was consistent across studies using informational texts, or a mix of informational and narrative texts, but not on those using only narrative texts; (3) publication year: the advantage of paper-based reading increased over the years. Theoretical and educational implications are discussed.
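For readers less familiar with the metric in the abstract: Hedges’ g is essentially Cohen’s d (the difference between two group means divided by their pooled standard deviation) with a small-sample correction. Below is a minimal sketch with made-up comprehension scores, purely to illustrate how a single study-level g is computed:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference between two groups, with Hedges'
    small-sample correction (illustrative sketch, not the meta-analytic aggregate)."""
    # Pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor
    return j * d

# Hypothetical scores: the digital group scores slightly lower than the paper group,
# which gives a negative g, as in the abstract above.
print(round(hedges_g(m1=24.0, s1=5.0, n1=40, m2=25.2, s2=5.0, n2=40), 2))  # ≈ -0.24
```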

Leave a comment

Filed under Education, Research, Review, Technology

How to better report on effect sizes in meta-analyses?

Yesterday I had to miss the debate on meta-analyses at #rED18, but I did read the post by Robert Coe.

It’s true there has been quite a stir about Hattie and meta-analyses lately, and to me there are different aspects to the discussion.

I did notice that when effect sizes are shown in a different way, people can spot the complexity that’s often being obscured.

Compare the way Hattie notes effect sizes in his infamous lists of effects, e.g.:

And compare that with this graph, taken from Dietrichson et al. (2017):

In this second example you can see the range of effects hidden behind the average effect size. It’s still an abstraction of a more complex reality, but it invites interested readers to check why the effect sizes reported for small-group instruction vary so widely, and it shows that while coaching and mentoring students can have a positive effect, there is also a danger of the opposite.
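As a small illustration of the point, here is a minimal sketch (with made-up effect sizes, not taken from any of the studies mentioned) of the kind of summary that exposes the spread hiding behind a single average:

```python
import statistics

# Hypothetical effect sizes from individual studies of one intervention type
# (made-up numbers, purely to illustrate the reporting format).
effects = [0.05, 0.12, 0.18, 0.25, 0.31, 0.40, 0.55, 0.90]

mean_es = statistics.mean(effects)
q1, median_es, q3 = statistics.quantiles(effects, n=4)  # quartiles

# A league table typically shows only the first line; the second line
# shows the spread that a single average hides.
print(f"average effect size: {mean_es:.2f}")
print(f"median: {median_es:.2f}, IQR: {q1:.2f}-{q3:.2f}, "
      f"range: {min(effects):.2f} to {max(effects):.2f}")
```

Even a one-line summary like this prompts the question of what drives the variation, instead of taking the average as the whole story.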

Reference:

  • Dietrichson, J., Bøg, M., Filges, T., & Klint Jørgensen, A. M. (2017). Academic interventions for elementary and middle school students with low socioeconomic status: A systematic review and meta-analysis. Review of Educational Research, 87(2), 243–282.

 

2 Comments

Filed under Research, Review

Scaling up of interesting educational projects (Best Evidence in Brief)

There is a new Best Evidence in Brief, this time with a very important paragraph at the end of this summary of an article on an early math program (italics by me):

Pre-K Mathematics is a supplementary mathematics curriculum for pre-k children. It focuses on the pre-k classroom and home learning environments of young children, especially those from families experiencing economic hardship. Activities aim to support mathematical development by providing learning opportunities to increase children’s informal mathematical knowledge.
In an article published in Evaluation Review, Jaime Thomas and colleagues report on a cluster-randomized controlled trial of the scale-up of Pre-K Mathematics in 140 schools in California (70 intervention schools, 70 control). The post-test measured outcomes on the Early Childhood Longitudinal Study, Birth Cohort Mathematics Assessment (ECLS-B) and the Test of Early Mathematics Ability (TEMA-3) at the end of the pre-k year. Results showed that Pre-K Mathematics had positive and significant effects, with an effect size of +0.30 on the ECLS-B and +0.23 on the TEMA-3.
The authors consider how these results differ from previous, smaller studies of the efficacy and effectiveness of Pre-K Mathematics. They find that effect sizes were usually larger in the earlier studies. As studies became larger, more heterogeneous, and less controlled, they tended to yield smaller results.

Leave a comment

Filed under Education, Research, Review

Why it’s improbable that adapting to learning styles will ever work (new review)

I just received notice of a preprint of a new review study on learning styles, and the authors add an interesting element to the learning styles discussion. Besides the knowledge that adapting to learning styles – and by extension multiple intelligences – doesn’t work and isn’t supported by evidence, they also explain why it’s improbable that this will ever be possible. I know, this isn’t entirely new, but it’s worth sharing. No, it’s not because they can look into the future; they base their argument on present knowledge and on the trends that can be seen in how the research has been evolving:

…from the discussion on the functioning of the brain, it is clear that learning styles violate the connectivity principle. Additionally, most of the evidence indicates that teaching in the styles preferred by students does not improve academic performance. However, only 14 studies deny this hypothesis (Cuevas & Dawson, 2018, Moser and Zumbach, 2018, Pashler, McDaniel, Rohrer, & Bjork, 2008), 7 prove it (Cuevas & Dawson, 2018; Moser & Zumbach, 2018) and 6 are nonconforming (Moser & Zumbach, 2018). Therefore, the trend of the evidence on learning styles is negative limited and since the construct does not show connectivity, it can be classified as an improbable phenomenon. Consequently, the recommendation made by Coffield, Moseley, Hall, & Ecclestone (2004, p. 140), of not basing pedagogical interventions on learning styles remains valid.

What do they mean by the connectivity principle?

This principle establishes that any theory that attempts to explain a phenomenon must consider previously confirmed empirical facts directly related to the phenomenon. In such a way that it does not contradict this verified knowledge.

Abstract from the preprint:

Learning styles are a widespread idea that has high levels of acceptance in education and psychology. The promises of adopting the construct range from gains in academic performance, to the development of respect for the self and others. Nevertheless, from a scientific perspective it remains highly controversial. Most studies indicate that matching teaching to the learning styles of students does not improve learning, and that their psychometric instruments do not show enough reliability and validity. In this sense, this paper investigated if the postulates of learning styles are consistent with the way the human brain processes information. Moreover, the trend of the accumulated evidence about learning styles was analyzed, using a simple algorithm, to determine if they are a proven, debatable, improbable or denied phenomenon. Results show: (1) that learning styles, along with the multiple intelligence theory and the left or right-brained hypothesis, are not compatible with what is currently known about the inner workings of the brain; (2) that the trend of the evidence, although still limited, does not favor learning styles; (3) that as a phenomenon, learning styles are classified as improbable.

 

2 Comments

Filed under Education, Myths, Research, Review

Book review: The Testing Charade: about the good, the bad and the really ugly of standardized testing

While I read this book, my oldest son was waiting for his grades. We had noticed him becoming steadily more nervous over the past days. But what do his grades tell us? After reading the new book by Daniel Koretz, you won’t be so sure those grades tell us anything besides how my son did on those tests.

Daniel Koretz has dedicated a large part of his academic career to the effects of high-stakes testing and standardized testing. While often used as synonyms, they are not the same. A ‘standardized test’ says something about the test itself: it is a test that is administered and scored in a consistent, or “standard”, manner. High-stakes testing says something about what the test is used for, e.g. admission to university. A third concept often incorrectly used as a synonym is the centralized test, which puts the focus on who organizes the test but says nothing about how standardized or how high-stakes it is. Standardized tests can be used at any age, and while high-stakes testing often brings final exams to mind, high-stakes tests also exist for any age.

To make this concrete, let’s use these concepts to discuss the French Baccalauréat. One could say it is centralized, as the tests are the same for all participants. The stakes of the Bac are reasonably high, as the results influence university admissions, but they are not as high as in e.g. the UK or some Asian countries, because not passing your Bac doesn’t necessarily mean the end of the line in higher education. And it can be argued that not all subjects in the Baccalauréat are standardized, as marking essays is very hard to standardize.

In his new book Koretz goes to great lengths to explain how testing in itself isn’t necessarily a bad thing. It can give both children and teachers information about the learning process. But when the stakes get too high, e.g. if the future of the children, the teachers or the schools depends on the results, a lot can go wrong. High stakes for children can mean, e.g. in South Korea, that if you don’t get maximum grades on the central exams, you won’t be able to reach for SKY, the acronym of the three top universities in Korea. Maximum grades are your ticket to a solid and great future; not having them means a much smaller chance of success.

High stakes for teachers can mean being sacked if certain results aren’t met, or being rewarded if your pupils do better than expected. Koretz describes how schools in the US sometimes get more money from their state if they perform better than the given targets, targets that are checked by administering standardized tests to pupils from different age groups. This can all seem reasonable if you want pupils, teachers and schools to do better, but Koretz warns in his book that you may end up with a lot of stress, even less learning, and fraud.

A central theme in the book is Campbell’s law. Donald T. Campbell stated that “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” What does this law mean for standardized testing? Let’s pick some examples from the book.

What happens if teachers know that chapter 5 will not be that important on a centralized math test? This can be for many different reasons, not necessarily because it is unimportant in real life, but maybe because it is more difficult to assess. There is a good chance that teachers will pay less attention to the content of chapter 5 and spend more time on e.g. chapter 4, which will be a major part of the test.

What happens within a subject, e.g. maths, can also happen between subjects. If reading and mathematics are key in high-stakes centralized exams, schools will often opt to spend less time on subjects such as history, arts or science until those subjects become important in those tests. In the book Koretz describes, rather anecdotally, how some schools even skipped recess for their children and closed the art rooms to replace them with math labs to prep their pupils for the tests.

Even worse, this can all lead to score inflation. Test scores may rise because of the strong focus of students, teachers and schools on those tests, but this doesn’t necessarily mean that students become better in the subject, only better at what is being tested. Koretz did interesting research on this in an unnamed state by comparing two tests on the same subject: the official state-organized test and a test the state had used previously. While scores on the new test had risen over the past three years, students did no better on the older test.

It gets really ugly when Koretz describes several examples of fraud, often committed by teachers and schools because their jobs and schools depend on it. Koretz describes this with a sense of compassion for the perpetrators, as they are often confronted with unreasonably high targets, impossible to meet in an honest way.

Most – not all – examples in Koretz’s book are taken from the United States, which may be a hurdle for readers outside the US. But what he describes is relevant for anybody thinking about educational reform who is looking to make pupils, teachers and schools more accountable: you may not get what you wished for. I’ve noticed that people who read this book tend to overlook the possible positive elements of standardized testing. Koretz also discusses the benefits of regular testing for the learning process; testing in this sense is a means of formative assessment. Testing should therefore be part of a larger, more holistic approach to education, without the high stakes that, as Campbell noted, can often corrupt education.

1 Comment

Filed under Education, Review