It’s a very popular idea, dating back to the theories and studies by Bandura: seeing violence teaches people to act violently. And more recently there was an American president linking computer games to school shootings. This new study shows that this may be unwarranted.
In this study by Kühn et al., published in Molecular Psychiatry, the researchers ran a randomized controlled trial with three groups:
- one group that played Grand Theft Auto V intensively for 2 months
- one group that played The Sims 3 intensively for 2 months
- one group that didn’t play games at all.
And what did the researchers find?
Within the scope of the present study we tested the potential effects of playing the violent video game GTA V for 2 months against an active control group that played the non-violent, rather pro-social life simulation game The Sims 3 and a passive control group. Participants were tested before and after the long-term intervention and at a follow-up appointment 2 months later. Although we used a comprehensive test battery consisting of questionnaires and computerised behavioural tests assessing aggression, impulsivity-related constructs, mood, anxiety, empathy, interpersonal competencies and executive control functions, we did not find relevant negative effects in response to violent video game playing. In fact, only three tests of the 208 statistical tests performed showed a significant interaction pattern that would be in line with this hypothesis. Since at least ten significant effects would be expected purely by chance, we conclude that there were no detrimental effects of violent video gameplay.
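The researchers’ “at least ten significant effects would be expected purely by chance” is straightforward multiple-comparisons arithmetic: at a 0.05 significance level, 208 tests should yield roughly 208 × 0.05 ≈ 10 false positives even if every null hypothesis is true. A minimal sketch in Python (the independence assumption here is mine; the study’s tests are correlated, so treat this as a back-of-the-envelope model):

```python
from math import comb

alpha = 0.05   # conventional significance level per test
n_tests = 208  # number of statistical tests reported in the study

# Expected number of "significant" results if every null hypothesis is true
expected_false_positives = alpha * n_tests  # about 10.4

# Probability of seeing 3 or fewer significant results by chance alone,
# modelling the tests as independent Bernoulli trials (a simplification:
# the study's tests are not fully independent)
p_three_or_fewer = sum(
    comb(n_tests, k) * alpha**k * (1 - alpha) ** (n_tests - k)
    for k in range(4)
)
print(expected_false_positives, round(p_three_or_fewer, 4))
```

Under this toy model, finding only three significant interactions is if anything *fewer* than chance alone would produce, which is the point the authors are making.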
Will this study end all discussions? No, I’m sure it won’t, even though this study is very relevant. It’s worth noting that the average age of the participants was 28, which is why I would suggest that a replication with younger participants would be a very good idea.
Abstract of the study:
It is a widespread concern that violent video games promote aggression, reduce pro-social behaviour, increase impulsivity and interfere with cognition as well as mood in its players. Previous experimental studies have focussed on short-term effects of violent video gameplay on aggression, yet there are reasons to believe that these effects are mostly the result of priming. In contrast, the present study is the first to investigate the effects of long-term violent video gameplay using a large battery of tests spanning questionnaires, behavioural measures of aggression, sexist attitudes, empathy and interpersonal competencies, impulsivity-related constructs (such as sensation seeking, boredom proneness, risk taking, delay discounting), mental health (depressivity, anxiety) as well as executive control functions, before and after 2 months of gameplay. Our participants played the violent video game Grand Theft Auto V, the non-violent video game The Sims 3 or no game at all for 2 months on a daily basis. No significant changes were observed, neither when comparing the group playing a violent video game to a group playing a non-violent game, nor to a passive control group. Also, no effects were observed between baseline and posttest directly after the intervention, nor between baseline and a follow-up assessment 2 months after the intervention period had ended. The present results thus provide strong evidence against the frequently debated negative effects of playing violent video games in adults and will therefore help to communicate a more realistic scientific perspective on the effects of violent video gaming.
This study, which I found via Gabriel Bouchaud, examines the possible effects of the One Laptop per Child program in Peru. Contrary to my normal procedure, I want to start with the abstract, as it summarizes a lot already:
This paper presents results from a large-scale randomized evaluation of the One Laptop per Child program, using data collected after 15 months of implementation in 318 primary schools in rural Peru. The program increased the ratio of computers per student from 0.12 to 1.18 in treatment schools. This expansion in access translated into substantial increases in use of computers both at school and at home. No evidence is found of effects on test scores in math and language. There is some evidence, though inconclusive, about positive effects on general cognitive skills.
This doesn’t sound that bad. The pupils use computers more – what a surprise, given they didn’t own a computer before – but does it have an effect on education?
Well? The computers came packed with over 200 books, but… the pupils didn’t start to read more. They didn’t spend more time on education. And… they didn’t really seem to do better in class.
Or as the researchers summarize:
In general, we do not find conclusive evidence indicating clear changes in behavior in these dimensions. Regarding study time at home, we document some positive effects on whether the student studied at home the prior day. Nonetheless, results indicate small effects on whether the student studied one or more hours daily the prior week. In terms of reading, results suggest some negative effects on whether the student read a book the prior day but small effects on whether the student read a book the prior week. Overall, we find no statistically significant effects on the specific outcomes analyzed or on the learning behavior summary measure.
…did increased computer access affect academic and cognitive skills? Table 9 shows that there are no statistically significant effects on the academic achievement summary measure when focusing on all students and also when restricting to those in the interviewed sample (columns 1 and 2, respectively). Small standard errors allow ruling out modest effects. Also, there are no statistically significant effects on either math or language achievement for both samples.
Our results suggest that computers by themselves, at least as initially delivered by the OLPC program, do not increase achievement in curricular areas.
I’m a bit in a bind here. I do not want to state that it is a bad idea to give computers to children in need. But this study does show that just giving kids a computer – or putting one in a wall – doesn’t do much.
This morning there was big news in our Belgian media about a new study by Stijn Baert and his colleagues, in which they examined the impact of smartphone use on students’ academic performance:
In this study, we contributed to recent literature concerning the association between smartphone use and educational performance by providing the first causal estimates of the effect of the former on the latter. To this end, we analysed unique data on 696 first-year university students in Belgium. We found that a one-standard-deviation increase in their overall smartphone use yields a decrease in their average exam score of about one point (out of 20). This negative relationship is robust to the use of alternative indicators of smartphone use and academic performance. As our results add to the literature evidence for heavy smartphone use not only being associated with lower exam marks but also causing lower marks, we believe that policy-makers should at least invest in information and awareness campaigns to highlight this trade-off.
I have to admit that, while the researchers have taken a lot into account, there could still be something else causing these differences. The researchers have attempted to address this:
This study is the first to attempt to measure the causal impact of (overall) smartphone use on educational performance. To this end, we exploit data from 696 first-year students at two Belgian universities, who were surveyed in December 2016 using multiple scales on smartphone use as well as predictors of this smartphone use and a battery of questions concerning (potential) other drivers of success at university. This information is merged with the students’ scores on their first exams, taken in January 2017. We analyse the merged data by means of instrumental variable estimation techniques. More concretely, to be able to correctly identify the influence of smartphone use on academic achievement, in a first stage, the respondents’ smartphone use is predicted by diverging sets of variables that are highly significantly associated with smartphone use, but not directly associated with educational performance. In a second stage, the exam scores are regressed on this exogenous prediction of smartphone use and the largest set of control variables used in the literature to date.
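The two-stage procedure the authors describe is instrumental-variable estimation (two-stage least squares). A minimal sketch on synthetic data may make the logic concrete; all variable names, coefficients, and the single instrument below are made up for illustration (the actual study uses survey-based instruments and many controls). The synthetic confounder raises both smartphone use and exam scores, so naive OLS understates the negative causal effect, mirroring what the authors report:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 696  # same sample size as the study; the data itself is synthetic

# A hidden trait raises both smartphone use and exam scores, biasing
# naive OLS towards zero relative to the true causal effect of -1.0
confounder = rng.normal(size=n)
instrument = rng.normal(size=n)  # shifts phone use, but not scores directly
phone_use = 0.8 * instrument + 0.5 * confounder + rng.normal(size=n)
exam_score = -1.0 * phone_use + 1.0 * confounder + rng.normal(size=n)

def ols_slope(x, y):
    """Slope of an OLS regression of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(phone_use, exam_score)  # biased towards zero

# Stage 1: predict phone use from the instrument alone
predicted_use = ols_slope(instrument, phone_use) * instrument
# Stage 2: regress exam scores on the exogenous prediction
iv_estimate = ols_slope(predicted_use, exam_score)

print(f"naive OLS: {naive:.2f}, 2SLS: {iv_estimate:.2f}")
```

Because the instrument moves phone use without directly moving scores, the second-stage slope recovers something close to the true effect of −1.0, while the naive regression does not. Whether the study’s instruments really satisfy that exclusion restriction is, of course, exactly where the debate lies.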
In the interview this morning on the radio, the researchers didn’t plead for a total ban on smartphones, but they do think it is a very important element for students to take into consideration.
Abstract of the study:
After a decade of correlational research, this study is the first to measure the causal impact of (general) smartphone use on educational performance. To this end, we merge survey data on general smartphone use, exogenous predictors of this use, and other drivers of academic success with the exam scores of first-year students at two Belgian universities. The resulting data are analysed with instrumental variable estimation techniques. A one-standard-deviation increase in daily smartphone use yields a decrease in average exam scores of about one point (out of 20). When relying on ordinary least squares estimations, the magnitude of this effect is substantially underestimated.
I have to admit: despite the fact that I get sick every time I use virtual reality goggles, I really think VR and augmented reality (AR) are just impressive. But… does it help learning? It is too early to tell, but this study by Makransky et al., which I found via a tweet by Paul Kirschner, is pretty clear: in this study VR doesn’t improve learning. The study is extra interesting as it looks at some important principles for learning, such as the redundancy principle (Mayer was involved in the study), and while the students did get more motivated, their learning was not better (it was even worse). Do note that the number of participants was pretty low: 52 (22 males and 30 females) students from a large European university.
Also interesting is to know what application the researchers were using:
The virtual simulation used in this experiment was on the topic of mammalian transient protein expression and was developed by the simulation development company, Labster. It was designed to facilitate learning within the field of biology at a university level by allowing the user to virtually work through the procedures in a lab by using and interacting with the relevant lab equipment and by teaching the essential content through an inquiry-based learning approach.
- The consequences of adding immersive virtual reality to a simulation were examined.
- The impact of the level of immersion on the redundancy principle was investigated.
- EEG was used to obtain a direct measure of cognitive processing during learning.
- Students reported higher presence but learned less in the immersive VR condition.
- Students also had higher cognitive load based on EEG in the immersive VR condition.
Abstract of the study:
Virtual reality (VR) is predicted to create a paradigm shift in education and training, but there is little empirical evidence of its educational value. The main objectives of this study were to determine the consequences of adding immersive VR to virtual learning simulations, and to investigate whether the principles of multimedia learning generalize to immersive VR. Furthermore, electroencephalogram (EEG) was used to obtain a direct measure of cognitive processing during learning. A sample of 52 university students participated in a 2 × 2 experimental cross-panel design wherein students learned from a science simulation via a desktop display (PC) or a head-mounted display (VR); and the simulations contained on-screen text or on-screen text with narration. Across both text versions, students reported being more present in the VR condition (d = 1.30); but they learned less (d = 0.80), and had significantly higher cognitive load based on the EEG measure (d = 0.59). In spite of its motivating properties (as reflected in presence ratings), learning science in VR may overload and distract the learner (as reflected in EEG measures of cognitive load), resulting in less opportunity to build learning outcomes (as reflected in poorer learning outcome test performance).
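For readers unfamiliar with the effect sizes quoted above: Cohen’s d expresses the difference between two group means in units of their pooled standard deviation, with 0.2 conventionally read as “small”, 0.5 as “medium” and 0.8 as “large”. A minimal sketch with made-up test scores (not the study’s data):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical learning-test scores for a desktop (PC) and a VR group
pc_scores = [14, 12, 15, 13, 16, 14, 13, 15]
vr_scores = [11, 10, 13, 12, 11, 12, 10, 13]
print(round(cohens_d(pc_scores, vr_scores), 2))  # prints 1.99
```

So the reported d = 0.80 for learning means the desktop group outperformed the VR group by a conventionally large margin, while d = 1.30 for presence means the VR group felt far more present.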
I really like science, I like the self-correcting part of science even more.
Check this paper that was published by Sage and Burgio earlier this year:
Mobile phones and other wireless devices that produce electromagnetic fields (EMF) and pulsed radiofrequency radiation (RFR) are widely documented to cause potentially harmful health impacts that can be detrimental to young people. New epigenetic studies are profiled in this review to account for some neurodevelopmental and neurobehavioral changes due to exposure to wireless technologies. Symptoms of retarded memory, learning, cognition, attention, and behavioral problems have been reported in numerous studies and are similarly manifested in autism and attention deficit hyperactivity disorders, as a result of EMF and RFR exposures where both epigenetic drivers and genetic (DNA) damage are likely contributors. Technology benefits can be realized by adopting wired devices for education to avoid health risk and promote academic achievement.
Sounds pretty alarming, no? Should we worry? Well, no.
The respected journal Child Development recently published a commentary that attributed a number of negative health consequences to RF radiation, from cancer to infertility and even autism (Sage & Burgio, 2017). It is our view that this piece has potential to cause serious harm and should never have been published. But how do we justify such a damning verdict? In considering our responses, we realized that this case raised more general issues about distinguishing scientifically valid from invalid views when evaluating environmental impacts on physical and psychological health, and we offer here some more general guidelines for editors and reviewers who may be confronted with similar issues. As shown in Table 1, we identify seven questions that can be asked about causal claims, using the Sage and Burgio (2017) article to illustrate these.
That’s right: David Grimes and Dorothy Bishop took a closer look at the alarming article, and well…
Abstract of the paper by David Grimes and Dorothy Bishop, which can be downloaded here:
Exposure to nonionizing radiation used in wireless communication remains a contentious topic in the public mind—while the overwhelming scientific evidence to date suggests that microwave and radio frequencies used in modern communications are safe, public apprehension remains considerable. A recent article in Child Development has caused concern by alleging a causative connection between nonionizing radiation and a host of conditions, including autism and cancer. This commentary outlines why these claims are devoid of merit, and why they should not have been given a scientific veneer of legitimacy. The commentary also outlines some hallmarks of potentially dubious science, with the hope that authors, reviewers, and editors might be better able to avoid suspect scientific claims.
Quite often you can see hurray-news being spread virally when discussing technology in education; when things go wrong or turn out less successful than planned, the news suddenly seems to go much less viral, although you will always be able to find people who like a dig at technology-driven reform. I do think that sharing stories about reform attempts that didn’t go so well is important. Not to say ‘haha’ or ‘duh’, but to learn from those cases. Think of it as a kind of air crash investigation; maybe we should set up such a team.
This weekend Dutch newspaper Het Parool published a reconstruction of how a new school, ‘De Ontplooiing’, in Amsterdam went horribly wrong. This school became famous as one of the very first Steve Jobs schools in the Netherlands, with a school vision that relied heavily on the iPad. People from all over the world came to visit this flagship school, and one of the people behind the O4NT vision still sells this story around the globe.
But… this school is in bad shape. Het Parool describes a couple of things:
- One element has nothing to do with the school itself: some of the children who were brought to the school were already having trouble in their original schools. This is difficult for any new school to handle.
- There was a strong vision on personalized learning, but… too much freedom, combined with floating hours, made it very difficult for children to learn. The newspaper carries testimonies of how children who leave this school and go back to ‘normal’ schools are way behind in math and reading, as those subjects were not considered that important.
- Over half of the other Steve Jobs schools in the Netherlands have abandoned the original vision. Often not because they didn’t like the vision or because it didn’t work, but because the software and the organization’s vision became too expensive to use.
When you look at the first element, this is something the school or the O4NT vision couldn’t help, but the second and third elements are something different. An air crash investigation team would probably mention how some of the school leaders involved lacked experience. They would maybe also mention that the for-profit idea in education didn’t help. “Lack of vision” wouldn’t be mentioned, as there truly was a vision that was more than ‘use an iPad’. Some of the educational scientists in the team would point out that parts of this vision were doomed from the start, but this would probably remain a matter of debate, as it has been for decades – long before the iPad was made. There are more school approaches built on a lot of freedom, with strong defenders and equally strong opponents.
The sad thing is – as Paul Kirschner pointed out on Twitter – that this has been an experiment that went wrong for a lot of children. An experiment that would never have been possible as a real scientific experiment, as it would never have passed an ethics committee. Maybe the air crash investigation team could write up what not to do when trying new experiments like this. Not to make experimenting impossible, but just to make sure the chances of the next plane crashing (think AltSchool, think Carpe Diem) would diminish.
(I’ve written quite a lot about these schools in the past, but most of it in Dutch. Check here and here. There is a translate button on the blog.)
Found this non-surprising Bloomberg article via Tom Bennett. Why am I not surprised? Well, because I’ve written about this before: the technology world often overlooks the old roots of what it is saying; many of the ideas have been tried before. But also because of what Morozov has coined solutionism: the naive idea that there are easy – often technological – solutions for complex problems.
But read the article for yourself, this is an excerpt:
The education system is one of the few industries that has resisted technological reinvention. It’s not for a lack of capital. Zuckerberg, Bill Gates, Netflix Inc.’s Reed Hastings, Salesforce.com Inc.’s Marc Benioff and many others have poured money into reform efforts, with mixed results. Zuckerberg backed a program similar to AltSchool at Summit Public Schools, a U.S. charter school network that uses Facebook technology.
I’m suddenly wondering: is education that one last small village in Armorica that isn’t under the control of the Roman Facebook Empire?
This past week I had the pleasure of attending a talk by Eric Schmidt, top leader at Alphabet/Google, in the Netherlands. One of the more interesting things he said was a seeming contradiction: after several pleas for teaching coding in school, he ended his talk by saying that AI would soon make coding obsolete.
There was also another talk at the event, by Clarissa Shen from Udacity. Her talk gave me an important insight: the present courses delivered by the platform aren’t about education; they’re all about job training.
What is the difference? Biesta has described three goals of education: subjectification (personal development), qualification and socialisation. If you go to a school or university, you’ll get elements that lead to qualification. Good, but most of the time, even when discussing qualification in regular education, it will be more than learning only the stuff related to a particular job.
What Shen described was only a narrow part of what can be regarded as qualification among Biesta’s three tasks, so don’t call Udacity and its nanodegrees education; call it what it is: job training.
Oh, btw, I enjoyed reading this related post at Inside Higher Ed.