This morning there was big news in the Belgian media about a new study by Stijn Baert and his colleagues, in which they examined the impact of smartphone use on students’ academic performance:
In this study, we contributed to recent literature concerning the association between smartphone use and educational performance by providing the first causal estimates of the effect of the former on the latter. To this end, we analysed unique data on 696 first-year university students in Belgium. We found that a one-standard-deviation increase in their overall smartphone use yields a decrease in their average exam score of about one point (out of 20). This negative relationship is robust to the use of alternative indicators of smartphone use and academic performance. As our results add to the literature evidence for heavy smartphone use not only being associated with lower exam marks but also causing lower marks, we believe that policy-makers should at least invest in information and awareness campaigns to highlight this trade-off.
I have to admit that while I do think the researchers have taken a lot into account, there could always still be something else causing these differences. The researchers have attempted to address this:
This study is the first to attempt to measure the causal impact of (overall) smartphone use on educational performance. To this end, we exploit data from 696 first-year students at two Belgian universities, who were surveyed in December 2016 using multiple scales on smartphone use as well as predictors of this smartphone use and a battery of questions concerning (potential) other drivers of success at university. This information is merged with the students’ scores on their first exams, taken in January 2017. We analyse the merged data by means of instrumental variable estimation techniques. More concretely, to be able to correctly identify the influence of smartphone use on academic achievement, in a first stage, the respondents’ smartphone use is predicted by diverging sets of variables that are highly significantly associated with smartphone use, but not directly associated with educational performance. In a second stage, the exam scores are regressed on this exogenous prediction of smartphone use and the largest set of control variables used in the literature to date.
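The two-stage logic in the quote above can be sketched with a small simulation. This is only an illustration with made-up numbers, not the authors' code or data; the single instrument `z` stands in for their sets of predictors of smartphone use that are assumed not to affect exam scores directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 696  # same sample size as the study; the data here is simulated

z = rng.normal(size=n)                  # hypothetical instrument
confound = rng.normal(size=n)           # unobserved driver of both variables
use = 0.8 * z + 0.5 * confound + rng.normal(size=n)            # smartphone use
score = 12 - 1.0 * use + 1.5 * confound + rng.normal(size=n)   # exam score

def ols(y, x):
    """Least-squares coefficients (intercept, slope) for a single regressor."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict smartphone use from the instrument only.
a0, a1 = ols(use, z)
use_hat = a0 + a1 * z

# Stage 2: regress exam scores on the exogenous *prediction* of use.
b0, b1 = ols(score, use_hat)

naive = ols(score, use)[1]  # plain OLS for comparison
print(f"2SLS effect: {b1:.2f}, naive OLS effect: {naive:.2f}")
```

In this toy setup the true effect is −1 point per unit of use; the naive OLS estimate is pulled toward zero by the confounder, which mirrors the abstract's remark that OLS substantially underestimates the magnitude.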
In the interview this morning on the radio, the researchers didn’t plead for a total ban on smartphones, but they still think smartphone use is a very important element for students to take into consideration.
Abstract of the study:
After a decade of correlational research, this study is the first to measure the causal impact of (general) smartphone use on educational performance. To this end, we merge survey data on general smartphone use, exogenous predictors of this use, and other drivers of academic success with the exam scores of first-year students at two Belgian universities. The resulting data are analysed with instrumental variable estimation techniques. A one-standard-deviation increase in daily smartphone use yields a decrease in average exam scores of about one point (out of 20). When relying on ordinary least squares estimations, the magnitude of this effect is substantially underestimated.
I have to admit: despite the fact that I get sick every time I use virtual reality goggles, I really think VR and augmented reality (AR) are just impressive. But… does it help learning? It is too early to tell, but this study by Makransky et al. that I found via a tweet by Paul Kirschner is pretty clear: in this study VR doesn’t improve learning. The study is extra interesting as it looks at some important principles for learning, such as the redundancy principle (Mayer was involved in the study), and while the students did get more motivated, their learning was not better (it was even worse). Do note that the number of participants was pretty low: 52 students (22 male and 30 female) from a large European university.
It is also interesting to know which application the researchers were using:
The virtual simulation used in this experiment was on the topic of mammalian transient protein expression and was developed by the simulation development company, Labster. It was designed to facilitate learning within the field of biology at a university level by allowing the user to virtually work through the procedures in a lab by using and interacting with the relevant lab equipment and by teaching the essential content through an inquiry-based learning approach.
- The consequences of adding immersive virtual reality to a simulation were examined.
- The impact of the level of immersion on the redundancy principle was investigated.
- EEG was used to obtain a direct measure of cognitive processing during learning.
- Students reported higher presence but learned less in the immersive VR condition.
- Students also had higher cognitive load based on EEG in the immersive VR condition.
Abstract of the study:
Virtual reality (VR) is predicted to create a paradigm shift in education and training, but there is little empirical evidence of its educational value. The main objectives of this study were to determine the consequences of adding immersive VR to virtual learning simulations, and to investigate whether the principles of multimedia learning generalize to immersive VR. Furthermore, electroencephalogram (EEG) was used to obtain a direct measure of cognitive processing during learning. A sample of 52 university students participated in a 2 × 2 experimental cross-panel design wherein students learned from a science simulation via a desktop display (PC) or a head-mounted display (VR); and the simulations contained on-screen text or on-screen text with narration. Across both text versions, students reported being more present in the VR condition (d = 1.30); but they learned less (d = 0.80), and had significantly higher cognitive load based on the EEG measure (d = 0.59). In spite of its motivating properties (as reflected in presence ratings), learning science in VR may overload and distract the learner (as reflected in EEG measures of cognitive load), resulting in less opportunity to build learning outcomes (as reflected in poorer learning outcome test performance).
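The d values in the abstract are Cohen's d: the difference between two group means expressed in pooled standard deviations. A minimal sketch with made-up numbers (none of the means, SDs, or group sizes below come from the paper):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Illustrative numbers: the PC group scores 2.4 points higher on the
# learning test than the VR group, with a pooled SD of 3 points.
d = cohens_d(mean1=14.0, mean2=11.6, sd1=3.0, sd2=3.0, n1=26, n2=26)
print(round(d, 2))  # → 0.8
```

By the usual rule of thumb, d around 0.8 (as reported for the learning difference) is a large effect.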
I really like science, I like the self-correcting part of science even more.
Check this paper that was published by Sage and Burgio earlier this year:
Mobile phones and other wireless devices that produce electromagnetic fields (EMF) and pulsed radiofrequency radiation (RFR) are widely documented to cause potentially harmful health impacts that can be detrimental to young people. New epigenetic studies are profiled in this review to account for some neurodevelopmental and neurobehavioral changes due to exposure to wireless technologies. Symptoms of retarded memory, learning, cognition, attention, and behavioral problems have been reported in numerous studies and are similarly manifested in autism and attention deficit hyperactivity disorders, as a result of EMF and RFR exposures where both epigenetic drivers and genetic (DNA) damage are likely contributors. Technology benefits can be realized by adopting wired devices for education to avoid health risk and promote academic achievement.
Sounds pretty alarming, no? Should we worry? Well, no.
The respected journal Child Development recently published a commentary that attributed a number of negative health consequences to RF radiation, from cancer to infertility and even autism (Sage & Burgio, 2017). It is our view that this piece has potential to cause serious harm and should never have been published. But how do we justify such a damning verdict? In considering our responses, we realized that this case raised more general issues about distinguishing scientifically valid from invalid views when evaluating environmental impacts on physical and psychological health, and we offer here some more general guidelines for editors and reviewers who may be confronted with similar issues. As shown in Table 1, we identify seven questions that can be asked about causal claims, using the Sage and Burgio (2017) article to illustrate these.
That’s right: David Grimes and Dorothy Bishop took a closer look at the alarming article, and well…
Abstract of the paper by David Grimes and Dorothy Bishop, which can be downloaded here:
Exposure to nonionizing radiation used in wireless communication remains a contentious topic in the public mind—while the overwhelming scientific evidence to date suggests that microwave and radio frequencies used in modern communications are safe, public apprehension remains considerable. A recent article in Child Development has caused concern by alleging a causative connection between nonionizing radiation and a host of conditions, including autism and cancer. This commentary outlines why these claims are devoid of merit, and why they should not have been given a scientific veneer of legitimacy. The commentary also outlines some hallmarks of potentially dubious science, with the hope that authors, reviewers, and editors might be better able to avoid suspect scientific claims.
Quite often you see hurrah-news spreading virally when technology in education is discussed; when things go wrong or turn out less successful than planned, the news suddenly seems to go much less viral, although you will always be able to find people who like a dig at technology-driven reform. I do think that sharing stories about reform attempts that didn’t go so well is important. Not to say ‘haha’ or ‘duh’, but to learn from those cases. Think of it as a kind of air crash investigation; maybe we should form such a team.
This weekend the Dutch newspaper Het Parool published a reconstruction of how a new school, ‘De Ontplooiing’ in Amsterdam, went horribly wrong. The school became famous as one of the very first Steve Jobs schools in the Netherlands, with a school vision that relied heavily on the iPad. People from all over the world came to visit this flagship school, and one of the people behind this O4NT vision still sells the story around the globe.
But… this school is in bad shape. Het Parool describes a couple of things:
- One element has nothing to do with the school itself: some of the children who were brought to the school were children who were already having trouble in their original schools. This is difficult for any new school to handle.
- There was a strong vision of personalized learning, but… too much freedom, combined with floating hours, made it very difficult for children to learn. The newspaper contains testimonies of how children who leave this school and go back to ‘normal’ schools are way behind in math and reading, as these subjects were not considered that important.
- Over half of the other Steve Jobs schools in the Netherlands have abandoned the original vision. Often not because they didn’t like the vision or because it didn’t work, but because it became too expensive to use the software and follow the vision of the organization.
When you look at the first element, this is something the school or the O4NT vision couldn’t help, but the second and third elements are something different. An air crash investigation team would probably mention how some of the school leaders involved lacked experience. They would maybe also mention that the for-profit idea in education maybe didn’t help. “Lack of vision” wouldn’t be mentioned, as there truly was a vision that was more than ‘use an iPad’. Some of the educational scientists on the team would point out that parts of this vision were doomed from the start, but this would probably remain a discussion, as it has been for decades – long before the iPad was made. There are more school approaches with a lot of freedom, with strong defenders and equally strong opponents.
The sad thing is – as Paul Kirschner pointed out on Twitter – that this has been an experiment that went wrong for a lot of children. An experiment that would never have been possible as a real scientific experiment, as it would never have passed an ethics committee. Maybe the air crash investigation team could write up what not to do when trying new experiments like this. Not to make experimenting impossible, but just to make sure the chances of a next plane crashing (think AltSchool, think Carpe Diem) would diminish.
(I’ve written quite a lot about these schools in the past, but most of it in Dutch. Check here and here. There is a translate button on the blog).
Found this non-surprising Bloomberg article via Tom Bennett. Why am I not surprised? Well, because I’ve written about this before: the technology world often overlooks the old roots of what it is saying. Many of the ideas have been tried before… But also because of what Morozov has coined solutionism: the naive idea that there are easy – often technological – solutions for complex problems.
But read the article for yourself, this is an excerpt:
The education system is one of the few industries that has resisted technological reinvention. It’s not for a lack of capital. Zuckerberg, Bill Gates, Netflix Inc.’s Reed Hastings, Salesforce.com Inc.’s Marc Benioff and many others have poured money into reform efforts, with mixed results. Zuckerberg backed a program similar to AltSchool at Summit Public Schools, a U.S. charter school network that uses Facebook technology.
I suddenly wonder: is education that one last small village in Armorica that isn’t under the control of the Roman Facebook Empire?
This past week I had the pleasure of attending a talk by Eric Schmidt, top executive at Alphabet/Google, in the Netherlands. One of the more interesting things he said was a seeming contradiction: after several pleas for teaching coding in school, he ended his talk by saying that soon AI would make coding obsolete.
There was also another talk at the event, by Clarissa Shen from Udacity. Her talk gave me an important insight: the present courses delivered by the platform aren’t about education; they’re all about job training.
What is the difference? Biesta has described three goals of education: subjectification (personal development), qualification, and socialisation. If you go to a school or university, you’ll get elements that lead to qualification. Good, but most of the time, even when discussing qualification in regular education, it will be about more than learning only things related to a particular job.
What Shen described was only a narrow part of what can be regarded as qualification among Biesta’s three tasks, so don’t call Udacity and its nano-degrees education; call it what it is: job training.
Oh, btw, I enjoyed reading this related post at Inside Higher Ed.
There is a new Best Evidence in Brief (they have a blog now too) and this study I picked will interest a lot of the readers of this blog:
New educational technology programs are being released faster than researchers can evaluate them. The National Bureau of Economic Research has written a working paper, Education Technology: An Evidence-Based Review, which discusses the evidence to date on the use of technology in the classroom, with the goal of finding decision-relevant patterns.
Maya Escueta and colleagues compiled publicly available quantitative research that used either randomized controlled trials or regression discontinuity designs (where students qualify for inclusion in a program based on a cut-off score at pretest). All studies had to examine the effects of an ed-tech intervention on any education-related outcome. Therefore, the paper included not only the areas of technology access, computer-assisted learning, and online courses, but also the less-often-studied technology-based behavioral interventions.
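A regression discontinuity design exploits the cut-off mentioned above: students just below and just above the threshold are assumed to be comparable, so any jump in outcomes right at the threshold estimates the program's effect. A toy sketch with simulated data (all numbers are invented, not taken from any of the reviewed studies):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
pretest = rng.uniform(0, 100, n)
cutoff = 50.0
treated = pretest < cutoff  # students below the cut-off get the program
# Simulated outcome: a smooth trend in pretest plus a 5-point program effect.
posttest = 0.5 * pretest + 5.0 * treated + rng.normal(0, 2, n)

# Sharp RD estimate: fit a line on each side of the cut-off within a
# narrow bandwidth and read off the jump at the threshold.
h = 10.0
window = np.abs(pretest - cutoff) < h
X = np.column_stack([
    np.ones(window.sum()),
    treated[window],
    pretest[window] - cutoff,
    (pretest[window] - cutoff) * treated[window],
])
coef = np.linalg.lstsq(X, posttest[window], rcond=None)[0]
print(f"estimated effect at the cut-off: {coef[1]:.1f}")
```

The estimated jump should land close to the simulated 5-point effect; the design works because, unlike in a plain comparison, no unobserved difference separates students scoring 49 from those scoring 51.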
Authors found that:
- Access to technology may or may not improve academic achievement at the K-12 level, but does have a positive impact on the academic achievement of college students (ES=+0.14).
- Computer-assisted learning, when equipped with personalization features, was an effective strategy, especially in math.
- Behavioral intervention software, such as text-message reminders or e-messages instructing parents how to practice reading with their children, showed positive effects at all levels of education, plus were a cost-effective approach. Four main uses for behavioral intervention software emerged: encouraging parental involvement in early learning activities, communication between the school and parents, successfully transitioning into and through college, and creating mindset interventions. Research is recommended to determine the areas where behavioral intervention software is most impactful.
- Online learning courses had the least amount of research to examine and showed the least promise of the four areas. However, when online courses were accompanied by in-person teaching, the effect sizes increased to scores comparable to fully in-person courses.