Author Archives: Pedro
An old reform experiment delivers interesting, quite modern insights:
1. When engaged teachers, administrators, and students are given the freedom to experiment and the help to do it, they will come through.
2. There is no one best way of schooling youth.
3. High schools can graduate students who are academically engaged, involved in their communities, and thoughtful problem-solvers.
4. Standards of excellence that work in schools are those set and carried out locally by adults and students, not imposed from the top down.
Once upon a time, there was much unemployment, poverty, and homelessness across our land. Leaders tried one thing after another to end these grim conditions. Nothing worked.
In the midst of these bad times, however, a small group of educators, upset over what our youth were learning in high schools, decided to take action.
Schools were dull places. Students listened to teachers, read books, and took exams. Schools were supposed to prepare students for life, but students forgot much of what they studied soon after graduating. Worse yet, what they had learned in school did not prepare them to face the problems of life, think clearly, be creative, or fulfill their civic duties. Complaints to school officials got the same answer repeatedly: little could be done because college entrance requirements determined which courses students took in high school.
So to give high schools the freedom to try new ways of schooling…
A new article discusses how music arose and developed. When I first saw the press release, I surely hoped the article would be open access. And great news: it is.
How did music begin? Did our early ancestors first start by beating things together to create rhythm, or use their voices to sing? What types of instruments did they use? Has music always been important in human society, and if so, why? These are some of the questions explored in a recent Hypothesis and Theory article published in Frontiers in Sociology. The answers reveal that the story of music is, in many ways, the story of humans.
So, what is music? This is difficult to answer, as everyone has their own idea. “Sound that conveys emotion,” is what Jeremy Montagu, of the University of Oxford and author of the article, describes as his. A mother humming or crooning to calm her baby would probably count as music, using this definition, and this simple music probably predated speech.
But where do we draw the line between music and speech? You might think that rhythm, pattern and controlling pitch are important in music, but these things can also apply when someone recites a sonnet or speaks with heightened emotion. Montagu concludes that “each of us in our own way can say ‘Yes, this is music’, and ‘No, that is speech’.”
So, when did our ancestors begin making music? If we take singing, then controlling pitch is important. Scientists have studied the fossilized skulls and jaws of early apes, to see if they were able to vocalize and control pitch. About a million years ago, the common ancestor of Neanderthals and modern humans had the vocal anatomy to “sing” like us, but it’s impossible to know if they did.
Another important component of music is rhythm. Our early ancestors may have created rhythmic music by clapping their hands. This may be linked to the earliest musical instruments, when somebody realized that smacking stones or sticks together doesn’t hurt your hands as much. Many of these instruments are likely to have been made from soft materials like wood or reeds, and so haven’t survived. What have survived are bone pipes. Some of the earliest ever found are made from swan and vulture wing bones and are between 39,000 and 43,000 years old. Other ancient instruments have been found in surprising places. For example, there is evidence that people struck stalactites or “rock gongs” in caves dating from 12,000 years ago, with the caves themselves acting as resonators for the sound.
So, we know that music is old, and may have been with us from when humans first evolved. But why did it arise and why has it persisted? There are many possible functions for music. One is dancing. It is unknown if the first dancers created a musical accompaniment, or if music led to people moving rhythmically. Another obvious reason for music is entertainment, which can be personal or communal. Music can also be used for communication, often over large distances, using instruments such as drums or horns. Yet another reason for music is ritual, and virtually every religion uses music.
However, the major reason that music arose and persists may be that it brings people together. “Music leads to bonding, such as bonding between mother and child or bonding between groups,” explains Montagu. “Music keeps workers happy when doing repetitive and otherwise boring work, and helps everyone to move together, increasing the force of their work. Dancing or singing together before a hunt or warfare binds participants into a cohesive group.” He concludes: “It has even been suggested that music, in causing such bonding, created not only the family but society itself, bringing individuals together who might otherwise have led solitary lives.”
It’s a basic rule in education: connecting new insights to prior knowledge is key. But… it’s this notion of ‘peculiarity’ that can help us understand what makes memories last. It’s not a published study but rather a press release about a talk in Cannes that caught my attention. I do think it’s relevant!
From the press release:
It’s this notion of ‘peculiarity’ that can help us understand what makes lasting memories, according to Per Sederberg, a professor of psychology at The Ohio State University.
“You have to build a memory on the scaffolding of what you already know, but then you have to violate the expectations somewhat. It has to be a little bit weird,” Sederberg said.
Sederberg talked about the neuroscience of memory as an invited speaker at the Cannes Lions Festival of Creativity in France on June 19. He spoke at the session “What are memories made of? Stirring emotions and last impressions” along with several advertising professionals and artists.
Sederberg has spent his career studying memory. In one of his most notable studies, he had college students wear a smartphone around their neck with an app that took random photos for a month. Later, the participants relived memories related to those photos in an fMRI scanner so that Sederberg and his colleagues could see where and how the brain stored the time and place of those memories.
From his own research and that of others, Sederberg has ideas on which memories stick with us and which ones fade over time.
The way to create a long-lasting memory is to form an association with other memories, he said.
“If we want to be able to retrieve a memory later, you want to build a rich web. It should connect to other memories in multiple ways, so there are many ways for our mind to get back to it.”
A memory of a lifetime is like a big city, with many roads that lead there. We forget memories that are desert towns, with only one road in. “You want to have a lot of different ways to get to any individual memory,” Sederberg said.
The difficulty is how to best navigate the push and pull between novelty and familiarity. Novelty tells us what is important to remember. On the other hand, familiarity tells us what we can ignore, but helps us retrieve information later, Sederberg said.
Too much novelty, and you have no way to place it in your cognitive map, but too much familiarity and the information is similarly lost.
What that means is that context and prediction play critical roles in shaping our perception and memory. The most memorable experiences are those that arise in a familiar and stable context, yet violate some aspect of what we predict would occur in that context, he said.
“Those peculiar experiences are the things that stand out, that make a more lasting memory.”
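Sederberg's "big city" versus "desert town" picture is essentially a claim about graph structure: a memory with many associative links offers many retrieval routes. A minimal sketch of that idea (my own illustration, not from Sederberg's research; the example memories are made up):

```python
# Toy model: memories as nodes in an undirected association graph.
# A memory with more associations has more retrieval routes into it,
# like a city reachable by many roads.
from collections import defaultdict

def build_associations(pairs):
    """Build an undirected association graph from (memory, memory) pairs."""
    graph = defaultdict(set)
    for a, b in pairs:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def retrieval_routes(graph, memory):
    """Number of direct associative routes leading to a memory."""
    return len(graph[memory])

associations = [
    ("wedding", "music"), ("wedding", "cake"), ("wedding", "friends"),
    ("wedding", "summer"), ("dentist", "pain"),
]
graph = build_associations(associations)
print(retrieval_routes(graph, "wedding"))  # 4 routes in: a "big city" memory
print(retrieval_routes(graph, "dentist"))  # 1 route in: a "desert town" memory
```

On this picture, forgetting the "dentist" memory only takes losing one link, while the "wedding" memory survives the loss of several.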
This is actually a follow-up cartoon on an earlier Funny on Sunday about how school is becoming obsolete.
Paul Kirschner and yours truly just got a new article published in Teaching and Teacher Education on 2 common myths in education: the digital native and the multitasker. You can read it for free here (until Aug. 4)
- Information-savvy digital natives do not exist.
- Learners cannot multitask; they task-switch, which negatively impacts learning.
- Educational design assuming these myths hinders rather than helps learning.
The abstract of our paper:
Current discussions about educational policy and practice are often embedded in a mind-set that considers students who were born in an age of omnipresent digital media to be fundamentally different from previous generations of students. These students have been labelled digital natives and have been ascribed the ability to cognitively process multiple sources of information simultaneously (i.e., they can multitask). As a result of this thinking, they are seen by teachers, educational administrators, politicians/policy makers, and the media to require an educational approach radically different from that of previous generations. This article presents scientific evidence showing that there is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. It then proceeds to present evidence that one of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning. The article concludes by elaborating on possible implications of this for education/educational policy.
Sending electricity through the brain to become more creative? I can already see some salesmen with dollar signs in their eyes, scratching their heads over how to turn this research into a business model, as scientists have found a way to improve creativity through brain stimulation. And good news: for this they sent a weak constant electrical current through saline-soaked electrodes positioned over target regions on the scalp, which means there is no need to open the skull :). And for those salesmen: do read the last paragraph of this press release:
They achieved this by temporarily suppressing a key part of the frontal brain called the left dorsolateral prefrontal cortex (DLPFC), which is involved in most of our thinking and reasoning.
The results, published in the journal Scientific Reports, show that participants who received the intervention showed an enhanced ability to ‘think outside the box’.
“We solve problems by applying rules we learn from experience, and the DLPFC plays a key role in automating this process,” commented Dr Caroline Di Bernardi Luft, first author from QMUL’s School of Biological and Chemical Sciences who conducted the research while previously working at Goldsmiths University of London, with Dr Michael Banissy and Professor Joydeep Bhattacharya.
“It works fine most of the time, but fails spectacularly when we encounter new problems which require a new style of thinking — our past experience can indeed block our creativity. To break this mental fixation, we need to loosen up our learned rules,” added Dr Luft.
The researchers used a technique called transcranial direct current stimulation (tDCS), which involved passing a weak constant electrical current through saline-soaked electrodes positioned over the scalp to modulate the excitability of the DLPFC. Depending on the direction of the current flow, the DLPFC was temporarily suppressed or activated. The very low currents applied ensured that the stimulation caused no harm or unpleasant sensations.
Sixty participants were tested on their creative problem-solving ability before and after receiving one of the following interventions: DLPFC suppressed, DLPFC activated, or DLPFC unstimulated. The participants solved “matchstick problems,” some of which are hard because solving them requires relaxing the learned rules of arithmetic and algebra.
The participants whose DLPFC was temporarily suppressed by the electrical stimulation were more likely to solve hard problems than participants whose DLPFC was activated or not stimulated. This demonstrates that briefly suppressing the DLPFC can help break mental assumptions learned from experience and encourage thinking outside the box.
But the researchers also observed that these participants got worse at solving problems with a higher working memory demand (where many items need to be held in mind at once). These problems require participants to try a number of different moves until they find the solution, which means they have to keep track of their mental operations.
“These results are important because they show the potential of improving mental functions relevant for creativity by non-invasive brain stimulation methods,” commented Dr Luft.
“However, our results also suggest that potential applications of this technique will have to consider the target cognitive effects in more detail rather than just assuming tDCS can improve cognition as claimed by some companies which are starting to sell tDCS machines for home users,” she added.
“I would say that we are not yet in a position to wear an electrical hat and start stimulating our brain hoping for a blanket cognitive gain.”
Abstract of the study:
We solve problems by applying previously learned rules. The dorsolateral prefrontal cortex (DLPFC) plays a pivotal role in automating this process of rule induction. Despite its usual efficiency, this process fails when we encounter new problems in which past experience leads to a mental rut. Learned rules could therefore act as constraints which need to be removed in order to change the problem representation for producing the solution. We investigated the possibility of suppressing the DLPFC by transcranial direct current stimulation (tDCS) to facilitate such representational change. Participants solved matchstick arithmetic problems before and after receiving cathodal, anodal or sham tDCS to the left DLPFC. Participants who received cathodal tDCS were more likely to solve the problems that require the maximal relaxation of previously learned constraints than the participants who received anodal or sham tDCS. We conclude that cathodal tDCS over the left DLPFC might facilitate the relaxation of learned constraints, leading to a successful representational change.
There is a new Best Evidence in Brief and while I skipped the previous one because the mentioned research was less interesting to my personal taste, this time there is a lot to choose from.
I picked this one first:
Children with Attention-Deficit Hyperactivity Disorder (ADHD) can have trouble with hyperactivity, impulsivity, inattention, and distractibility, all of which can affect language and communication and can lead to low academic performance and antisocial behavior.
A systematic review published in the Journal of Child Psychology and Psychiatry seeks to establish the types of language problems children with ADHD experience, in order to inform future research into how these language problems contribute to long-term outcomes for children with ADHD.

Hannah Korrel and colleagues examined the last 35 years of ADHD research and identified 21 studies using 17 language measures, which included more than 2,000 participants (ADHD children = 1,209; non-ADHD children = 1,101) for inclusion in the systematic review.

The study found that children with ADHD had poorer performance than non-ADHD children on 11 of the 12 measures of overall language (effect size = 1.09). Children with ADHD also had poorer performance on measures of expressive, receptive, and pragmatic language compared with non-ADHD children.
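For readers unfamiliar with effect sizes: a value like the 1.09 reported here is typically Cohen's d, the difference between group means divided by a pooled standard deviation. A quick sketch with invented numbers (these are not the review's data; the group sizes are borrowed from the review, the scores are hypothetical):

```python
# Illustrative computation of Cohen's d with a pooled standard deviation.
# The score values below are made up for demonstration only.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical language scores: non-ADHD group vs. ADHD group.
d = cohens_d(mean1=100, sd1=15, n1=1101, mean2=84, sd2=15, n2=1209)
print(round(d, 2))  # → 1.07, the same ballpark as the reported 1.09
```

By Cohen's conventional benchmarks, anything above 0.8 counts as a large effect, which is what makes the review's 1.09 striking.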
This new paper featured this month in a special edition of Neuron states an interesting thesis: most tasks we use today to test the brain are too simple.
From the press release:
Xaq Pitkow and Dora Angelaki, both faculty members in Baylor’s Department of Neuroscience and Rice’s Department of Electrical and Computer Engineering, said the brain’s ability to perform “approximate probabilistic inference” cannot be truly studied with simple tasks that are “ill-suited to expose the inferential computations that make the brain special.”
A new article by the researchers suggests the brain uses nonlinear message-passing between connected, redundant populations of neurons that draw upon a probabilistic model of the world. That model, coarsely passed down via evolution and refined through learning, simplifies decision-making based on general concepts and its particular biases.
The article, which lays out a broad research agenda for neuroscience, is featured this month in a special edition of Neuron, a journal published by Cell Press. The edition presents ideas that first appeared as part of a workshop at the University of Copenhagen last September titled “How Does the Brain Work?”
“Evolution has given us what we call a good model bias,” Pitkow said. “It’s been known for a couple of decades that very simple neural networks can compute any function, but those universal networks can be enormous, requiring extraordinary time and resources.
“In contrast, if you have the right kind of model — not a completely general model that could learn anything, but a more limited model that can learn specific things, especially the kind of things that often happen in the real world — then you have a model that’s biased. In this sense, bias can be a positive trait. We use it to be sensitive to the right things in the world that we inhabit. Of course, the flip side is that when our brain’s bias is not matched to reality, it can lead to severe problems.”
The researchers said simple tests of brain processes, like those in which subjects choose between two options, provide only simple results. “Before we had access to large amounts of data, neuroscience made huge strides from using simple tasks, and they’ll remain very useful,” Pitkow said. “But for computations that we think are most important about the brain, there are things you just can’t reveal with some of those tasks.” Pitkow and Angelaki wrote that tasks should incorporate more diversity — like nuisance variables and uncertainty — to better simulate real-world conditions that the brain evolved to handle.
They suggested that the brain infers solutions based on statistical crosstalk between redundant population codes. Population codes are responses by collections of neurons that are sensitive to certain inputs, like the shape or movement of an object. Pitkow and Angelaki think that to better understand the brain, it can be more useful to describe what these populations compute, rather than precisely how each individual neuron computes it. Pitkow said this means thinking “at the representational level” rather than the “mechanistic level,” as described by the influential vision scientist David Marr.
The research has implications for artificial intelligence, another interest of both researchers.
“A lot of artificial intelligence has done impressive work lately, but it still fails in some spectacular ways,” Pitkow said. “They can play the ancient game of Go and beat the best human player in the world, as done recently by DeepMind’s AlphaGo about a decade before anybody expected. But AlphaGo doesn’t know how to pick up the Go pieces. Even the best algorithms are extremely specialized. Their ability to generalize is often still pretty poor. Our brains have a much better model of the world; we can learn more from less data. Neuroscience theories suggest ways to translate experiments into smarter algorithms that could lead to a greater understanding of general intelligence.”
Abstract of the study:
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.
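To make "message-passing" concrete: the textbook version of the idea is the sum-product algorithm on a graphical model, where one variable sends a message to its neighbor by summing over its own states. A minimal sketch for two binary variables (this is standard belief propagation, not the paper's proposed neural implementation; the probability numbers are arbitrary):

```python
# Sum-product message-passing on a two-variable chain x1 - x2.
import numpy as np

prior_x1 = np.array([0.7, 0.3])   # current belief about x1's two states
pairwise = np.array([[0.9, 0.1],  # psi(x1, x2): compatibility of state pairs
                     [0.2, 0.8]])

# Message from x1 to x2: sum over x1's states, weighted by the belief in each.
message = prior_x1 @ pairwise     # shape (2,)

# With no other neighbors, x2's marginal is the normalized incoming message.
marginal_x2 = message / message.sum()
print(marginal_x2)  # → [0.69 0.31]
```

The paper's proposal is that populations of neurons implement a nonlinear analogue of this exchange, so the interpretable quantities are the population-level messages rather than individual neurons' activity.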
A little extra: on Friday I did my viva and this means that since Friday I am a doctor. I threw a little party on Friday evening and this was a part of the playlist…