This study on our brain’s ‘prediction machine’ capabilities is both fascinating and fun. Ok, maybe I’m biased because it is about music. But bear with me: whether you’re listening to a concerto by Bach or the latest pop tunes on Spotify, the human brain does not wait passively for the song to unfold. Instead, when a musical phrase has an unresolved or uncertain quality about it, our brains automatically predict how the melody will end. Pretty neat! And yes, I know it’s a small sample, as is often the case in this kind of research.
From the press release:
Past ideas on how the human brain processes music suggested that musical phrases are perceived by looking backward rather than forward. New research published in the journal Psychological Science, however, suggests that the human brain considers what has come before to anticipate what comes next.
“The brain is constantly one step ahead and matches expectations to what is about to happen,” said Niels Chr. Hansen, a fellow at the Aarhus Institute of Advanced Studies and one of two lead authors on the paper. “This finding challenges previous assumptions that musical phrases feel finished only after the next phrase has begun.”
Hansen and his colleagues focused their research on one of the basic units of music, the musical phrase — a sequence or pattern of sounds that form a distinct musical “thought” within a melody. Like a sentence, a musical phrase is a coherent and complete part of a larger whole, but it may end with some uncertainty about what comes next in the melody. The new research shows that listeners use these moments of uncertainty, or high entropy, to determine where one phrase ends and another begins.
“We only know a little about how the brain determines when things start and end,” said Hansen. “Here, music provides a perfect domain to measure something that is otherwise difficult to measure — namely, uncertainty.”
To study the brain’s musical predictive power, the researchers had 38 participants listen, note by note, to chorale melodies by Bach. Participants could pause and restart the music by pressing the space bar on a computer keyboard.
The participants were told that they would be tested afterward on how well they remembered the melodies. This allowed the researchers to use the time participants dwelled on each tone as an indirect measure of their understanding of musical phrasing.
In a second experiment, 31 different participants listened to the same musical phrases and then assessed how complete they sounded. The participants judged melodies that ended on high-entropy tones to be more complete — and lingered on them longer.
“We were able to show that people have a tendency to experience high-entropy tones as musical-phrase endings. This is basic research that makes us more aware of how the human brain acquires new knowledge not just from music, but also when it comes to language, movements, or other things that take place over time,” said Haley Kragness, a postdoctoral researcher at the University of Toronto Scarborough and the paper’s second lead author.
Over the long term, the researchers hope that the results can be used to optimize communication and interactions between people — or, alternatively, to understand how artists are able to tease or trick audiences.
“This study shows that humans harness the statistical properties of the world around them not only to predict what is likely to happen next, but also to parse streams of complex, continuous input into smaller, more manageable segments of information,” said Hansen.
Abstract of the study:
Anticipating the future is essential for efficient perception and action planning. Yet the role of anticipation in event segmentation is understudied because empirical research has focused on retrospective cues such as surprise. We address this concern in the context of perception of musical-phrase boundaries. A computational model of cognitive sequence processing was used to control the information-dynamic properties of tone sequences. In an implicit, self-paced listening task (N = 38), undergraduates dwelled longer on tones generating high entropy (i.e., high uncertainty) than on those generating low entropy (i.e., low uncertainty). Similarly, sequences that ended on tones generating high entropy were rated as sounding more complete (N = 31 undergraduates). These entropy effects were independent of both the surprise (i.e., information content) and phrase position of target tones in the original musical stimuli. Our results indicate that events generating high entropy prospectively contribute to segmentation processes in auditory sequence perception, independently of the properties of the subsequent event.
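The abstract distinguishes two information-dynamic quantities: entropy (how uncertain the next tone is before it arrives) and surprise, or information content (how unexpected a tone is once it sounds). A minimal sketch can make that distinction concrete. The note names and probabilities below are invented for illustration; they are not taken from the study or its actual computational model.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a next-note probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, note):
    """Information content (bits) of a single note under the distribution."""
    return -math.log2(dist[note])

# Hypothetical next-note distributions at two points in a melody.
# Low entropy: the listener is fairly sure the next note is the tonic C.
confident = {"C": 0.85, "D": 0.05, "E": 0.05, "G": 0.05}
# High entropy: several continuations are about equally likely -- the kind
# of uncertainty the study links to perceived phrase boundaries.
uncertain = {"C": 0.25, "D": 0.25, "E": 0.25, "G": 0.25}

print(round(entropy(confident), 2))        # low uncertainty
print(round(entropy(uncertain), 2))        # uniform over 4 notes: 2.0 bits
print(round(surprisal(confident, "G"), 2)) # a rare note is highly surprising
```

Note that entropy is a property of the moment before the next tone (prospective), while surprisal depends on which tone actually occurs (retrospective); the study's point is that the prospective quantity alone predicts where listeners hear phrases end.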