You may have noticed that different languages, and even different dialects, come with their own melodies. A new study suggests that your native language can shape your musical ability. A global study comparing the melodic and rhythmic abilities of nearly half a million people speaking 54 different languages found that speakers of tonal languages are better at telling subtly different melodies apart, while speakers of non-tonal languages are better at judging whether a rhythm is in time with the music.
From the press release:
The researchers report April 26 in the journal Current Biology that these advantages — in melodic perception for tonal speakers and rhythm perception for non-tonal speakers — were equivalent to about half the boost that you would have from taking music lessons.
“We grow up speaking and hearing one or more languages, and we think that experience not only tunes our mind into hearing the sounds of those languages but might also influence how we perceive musical sounds like melodies and rhythms,” says Courtney Hilton, a cognitive scientist at the University of Auckland and Yale University and one of the paper’s first authors.
While non-tonal languages like English might use pitch to inflect emotion or to signify a question, raising or lowering the pitch of a syllable never changes the meaning of a word. In contrast, tonal languages like Mandarin use sound patterns to distinguish syllables and words. “This property requires pitch sensitivity in both speakers and listeners, lest one scold (mà) one’s mother (mā) instead of one’s horse (mǎ),” says Jingxuan Liu, a native Mandarin speaker and the study’s other first author, who started working on the project as an undergraduate student at Duke University.
The team conducted a web-based citizen science experiment to test whether speaking a tonal versus non-tonal language impacts people’s musical ability. They recruited almost half a million participants from 203 countries and native speakers of 54 different languages, including 19 geographically dispersed tonal languages such as Burmese, Punjabi, and Igbo.
Participants were given three different musical tasks that tested their ability to discern subtle differences in melody (is this melody the same as the others?), rhythm (is the drum beating in time with the song?), and fine-grained pitch perception (is the vocalist singing in tune?). Depending on how well they performed, the participants were given increasingly difficult tests, where the differences in melody were more subtle, the mismatched rhythms were almost on beat, and the mis-tuned vocals were closer to being in tune.
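The paper does not spell out the exact adaptive algorithm behind this difficulty scaling, but a common way to implement it is a two-down/one-up staircase: the stimulus difference shrinks after two consecutive correct answers and grows after an error, so the task hovers near each listener's threshold. Here is a minimal sketch under that assumption; the function and parameter names (`staircase`, `toy_listener`, the unit sizes) are illustrative, not from the study.

```python
import random

def staircase(trial, n_trials=30, start=10.0, step=1.0, floor=0.5):
    """Simple two-down/one-up adaptive staircase (illustrative).

    `trial(difference)` simulates one trial: it receives the current
    stimulus difference (e.g., cents of mistuning) and returns True if
    the listener answered correctly. Two correct answers in a row make
    the next trial harder; a single error makes it easier. The level
    the track settles around estimates the listener's threshold.
    """
    difference = start
    correct_streak = 0
    history = []
    for _ in range(n_trials):
        history.append(difference)
        if trial(difference):
            correct_streak += 1
            if correct_streak == 2:  # two right in a row -> harder
                difference = max(floor, difference - step)
                correct_streak = 0
        else:  # one wrong -> easier
            difference = min(start, difference + step)
            correct_streak = 0
    return history

# Toy listener: reliable above a 4-unit difference, at chance below it.
def toy_listener(difference, threshold=4.0):
    return difference >= threshold or random.random() < 0.5

random.seed(0)
track = staircase(toy_listener)
print(track[-10:])  # late trials hover around the toy threshold
```

The key design property is that the track converges on the difficulty level where the listener is neither reliably right nor reliably wrong, which is what lets a single web-based task measure both novices and experts.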
Overall, the researchers found that the type of language spoken impacted melodic and rhythmic ability but did not affect people’s capacity to tell whether someone was singing in tune or not. “Native speakers across our 19 tonal languages were better on average at discriminating between melodies than speakers of non-tonal languages, and similarly, all 19 were worse at doing the beat-based task,” says Liu.
That tonal speakers have a slight rhythmic disadvantage was a surprise, but the authors think that it’s probably due to a trade-off in attention to different kinds of acoustic features. “It’s potentially the case that tonal speakers pay less attention to rhythm and more to pitch, because pitch patterns are more important to communication when you speak a tonal language,” says Hilton.
This question of whether tonal speakers might have an edge when it comes to musicality has been previously explored, but previous studies were unable to separate linguistic influences from other cultural influences. “Prior studies mostly just compared speakers of one language to another, usually English versus Mandarin or Cantonese,” says Liu. “English and Chinese speakers also differ in their cultural background, and possibly their music exposure and training in school, so it’s very difficult to rule out those cultural factors if you’re just comparing those two groups.”
“We still find this effect even with a wide range of different languages and with speakers who vary a lot in their culture and background, which really supports the idea that the difference in musical processing in tonal language speakers is driven by their common tonal language experience rather than cultural differences,” says Liu.
“Music has a lot of universal features across different cultures, but this paper shows that those universals can underlie inter-individual and cross-cultural variability,” says senior author and cognitive scientist Samuel Mehr of the University of Auckland and Yale University.
Speaking a given type of language is no substitute for music lessons, however. “Tonal language speakers had a boost in their abilities proportional to about half the boost that you would have on average if you had music lessons,” says Hilton, “but non-tonal language speakers were better at rhythm, and both melody and rhythm are important parts of music.”
There was variation in musical processing and ability between the different tonal languages and between the different non-tonal languages, but the authors say that more study would be needed to dig into these smaller-scale patterns. Likewise, more research would be needed to understand the mechanisms and developmental pathways behind these differences.
“One huge challenge for understanding how humans process the world is breaking down big topics like music or language into their components like pitch or beat or melody,” says senior author Elika Bergelson, a professor of psychology and neuroscientist at Duke University. “A second challenge is sampling large enough samples of participants that are diverse enough in their experiences to actually be able to draw confident conclusions. This work takes an important step in both of these directions.”
Abstract of the study:
Tonal languages differ from other languages in their use of pitch (tones) to distinguish words. Lifelong experience speaking and hearing tonal languages has been argued to shape auditory processing in ways that generalize beyond the perception of linguistic pitch to the perception of pitch in other domains like music. We conducted a meta-analysis of prior studies testing this idea, finding moderate evidence supporting it. But prior studies were limited by mostly small sample sizes representing a small number of languages and countries, making it challenging to disentangle the effects of linguistic experience from variability in music training, cultural differences, and other potential confounds. To address these issues, we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat. The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.