Often thought of in pop culture as a funny quirk (think: Cameron Diaz singing karaoke in My Best Friend’s Wedding), tone deafness is actually a brain disorder. Whereas most of us derive great pleasure from music, someone who is tone deaf may hear music as a noisy din and may even avoid situations where music is present. A new study has shed light on this condition, finding that people who are tone deaf have weakened sound processing in their brainstems, possibly the result of a lifetime of limited musical exposure.
The new study specifically looked at “congenital amusia,” a form of tone deafness present from birth. The researchers examined auditory processing in a group of adults whose amusia was diagnosed using the Montreal Battery of Evaluation of Amusia (MBEA). They compared the amusics to a control group matched on a number of factors, including age, sex, and years of education.
To measure automatic sound encoding, they used a variant of electroencephalography (EEG), in which researchers measure electrical brain response to a repeating speech sound (“da”). Participants sat in a quiet room and watched a silent captioned movie, while the experimental stimuli were delivered through earbuds. The purpose of the movie was to help minimize movements during the recording.
From the scalp-recorded brain responses, the researchers were able to measure how weakly or strongly sound was encoded in the brainstem regions and found significantly impoverished signals among the amusics compared to the control group.
Erika Skoe of the University of Connecticut, a co-author on the study just published in the European Journal of Neuroscience, spoke with CNS about the fascinating study and its implications for musical training for everyone, as well as her personal interest in studying the topic.
CNS: How did you become personally interested in studying this topic?
Skoe: From a young age, I was drawn to music. When I was in preschool, my family moved from the U.S. to West Berlin, Germany. My parents were adamant that I learn German, so they enrolled me in the preschool across the street from our house. Being a shy child, I was reticent to speak German, but music provided my in-road to communicating in this new language. As my parents tell the story, I would return each day from school, sit down at the family’s piano, and plunk out the German songs that I had learned that day at school. This early aptitude for music paved the way for nearly two decades of formal music training. While I rarely play music anymore, my early interest in music scaffolded the questions I now pursue as a scientist.
My research examines how our early experiences with music, language, and other highly structured sounds influence how our brain develops and ages. I became interested in the topic of amusia because the listening experience of someone with this disorder is so distinctly different from most listeners. I was fortunate to be in a position to collaborate on this project with a team from Montreal that included Alexandre Lehmann (lead author), Patricia Moreau, and Isabelle Peretz, an expert in amusia, along with my mentor Nina Kraus at Northwestern University, a leader in the field of auditory neuroscience.
CNS: What is congenital amusia?
Skoe: Congenital amusia, more colloquially known as tone deafness, is a deficit affecting the perception and production of music. Congenital amusia is most commonly associated with difficulty distinguishing one musical note from another. In contrast to acquired amusia, congenital amusia is not the outcome of hearing loss, learning disability or brain damage.
CNS: Can you give an example of what the experience is of someone with congenital amusia?
Skoe: Most listeners, even those without formal musical training, can easily discriminate the sounds of adjacent notes on a piano, for example middle C from its neighbor D. However, amusics experience a type of perceptual “deafness” that impedes hearing notes of similar pitch as separate notes or hearing musical phrases with similar tonal structures as distinct tunes. As a consequence of this perceptual impairment, people with this disorder may have difficulty recognizing popular songs and often describe music as sounding unpleasant. This atypical experience with music may explain why many amusics avoid social situations that involve music.
CNS: How does automatic processing of sounds happen in the general population?
Skoe: Automatic encoding of sound refers to the process by which the brain represents the acoustic details of sound. This process of transforming the physical features of sound into a neural code, which occurs in auditory centers of the brainstem, is automatic in the sense that it always takes place, whether we are awake or asleep, actively listening or not. It is a process that is always on and that cannot volitionally be turned off.
Automatic sound encoding is shaped over time by our experiences with sound; in this sense, no two people encode sound in exactly the same way. We can see the imprint of experience on the auditory system even when someone is asleep. This is likely because the experiences we have during our waking hours have the potential to retool our brains in such a fundamental way that their impact can later be observed, even during more passive experiences with sound.
CNS: How do you measure this process?
Skoe: Automatic processing of sound is measured by placing small sensors on the scalp that detect electrical activity produced by the brain in response to sound. The brain signal that we measure provides rich information about how the brain translates sound, including how faithfully certain features of the sound are represented, such as the onset and offset of sound, as well as the frequency spectrum. In most listeners, we observe a high degree of fidelity between the sound that is played and the brain response that is measured. This clear relationship between the input and the output allows us to tease out what aspects of the acoustic signal are either strongly or weakly represented in the brainstem response from individuals with diverse auditory experiences and perceptual abilities.
CNS: How does musical training affect automatic sound processing?
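To make the idea of input-output fidelity concrete, here is a minimal toy sketch in Python. It is purely illustrative and is not the study’s actual analysis pipeline: the tone stimulus, the simulated “brain responses,” and the use of a simple Pearson correlation as the fidelity metric are all assumptions for demonstration. A response that tracks the stimulus closely yields a high correlation; a weakly encoded response yields a lower one.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 100 ms, 100 Hz tone sampled at 10 kHz stands in for a speech stimulus.
t = np.arange(0, 0.1, 1e-4)
stimulus = np.sin(2 * np.pi * 100 * t)

def simulated_response(stimulus, encoding_strength, noise_sd, rng):
    """Toy scalp response: a scaled copy of the stimulus plus neural noise.
    (Hypothetical model, not how real EEG data are generated.)"""
    return encoding_strength * stimulus + rng.normal(0, noise_sd, stimulus.size)

def encoding_fidelity(stimulus, response):
    """Pearson correlation between stimulus and response waveforms."""
    return np.corrcoef(stimulus, response)[0, 1]

# Stronger encoding (control-like) vs. weaker encoding (amusic-like).
control = simulated_response(stimulus, encoding_strength=1.0, noise_sd=0.3, rng=rng)
amusic = simulated_response(stimulus, encoding_strength=0.4, noise_sd=0.3, rng=rng)

print(f"control fidelity: {encoding_fidelity(stimulus, control):.2f}")
print(f"amusic fidelity:  {encoding_fidelity(stimulus, amusic):.2f}")
```

Running this shows a higher stimulus-response correlation for the strongly encoded (control-like) signal than for the weakly encoded (amusic-like) one, which is the qualitative pattern the interview describes.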
Skoe: Musicians have more sensitive auditory systems. This manifests as more precise encoding of rapid temporal features of sound, such as the acoustic transitions linking speech sounds. Musicians also show more robust representation of the acoustic cues that contribute to the pitch and spectral quality of a sound. This heightened sensitivity to sound, which may exist to some degree prior to training, has been shown to be strengthened over the course of musical training to influence not only how music is automatically processed in the brain but also how other meaningful sounds in the environment, including speech, are processed.
CNS: And what did you find was different in this processing for amusics?
Skoe: Compared to our control group, amusics had weaker neural encoding that emerged as more sluggish responses to the onset of sound and less robust representation of the spectral quality of sound. Within the population we tested, we found that the more severe the musical disorder was, the greater the degree of auditory impairment.
CNS: What were you most excited to find?
Skoe: We were most excited to find that the patterns of auditory impairments observed in amusics mirror what is observed in musicians. That is, for the auditory encoding measures on which musicians show advantages, amusics show disadvantages. Whereas previous research found that musical expertise can positively impact automatic sound encoding, this study provides the first evidence that musical impairments can weaken the aspects of sound encoding that are strengthened through musical training.
We believe that the impairments in automatic sound encoding in the brainstem are not the underlying cause of amusia in this population but are instead the long-term consequence of a musical processing impairment that originates higher in the brain. As amusic listeners devalue music and limit their exposure to it, the centers of the brain that relay information about the physics of sound may undergo a type of neural atrophy that weakens automatic sound encoding in lower, brainstem structures. Our findings support the possibility that the more severe the musical impairment, the more severely automatic sound encoding is affected. Moreover, because these brainstem centers are a common processing hub for speech and music, a musical disorder has the potential to impinge upon how speech is processed, as we observed in this study.
CNS: How does this fit in with other work on the subject?
Skoe: We interpret our study findings within the context of several recent lines of work on the auditory brainstem. For example, a recent study out of Hong Kong evaluated automatic sound encoding in Cantonese-speaking young adults with amusia. Cantonese is a tonal language in which pitch provides an important cue for distinguishing one word from another. Despite having difficulty identifying Cantonese tones, these young adult amusics did not show any signs of abnormalities in auditory brainstem encoding.
In light of our findings, this could suggest several (not mutually exclusive) possibilities: (1) that abnormalities in auditory brainstem encoding take decades to emerge and may not be observed until later in life (our participants were on average 20-30 years older than those in the Hong Kong study); (2) that the amusics in our study were undergoing more rapid aging of the auditory system than our age-matched control group, a viewpoint supported by previous work in the Kraus Lab suggesting that musicians’ auditory systems age at a slower pace; or (3) that the Cantonese amusics are buffered from abnormalities in auditory encoding, an interpretation based on a body of literature originating out of the Krishnan lab at Purdue University, which has demonstrated that tonal language speakers have more robust encoding of pitch cues in the brainstem than listeners who don’t speak tonal languages.
CNS: What is the significance of your findings for most of us when it comes to music processing?
Skoe: Our findings have implications for the general population because they support the idea that the auditory system operates by a “use it or lose it” principle. In contrast to what is seen in musicians, our research suggests that limiting one’s exposure to music may degrade the auditory system over the long term, compounding the declines in hearing that occur naturally with aging.
CNS: What’s next for this work? What do you ultimately hope to accomplish?
Skoe: A running theme in my research is the idea that “your past shapes your present”: The way you hear sound right now is dictated by the experiences with sound you’ve had up until now. The findings of this study are a clear embodiment of this theme.
We’d like to expand upon this work by more directly examining the theory that brainstem impairments post-date the onset of amusia by testing amusics of all ages. We are interested in following individuals over multiple years to observe changes in auditory processing that might coincide with music listening habits in these individuals. By having a more complete understanding of how musical impairments affect the auditory system, we will be in a better position to delineate the potential of musical training to ameliorate auditory impairments observed in a variety of populations (e.g., children with learning disabilities, children from families of low socioeconomic standing, hearing-impaired individuals).
-Lisa M.P. Munoz
The paper, “Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia” by Alexandre Lehmann, Erika Skoe, Patricia Moreau, Isabelle Peretz, and Nina Kraus, was published online in the European Journal of Neuroscience on June 18, 2015.