In CRBLM Research Spotlight articles, we provide accessible summaries highlighting current and ongoing work by our members on various themes.
How Can We Tell That a Violin Is Afraid? Interpreting Emotional Cues in Language and Music
If you’re looking for the perfect Halloween mood music, check out the piece In Vain by Georg Friedrich Haas. Even if you’re sitting in a comfortable chair with a cup of tea, you can feel the shivers going down your spine as you listen. How does this piece communicate fear so well?
We can experience emotions directly or perceive them secondarily, through media such as language and music. There are auditory cues in music and language that can signal different emotions, and people’s ability to use these cues can vary.
Most researchers agree that we use a two-dimensional model to identify emotions. The first dimension is valence: a fancy way of saying whether the emotion is perceived as positive or negative. Joy and peace are examples of positive valence, while anger and sadness are negative. The second dimension, arousal, has to do with how much energy the emotion carries. Joy and anger are high-arousal emotions, while sadness and peace are low-arousal. Valence and arousal work independently to characterize an emotion: for example, anger has negative valence and high arousal.
Arousal and valence ratings for happy, sad, angry, and neutral stimuli. Adapted from Paquette and colleagues [2].
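If it helps to picture the model more concretely, here is a minimal sketch in Python of how each emotion can be treated as a point along the two independent dimensions. The specific numbers are made up for illustration and are not measured ratings from any study.

```python
# A minimal sketch of the two-dimensional model: each emotion is a point
# defined by valence (negative to positive) and arousal (low to high energy).
# The numeric values are illustrative placeholders, not measured ratings.
emotions = {
    "joy":     {"valence": +0.8, "arousal": +0.7},  # positive, high energy
    "anger":   {"valence": -0.7, "arousal": +0.8},  # negative, high energy
    "peace":   {"valence": +0.7, "arousal": -0.6},  # positive, low energy
    "sadness": {"valence": -0.6, "arousal": -0.7},  # negative, low energy
}

# The dimensions are independent: knowing that an emotion is negative
# tells you nothing about how much energy it carries, and vice versa.
for name, point in emotions.items():
    print(f"{name:>7}: valence {point['valence']:+.1f}, arousal {point['arousal']:+.1f}")
```

In this scheme, the fear evoked by In Vain would sit in the negative-valence, high-arousal corner of the space.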
Auditory signals like music and language have different qualities that help us gauge valence and arousal separately. These include loudness, energy, pitch (how high or low the sound is), and rate (speaking rate or musical tempo). For example, sad speech and sad music have less energy at high frequencies, which gives them a “dark” sound and communicates low arousal; other qualities of a sad sound contribute to its negative valence.
Upcoming work from Maël Mauchand and Marc Pell shows that when people complain about something, they use a higher and more variable pitch: cues that evoke anger and surprise. The intensity of emotions communicated through speech also seems to be less affected by culture than that of emotions communicated through facial expressions (upcoming work by Shuyi Zhang, also at the Pell lab).
Which areas of the brain process the emotional cues in music and language? Paquette and colleagues recently ran an fMRI experiment in which individuals listened to emotional sounds, and the researchers used machine learning to find out which brain areas reliably encoded the different emotions: the superior temporal gyrus, the anterior superior temporal sulcus, and the upper premotor cortex. The patterns of brain activity in these areas were similar for vocal and musical emotions.
Process of perceiving emotions. The sound comes into the brain, which processes the auditory cues and determines the emotion in the superior temporal and premotor areas, evaluating it in terms of arousal and valence. Note that the brain regions found were bilateral (on both sides of the brain) in the study. Adapted from Paquette and colleagues [1] and [2].
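To give a rough idea of what “used machine learning” means here, the sketch below mimics the cross-classification logic in Python with scikit-learn and synthetic data: a classifier is trained on brain-activity patterns evoked by vocal emotions and then tested on patterns evoked by musical emotions. The data, labels, and classifier choice are all stand-ins for illustration, not the authors’ actual analysis pipeline.

```python
# Illustrative cross-classification sketch with synthetic "brain activity".
# Train an emotion classifier on voice trials, then test it on music trials;
# above-chance accuracy would suggest a shared emotion code across domains.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 100
labels = rng.integers(0, 3, n_trials)  # e.g., 0 = happy, 1 = sad, 2 = fear

# Fake voxel patterns: noise plus a label-dependent shift shared by both domains
voice_patterns = rng.normal(size=(n_trials, n_voxels)) + labels[:, None]
music_patterns = rng.normal(size=(n_trials, n_voxels)) + labels[:, None]

clf = LinearSVC().fit(voice_patterns, labels)   # train on voice trials only
accuracy = clf.score(music_patterns, labels)    # test on music trials
print(f"Cross-classification accuracy (voice to music): {accuracy:.2f}")
```

If a classifier trained only on voices can still label the music trials better than chance, that points to a shared neural code for emotions across the two domains, which is essentially what the study reports for the superior temporal and premotor regions.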
If we aren’t able to process these auditory signals normally, our ability to identify emotions in music and language can be impaired. For example, people with cochlear implants can get good information about rate from their hearing devices, but they don’t get very reliable pitch information. This makes it difficult for them to decipher emotional sounds in both music and language. More recent work from the Peretz and Lehmann labs showed that cochlear implant users pay a lot more attention to the overall energy and roughness of the sound since they are less able to use pitch. However, this doesn’t seem to be enough for them to separate the dimensions of valence and arousal. Music, which relies more heavily on pitch to indicate emotion, is particularly challenging for cochlear implant users.
Emotional identification scores for people with cochlear implants, showing a one-dimensional strategy. Adapted from Paquette and colleagues [2].
This is reflected in brain activity as well. Deroche and colleagues found that the brain’s response to emotional sound bursts was less robust and more spread out in time for cochlear implant users. Music also had a less robust response than voice, even for people without an implant.
On the flip side, emotion in music is readily perceived by individuals with Autism Spectrum Disorder, perhaps because they generally have a good perception of musical pitch. This is a population that normally has a hard time interpreting emotion, so their intact ability to perceive emotions in music can be very useful in both diagnosis (to differentiate autism from other conditions) and therapy (to help autistic people learn about emotions and transfer that knowledge to other situations). CRBLM member Eve-Marie Quintin recently reviewed this subject in a Frontiers in Neural Circuits article.
Another population that may have an advantage for deciphering musical emotions is, of course, musicians. Preliminary fMRI work by Whitehead and Armony shows that while musicians’ and non-musicians’ brains respond similarly to emotional voices, they process emotional music differently.
To summarize, the brain picks up on auditory cues that indicate valence or arousal and adds them up to determine which emotion is being communicated. If we don’t have access to all of the auditory cues (as is the case for individuals with cochlear implants), this process can be difficult, especially for music. On the other hand, autism and music training are associated with intact or improved emotional processing of music.
Bonus: if you want more music for your Halloween soundtrack, check out The American Scholar’s article This Is What Terror Sounds Like.
Click here for more CRBLM Research Spotlight posts
References
* Starred references are upcoming conference presentations
Mauchand M & Pell MD. (2019). Emotive attributes of complaining speech. Auditory Perception & Cognition, Montreal, Canada.*
Zhang S & Pell MD. (2020). Cross-cultural Differences in Vocal Expression and Emotion Perception. Society for Personality and Social Psychology, New Orleans, USA.*
Paquette S, Takerkart S, Saget S, Peretz I, & Belin P. (2018). Cross-Classification of Musical and Vocal Emotions in the Auditory Cortex. Annals of the New York Academy of Sciences, 1423(1), 329-337.
Paquette S, Ahmed GD, Goffi-Gomez MV, Hoshino ACH, Peretz I, & Lehmann A. (2018). Musical and vocal emotion perception for cochlear implants users. Hearing Research, 370, 272-282.
Deroche MLD, Felezeu M, Paquette S, Zeitouni A, & Lehmann A. (2019). Neurophysiological differences in emotional processing by cochlear implant users, extending beyond the realm of speech. Ear and Hearing, 40, 1197-1209.
Quintin EM. (2019). Music-Evoked Reward and Emotion: Relative Strengths and Response to Intervention of People With ASD. Frontiers in Neural Circuits, 13, 49.
Whitehead J & Armony JL. (2019). Combined fMRI-adaptation (fMRI-a) and multivariate pattern analysis (MVPA) reveal difference between musicians and non-musicians in response to auditory emotional information. Social & Affective Neuroscience Society, New York, USA.*