Cognitive Neuroscience Society

The Journal of Cognitive Neuroscience


Speech or Song? Identifying How the Brain Perceives Music

April 26, 2022

CNS 2022 Press Release

April 26, 2022 – SAN FRANCISCO – Most neuroscientists who study music have something in common: they play a musical instrument, in many cases from a young age. Their drive to understand how the brain perceives and is shaped by music springs from a deep love of music. This passion has translated into a wealth of discoveries about music in the brain, including recent work identifying the ways in which the brain distinguishes between music and speech, presented today at the annual meeting of the Cognitive Neuroscience Society in San Francisco.

“Over the past two decades, many excellent studies have shown similar mechanisms between speech and music across many levels,” says Andrew Chang of New York University, a lifelong violinist, who organized a symposium on music and speech perception at the CNS meeting. “However, a fundamental question, often overlooked, is what makes the brain treat music and speech signals differently, and why humans need two distinct auditory signals.”

New work, enabled in part by computational advances, points toward differences in pitch and rhythm as key cues that enable people, starting in infancy, to distinguish speech from music, and toward the brain’s predictive capabilities as a common foundation of both speech and music perception.

Exploring acoustical perception in infants

From a young age, cognitive neuroscientist Christina Vanden Bosch der Nederlanden of the University of Toronto Mississauga has been singing and playing the cello, which helped shape her research career. “I remember sitting in the middle of the cello section and we were playing some particularly beautiful music – one where the whole cello section had the melody,” she says, “and I remember having this emotional response and wondering ‘how is it possible that I can have such a strong emotional response from the vibrations of my strings traveling to my ear? That seems wild!’”

That experience started der Nederlanden on a long journey of wanting to understand how the brain processes music and speech in early development. Specifically, she and colleagues are investigating whether babies, who are learning about communicative sounds through experience, even know the difference between speech and song. 

“These are seemingly simple questions that actually have a lot of theoretical importance for how we learn to communicate,” she says. “We know that from age 4, children can and readily do explicitly differentiate between music and language. Although that seems pretty obvious, there has been little to no data asking children to make these sorts of distinctions.”

At the CNS meeting, der Nederlanden will be presenting new data, collected right before and during the COVID-19 pandemic, about the acoustic features that shape music and language during development. In one experiment, 4-month-old infants heard speech and song, delivered both in a sing-songy infant-directed manner and in a monotone speaking voice, while the researchers recorded their electrical brain activity with electroencephalography (EEG).


“This work novelly suggests that infants are better at tracking infant-directed utterances when they’re spoken compared to sung, and this is different from what we see in adults who are better at neural tracking sung compared to spoken utterances,” she says. The team also found that pitch and rhythm each affected brain activity for speech compared to song; for example, exaggerated pitch was related to better neural tracking of infant-directed speech – identifying the lack of “pitch stability” as an important acoustic feature for guiding attention in babies.
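“Neural tracking” here refers to how closely ongoing brain activity follows the acoustic signal. As a rough intuition only, here is a minimal Python sketch of one such measure: the peak lagged correlation between a stimulus amplitude envelope and a single EEG channel. The data are simulated and the method deliberately simplified; studies like this one typically use temporal response functions or related multivariate models.

```python
# Illustrative sketch only: a simplified "neural tracking" score, computed as
# the peak lagged Pearson correlation between a stimulus amplitude envelope
# and one (simulated) EEG channel. All data and parameters here are invented.
import numpy as np

def neural_tracking(envelope: np.ndarray, eeg: np.ndarray, max_lag: int = 50) -> float:
    """Peak correlation between envelope and EEG across candidate lags."""
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    scores = []
    for lag in range(max_lag):
        # Align envelope[t] with eeg[t + lag]: the brain response trails the sound
        e, s = env[: len(env) - lag], sig[lag:]
        scores.append(np.corrcoef(e, s)[0, 1])
    return float(max(scores))

rng = np.random.default_rng(0)
envelope = rng.random(1000)
# Toy EEG: a noisy copy of the stimulus, delayed by 10 samples
eeg = np.roll(envelope, 10) + 0.5 * rng.standard_normal(1000)
print(f"tracking score: {neural_tracking(envelope, eeg):.2f}")
```

In this toy setup, a higher score means the EEG follows the sound’s envelope more faithfully at some plausible neural delay, which is the intuition behind comparing tracking of spoken versus sung utterances.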

While the exaggerated, unstable pitch contours of infant-directed speech have been well established as a feature infants love, this new research shows they also help signal whether someone is hearing speech or song. Pitch stability is a feature, der Nederlanden says, that “might signal to a listener ‘oh this sounds like someone singing,’” and the lack of pitch stability can conversely signal to infants that they are hearing speech rather than playing with sounds in song.
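One way to make “pitch stability” concrete: singing tends to hold discrete pitches while speech glides continuously, so the variability of the pitch (f0) contour within short time windows separates the two. The sketch below illustrates this idea on synthetic contours; the metric and the contours are invented for illustration and are not taken from the study.

```python
# Hypothetical illustration of "pitch stability": song holds steady pitches,
# speech glides, so the typical within-window variability of the f0 contour
# (in semitones) is near zero for song and clearly positive for speech.
import numpy as np

def pitch_stability(f0_hz: np.ndarray, win: int = 10) -> float:
    """Median within-window std of the pitch contour; lower = more stable."""
    semitones = 12 * np.log2(f0_hz / f0_hz[0])  # Hz -> relative semitone scale
    stds = [np.std(semitones[i:i + win]) for i in range(0, len(semitones) - win, win)]
    return float(np.median(stds))

t = np.linspace(0, 2, 200)
sung = 220 * 2 ** (np.round(4 * t) / 12)       # stepwise notes, held steady
spoken = 220 * 2 ** (np.sin(3 * t) * 4 / 12)   # continuous, gliding contour
print(f"sung:   {pitch_stability(sung):.3f}")   # ~0: stable pitch between note changes
print(f"spoken: {pitch_stability(spoken):.3f}") # clearly larger: pitch never settles
```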

In an online experiment, der Nederlanden and colleagues asked kids and adults to qualitatively describe how music and language are different. “This gave me a rich dataset that tells me a lot about how people think music and language differ acoustically and also in terms of how the functional roles of music and language differ in our everyday lives,” she explains. “For the acoustic differences, kids and adults described features like tempo, pitch, and rhythm as important for differentiating speech and song.”

In future work, der Nederlanden hopes to move toward more naturalistic settings, including using mobile EEG to test music and language processing outside of the lab. “I think the girl sitting in the orchestra pit, geeking out about music and emotion, would be pretty excited to find out that she’s still asking questions about music and finding results that could have answered her questions from over 20 years ago!”

Identifying the predictive code of music

Guilhem Marion of École Normale Supérieure has two passions that drive his research: music and computer science. He has combined those interests to create novel computational models of music that are helping researchers understand how the brain perceives music through “predictive coding,” similar to how people predict patterns in language.

“Predictive coding theory explains how the brain tries to predict the next note while listening to music, which is exactly what computational models of music do when generating new music,” he explains. Marion is using those models to better understand how culture affects music perception, drawing on the musical knowledge listeners build from their individual environments.

In new work conducted with Giovanni Di Liberto and colleagues, Marion recorded the EEG activity of 21 professional musicians who either listened to or imagined four Bach chorales. In one study, the team quantified the surprise of each note using a computational model trained on a large database of Western music. This surprise was a “cultural marker of music processing,” Marion says, showing how well the notes could be predicted from a person’s native musical environment.
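To give a flavor of how note-level “surprise” can be computed, the toy sketch below trains a bigram (Markov) model on a handful of made-up melodies and scores each note of a new melody by its surprisal, -log2 P(note | previous note). The actual study used a far richer statistical model trained on a large corpus of Western music; the corpus, melody, and smoothing here are hypothetical illustrations of the principle.

```python
# Toy sketch of note "surprisal": a bigram model over note names, trained on a
# tiny invented corpus, scores how unexpected each transition in a melody is.
import numpy as np
from collections import Counter, defaultdict

corpus = ["CDEFG", "CEGEC", "GFEDC", "CDECD"]  # hypothetical training melodies
counts = defaultdict(Counter)
for melody in corpus:
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1

def surprisal(prev: str, nxt: str, alpha: float = 1.0, vocab: int = 7) -> float:
    """Surprisal in bits: -log2 P(next | prev), with add-alpha smoothing."""
    c = counts[prev]
    p = (c[nxt] + alpha) / (sum(c.values()) + alpha * vocab)
    return float(-np.log2(p))

melody = "CDEFE"
for prev, nxt in zip(melody, melody[1:]):
    # Common transitions in the corpus score low; rare ones score high
    print(f"{prev}->{nxt}: {surprisal(prev, nxt):.2f} bits")
```

A listener’s “native musical environment” corresponds here to the training corpus: the same note can be unsurprising under one corpus and highly surprising under another.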

“Our study showed for the first time the average EEG response to imagined musical notes and showed that it was correlated with the musical surprise computed using a statistical model of music,” Marion says. “This work has broad implications in music cognition, but more generally in cognitive neuroscience, as it illuminates the way the human brain learns new languages or other structures that will later shape its perception of the world.”


Chang says that such computational-based work is enabling a new type of music cognition study that balances good experimental control with ecological validity, a balance that is challenging given the complexity of music and speech sounds. “You often either make the sounds unnatural if everything is well controlled for your experimental purpose, or preserve the natural properties of speech or music, but then it becomes difficult to fairly compare the sounds between experimental conditions,” he explains. “Marion and Di Liberto’s groundbreaking approach enables researchers to investigate, and even isolate, the neural activities while listening to a continuous natural speech or music recording.”

Chang, who has been playing the violin since he was 8 years old, is excited to see the progress made in music cognition studies in just the last decade. “When I started my PhD in 2013, only a few labs in the world were focusing on music,” he says. “But now there are many excellent junior and even well-established senior researchers from other fields, such as speech, around the globe starting to get involved in, or even devote themselves to, music cognitive neuroscience research.”

Understanding the relationship between music and language “can help us explore the fundamental questions of human cognition, such as why humans need music and speech, and how humans communicate and interact with each other via these forms,” Chang says. “Also, these findings are the basis for the potential applications in clinical and child development domains, such as whether music can be used as an alternative form of verbal communication for individuals with aphasia, and how music facilitates infants learning speech.”

–

The symposium “From Acoustics to Music or Speech: Their (Dis)Similar Perceptual Mechanisms” takes place at 1:30 pm PT on Tuesday, April 26, as part of the CNS 2022 annual meeting, held April 23–26, 2022.

CNS is committed to the development of mind and brain research aimed at investigating the psychological, computational, and neuroscientific bases of cognition. Since its founding in 1994, the Society has been dedicated to bringing its 2,000 members worldwide the latest research to facilitate public, professional, and scientific discourse.

Media contact:

Lisa M.P. Munoz
Public Information Officer, Cognitive Neuroscience Society
cns.publicaffairs[@]gmail.com
