
Poster E89

Exploring the Encoding of Timbre Perception in EEG Using Machine Learning

Poster Session E - Monday, April 15, 2024, 2:30 – 4:30 pm EDT, Sheraton Hall ABC

Praveena Satkunarajah1, Sarah D Power1, Benjamin Rich Zendel1; 1Memorial University of Newfoundland, St. John's, NL, CA

Timbre is the perceptual quality of sound that is neither loudness nor pitch. It is a critical feature that helps organize the auditory environment into perceptual streams; in music, for example, timbre can be used to perceptually segregate instruments that are playing concurrently. Timbre perception depends on multiple acoustic cues, including harmonic structure, amplitude envelope, and onset and offset characteristics, which makes it challenging to understand how the brain processes timbre. Machine learning offers a promising approach: classifiers may be able to decode the timbre of a presented sound from neurophysiological signals. To test this possibility, participants were presented with a series of brief tones that varied in timbre (trombone, clarinet, cello, piano, and pure tone) while their EEG was recorded. A gradient boosting classifier was trained on several types of features, ranging from raw EEG inputs to derived features such as harmonics-based spectral information, regularity-based features, and ERP-based features. Raw EEG yielded the best performance, with 5-way classification accuracies exceeding chance by 10–20% for most participants. More advanced classification algorithms or different features may further improve discrimination between tones with a musical timbre. This research offers a better understanding of how timbre processing is encoded in EEG signals and provides insights for the development of intelligent, BCI-based hearing aids.
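The pipeline described above (epoched EEG responses, flattened into feature vectors, fed to a gradient boosting classifier and scored against the 20% chance level for 5-way classification) can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic data; the epoch dimensions, the class-dependent offset, and every variable name are assumptions for demonstration, not the authors' actual code or data.

```python
# Hedged sketch of a 5-way "raw EEG" timbre classification, in the spirit
# of the abstract. The data below is synthetic: a small class-dependent
# offset stands in for real evoked-response structure.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["trombone", "clarinet", "cello", "piano", "pure tone"]
n_epochs, n_channels, n_samples = 300, 8, 32  # hypothetical epoch shape

# One flattened channels-x-time vector per tone presentation.
y = rng.integers(0, len(classes), n_epochs)
X = rng.normal(size=(n_epochs, n_channels * n_samples)) + 0.3 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
chance = 1.0 / len(classes)  # 20% for 5-way classification
print(f"test accuracy = {acc:.2f} (chance = {chance:.2f})")
```

In practice, performance would be reported per participant relative to this chance level, as the abstract does.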

Topic Area: PERCEPTION & ACTION: Audition


April 13–16  |  2024