Poster Session C, Sunday, March 24, 5:00 – 7:00 pm, Pacific Concourse
Complexity Matching to EEG Response of Speech and Music
Adolfo G. Ramirez-Aristizabal1, Daniel C. Comstock1, Christopher T. Kello1; 1University of California Merced
Both speech and music have Hierarchical Temporal Structure (HTS): e.g., phonemes occur over short time scales and words over longer time scales. Previous electroencephalography (EEG) studies have demonstrated that neural activity can track the rhythm in complex auditory signals. Complexity Matching (CM) theory posits that coupled complex systems have similar HTS functions, and we use Allan Factor (AF) variance to quantify HTS. This approach has not previously been applied to assess CM between complex auditory stimuli and corresponding neural activity (measured here by fitting dipole models to Independent Components). Thus, the present study applied the AF measure to both complex auditory stimuli and neural activity to assess similarity in HTS. During 32-channel EEG recording, participants (n = 11) listened to five ~4.5-minute audio clips of speech and music, one of which (classical music) was repeated. While AF analyses revealed divergent scaling across the different auditory stimuli, AF analyses of EEG amplitude envelopes showed only marginal differences in their 1/f scaling; that is, no direct CM was found. Post-hoc tests showed that the difference in EEG AF was restricted to an effect of repeating the classical music. EEG responses to the different acoustic signals were nonetheless discriminable by training a support vector machine classifier on low-frequency amplitude fluctuations (68% classification accuracy for 0.1 to 0.5 Hz). These results suggest that EEG amplitude fluctuations are related to the amplitude structure of auditory signals, and they point the way to future studies that account for mediating factors, such as attention, in the brain response to speech and music.
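The Allan Factor mentioned above quantifies how event-count variability scales with window size: for a point process (e.g., peaks extracted from an amplitude envelope), A(T) = E[(N(j+1) - N(j))^2] / (2 E[N(j)]), where N(j) is the event count in the j-th contiguous window of duration T. A Poisson process yields A(T) near 1 at all scales, whereas power-law growth of A(T) with T indicates hierarchical temporal structure. The following is a minimal sketch of this computation, not the authors' implementation; the function name and parameters are our own.

```python
import numpy as np

def allan_factor(event_times, window_sizes):
    """Estimate the Allan Factor A(T) of a point process.

    event_times  : 1-D array of event times (e.g., envelope peak times, in s)
    window_sizes : iterable of window durations T (in s)

    Returns an array of A(T) estimates, one per window size (NaN where
    fewer than two windows fit into the recording).
    """
    event_times = np.asarray(event_times, dtype=float)
    duration = event_times.max()
    af = []
    for T in window_sizes:
        n_windows = int(duration // T)
        if n_windows < 2:
            af.append(np.nan)
            continue
        # Count events in contiguous, non-overlapping windows of length T.
        edges = np.arange(n_windows + 1) * T
        counts, _ = np.histogram(event_times, bins=edges)
        # A(T) = mean squared difference of adjacent counts / (2 * mean count).
        diffs = np.diff(counts)
        mean_count = counts.mean()
        af.append(np.mean(diffs**2) / (2 * mean_count)
                  if mean_count > 0 else np.nan)
    return np.array(af)
```

In practice, A(T) is evaluated over a logarithmic range of T, and the slope of log A(T) versus log T summarizes the 1/f-like scaling compared between stimulus and EEG envelopes.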
Topic Area: METHODS: Electrophysiology