Poster Session D, Monday, March 25, 8:00 – 10:00 am, Pacific Concourse
Neural characteristics of acoustic prosody during continuous real-life speech
Satu Saalasti1,2, Enrico Glerean2, Antti Suni1, Jussi Alho2, Juraj Simko1, Iiro P. Jääskeläinen2, Martti Vainio1, Mikko Sams2; 1University of Helsinki, 2Aalto University
When we exclaim “I told you so!”, we mark the word “told” acoustically: it is longer, pronounced with more effort, and produced with higher pitch and a different voice quality than the rest of the utterance. Such prosody helps the listener interpret the meaning of the utterance. Studying the neural processing of prosody during real-life speech has been hindered by a lack of efficient analysis methods. The current study aims to quantify the prosodic characteristics of speech using a recently developed continuous wavelet transform (CWT) method and to explore the correlation between the resulting prosodic signals and brain activity. 3T functional magnetic resonance imaging (fMRI) was used to record the brain activity of 29 female participants while they listened to an 8-minute narrative. The similarity of their brain activity was estimated by voxel-wise comparison of the BOLD signal time courses. A CWT-based scale-space analysis was used to extract the prosodic characteristics of the narrative, and the obtained wavelet time series were used as regressors for the fMRI data to reveal how they map onto brain activity. We found that the acoustic-prosodic properties of the narrated speech aligned hierarchically with different linguistic levels: phonemes, words, and utterances. The model predicted brain activity in medial temporal as well as superior fronto-lateral areas, in line with what is known about brain activity related to speech and language processing. Importantly, the automatically created model distinguished different levels of the linguistic hierarchy, as different wavelet scales of the model elicited different brain activity.
Topic Area: LANGUAGE: Other