Poster Session B, Sunday, March 24, 8:00 – 10:00 am, Pacific Concourse
The semantic timescales of speech prediction unfold along an auditory dorsal processing hierarchy
Lea-Maria Schmitt1, Sarah Tune1, Julia Erb1, Anna Rysop2, Gesa Hartwigsen2, Jonas Obleser1; 1University of Lübeck, Lübeck, Germany, 2Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
When poor acoustics challenge speech comprehension, the angular gyrus increasingly draws on semantic context to predict upcoming speech. Because research so far has focused on predictions informed by short timescales of context (i.e., sentences), we here ask how the brain builds up predictions when confronted with the multitude of timescales underlying natural speech. In a functional magnetic resonance imaging (fMRI) study, healthy participants (N=31, 19–73 years) passively listened to a one-hour, naturally narrated story embedded in a competing stream (SNR 0 dB) of resynthesized natural sounds. We used the similarity of word vectors to model semantic predictability at five timescales corresponding to a logarithmic increase in context length, and compared the results with a more fine-grained linguistic model scaled by typical segments of written language (i.e., words, sentences, paragraphs, chapters). In an initial analysis of 12 participants, we found the timescales of semantic prediction to be organized along an auditory dorsal processing hierarchy: informative short timescales elicited increased activity in lower-order cortical regions such as the posterior portion of the superior temporal gyrus, whereas higher-order cortical regions such as the angular gyrus favoured informative long timescales. Furthermore, the brain areas most sensitive to the longest timescales overlapped widely with the dorsal default mode network. Critically, the linguistic model outperformed the logarithmic model, which speaks to a neural gradient coding for units of semantic abstraction rather than for semantic load per se. Next, we will investigate how brain areas representing different contextual timescales interact along the processing hierarchy to predict what someone is about to say.
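The timescale-resolved predictability measure described above can be sketched roughly as follows. This is a hypothetical illustration only (toy random vectors in place of pretrained embeddings, illustrative context lengths, a simple `predictability` helper), not the authors' actual pipeline:

```python
# Hypothetical sketch: semantic predictability of a word, defined here as the
# cosine similarity between its word vector and the averaged vector of the
# preceding context, evaluated at several context lengths (timescales).
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for pretrained word vectors (e.g., word2vec/GloVe); 50-dim.
story = ["the", "storm", "clouds", "gathered", "rain", "fell"]
vectors = {w: rng.standard_normal(50) for w in story}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predictability(words, i, context_len):
    """Similarity of word i to the mean vector of its preceding context."""
    ctx = words[max(0, i - context_len):i]
    if not ctx:
        return float("nan")
    ctx_vec = np.mean([vectors[w] for w in ctx], axis=0)
    return cosine(vectors[words[i]], ctx_vec)

# Context lengths increasing logarithmically (illustrative values only);
# each length yields one predictability time course over the story.
for L in [1, 2, 4, 8, 16]:
    scores = [predictability(story, i, L) for i in range(1, len(story))]
    print(L, np.round(scores, 2))
```

In an analysis like the one reported, each of these per-timescale predictability time courses would then serve as a regressor for the fMRI signal.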
Topic Area: LANGUAGE: Semantic