Poster F2, Tuesday, March 27, 8:00-10:00 am, Exhibit Hall C
Prior knowledge guides speech segregation in human auditory cortex
Yuanye Wang1,2,3, Jianfeng Zhang4, Jiajie Zou4, Huan Luo1,2,3, Nai Ding4,5,6,7; 1School of Psychological and Cognitive Sciences, Peking University, 2McGovern Institute for Brain Research, Peking University, 3Beijing Key Laboratory of Behavior and Mental Health, Peking University, 4College of Biomedical Engineering and Instrument Sciences, Zhejiang University, 5Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China, 6State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China, 7Interdisciplinary Center for Social Sciences, Zhejiang University, Hangzhou, China # These authors contributed equally
Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g., pitch) with top-down prior knowledge about the sound streams. In a multi-talker environment, auditory cortex can segregate different speakers within about 100 ms. Here, we used magnetoencephalography (MEG) to investigate the temporal and spatial signatures of how the brain uses prior knowledge to segregate two speech streams produced by the same speaker, which can hardly be separated on the basis of bottom-up acoustic cues. In a primed condition, participants knew the target speech stream in advance, while in an unprimed condition no such prior knowledge was available. Neural encoding of each speech stream was characterized by MEG responses tracking the speech envelope. We demonstrate that the speech segregation effect in bilateral superior temporal gyrus (STG) and superior temporal sulcus (STS) is much stronger in the primed condition than in the unprimed condition. Priming effects emerge at a latency of about 100 ms and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation mainly by suppressing neural tracking of the non-target speech stream. In sum, prior knowledge enables reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up segregation cues.
Topic Area: ATTENTION: Auditory