
Poster F114

Neural integration of syntactic and visual temporal information in sentence comprehension

Poster Session F - Tuesday, March 10, 2026, 8:00 – 10:00 am PDT, Fairview/Kitsilano Ballrooms

Ruixi Piao1 (ruixip@umich.edu), Yike Li1, Cody Zhewei Cao2, David Brang1; 1University of Michigan, 2Northwestern University

During sentence processing, the brain continuously tracks the temporal structure of unfolding speech. Prior work suggests that higher-level linguistic information, such as syntactic structure, provides top-down temporal predictions that help organize incoming speech, whereas visual speech (lip and articulator movements) provides bottom-up temporal cues that reflect moment-to-moment acoustic dynamics. However, how these two sources of temporal information interact remains unclear. Here, we tested whether syntactic and visual timing cues contribute independently to neural encoding of speech or interact during audiovisual sentence comprehension. Intracranial EEG (iEEG) was recorded from nine patients listening to 100 English sentences in five conditions: a visual-only baseline and four auditory-present conditions varying in modality (auditory-alone vs. audiovisual) and acoustic degradation (temporal vs. spectral; AM vs. FM). Analyses focused on the auditory-present conditions. For each condition, neural activity was modeled using multivariate temporal response functions (mTRFs) fit to acoustic features, with or without additional syntactic predictors. Syntax-related improvement was quantified as the increase in predictive accuracy relative to acoustic-only models. Results showed greater syntax-related improvements for audiovisual compared to auditory-alone speech, indicating that visual speech enhances syntactic temporal predictions. In addition, improvements were larger for AM- than FM-degraded speech, suggesting that syntactic structure primarily compensates for disrupted temporal acoustic cues. Together, these findings indicate that syntactic and visual temporal cues interact during continuous speech comprehension, supporting a cross-modal temporal scaffolding account.
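The model-comparison logic described above — fitting lagged encoding models with and without syntactic predictors and taking the gain in predictive accuracy as the syntax-related improvement — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the feature names, lag range, and ridge parameter are assumptions for demonstration.

```python
import numpy as np

def lagged_design(features, lags):
    """Stack time-lagged copies of each feature column (a simple mTRF design matrix)."""
    T, F = features.shape
    X = np.zeros((T, F * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(features, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0  # zero out wrapped-around samples
        X[:, j * F:(j + 1) * F] = shifted
    return X

def ridge_fit_predict(X_train, y_train, X_test, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(X_train.shape[1]),
                        X_train.T @ y_train)
    return X_test @ w

rng = np.random.default_rng(0)
T = 2000
acoustic = rng.standard_normal((T, 2))   # hypothetical acoustic features (e.g., envelope)
syntax = rng.standard_normal((T, 1))     # hypothetical syntactic predictor
neural = (0.5 * acoustic[:, 0] + 0.8 * syntax[:, 0]
          + 0.3 * rng.standard_normal(T))  # synthetic neural channel

lags = range(0, 8)          # forward lags: stimulus predicts later neural response
split = T // 2              # train/test split
X_ac = lagged_design(acoustic, lags)
X_full = lagged_design(np.hstack([acoustic, syntax]), lags)

r = {}
for name, X in [("acoustic", X_ac), ("acoustic+syntax", X_full)]:
    pred = ridge_fit_predict(X[:split], neural[:split], X[split:])
    r[name] = np.corrcoef(pred, neural[split:])[0, 1]

improvement = r["acoustic+syntax"] - r["acoustic"]
print(f"acoustic r={r['acoustic']:.3f}, full r={r['acoustic+syntax']:.3f}, "
      f"syntax-related improvement={improvement:.3f}")
```

Because the synthetic neural signal here genuinely depends on the syntactic predictor, the full model yields higher held-out prediction accuracy; in the study, this per-condition improvement is the quantity compared across audiovisual vs. auditory-alone and AM- vs. FM-degraded speech.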

Topic Area: PERCEPTION & ACTION: Multisensory


March 7 – 10, 2026