Neural encoding of visually and acoustically derived speech features during multimodal narrative listening

Poster Session B - Sunday, March 8, 2026, 8:00 – 10:00 am PDT, Fairview/Kitsilano Ballroom

Jacqueline von Seth1, Máté Aller1, Matthew H. Davis1; 1University of Cambridge

The presence of visual cues (e.g. lip movements) can enhance speech perception and the neural tracking of acoustic features, such as the amplitude envelope. It is unclear, however, how this neural enhancement relates to behavioural measures of audiovisual benefit. Here, we investigated the neural encoding of acoustically and visually derived speech features, capturing amplitude, temporal, and spectral modulations in the speech signal during narrative listening. We collected concurrent MEG and eye-tracking data (n=35) alongside objective (3AFC comprehension) and subjective (intelligibility ratings) measures of story comprehension while participants listened to a speaker narrating a story (Varano et al., 2023) in auditory-only (AO), visual-only (VO), and audiovisual (AV) conditions. To assess changes in speech tracking due to changes in modality, independent of changes in intelligibility, AO and AV conditions were matched in relative intelligibility using noise-vocoding. We also collected offline behavioural measures of lipreading ability and audiovisual benefit, speech reception thresholds (SRTs), and verbal IQ. Preliminary analyses found evidence for audiovisual benefit for visually derived and spectral features, most prominently over occipital sensors, in line with a perceptual restoration of spectral dynamics via visual speech cues (Plass et al., 2020). Replicating previous work (Aller et al., 2022; Bröhl et al., 2022), we also found that tracking of speech features in temporal, but not occipital, sensors during silent lipreading was associated with lipreading ability and audiovisual benefit. Ongoing work will extend these results by disentangling the relative contribution of modality-specific information through comparisons of neural encoding models that include both auditory and visual speech features.
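For readers unfamiliar with stimulus-to-brain encoding models of this kind, the sketch below illustrates the general idea: a lagged linear (temporal response function style) model mapping speech features to a sensor time course, estimated here with ridge regression. The abstract does not specify the estimation method, sampling rate, lag range, or feature set; all of those choices below are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of a lagged linear ("TRF"-style) encoding model.
# Assumes ridge regression over time-lagged stimulus features;
# feature names, sampling rate, and lag range are hypothetical.
import numpy as np

def lagged_design(features, lags):
    """Stack time-lagged copies of each feature column (times x features)."""
    n_times, n_feats = features.shape
    X = np.zeros((n_times, n_feats * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(features, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0   # zero out samples wrapped in from the end
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * n_feats:(i + 1) * n_feats] = shifted
    return X

def fit_encoding_model(features, sensor, lags, alpha=1.0):
    """Ridge solution mapping lagged speech features to one sensor time course."""
    X = lagged_design(features, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ sensor)

# Toy usage: three hypothetical speech features (e.g. envelope, spectral,
# visual) sampled at 100 Hz, predicting one simulated sensor time course.
rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 3))
sensor = rng.standard_normal(1000)
weights = fit_encoding_model(feats, sensor, lags=range(0, 30))  # 0-290 ms lags
print(weights.shape)  # (n_features * n_lags,)
```

Comparing the predictive accuracy of models fit with auditory features only, visual features only, or both is one common way to assess the relative contribution of modality-specific information, as described in the abstract's closing sentence.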

Topic Area: PERCEPTION & ACTION: Multisensory

