
Poster D111

The neural processing of continuous audiovisual speech in noise in autism: a TRF approach

Poster Session D - Monday, April 15, 2024, 8:00 – 10:00 am EDT, Sheraton Hall ABC

Theo Vanneau1 (theo.vanneau@gmail.com), Mick Crosse4,5, John Foxe1,2,3, Sophie Molholm1,2,3; 1Department of Pediatrics, The Cognitive Neurophysiology Laboratory, Albert Einstein College of Medicine, Bronx, NY, USA, 2Department of Neuroscience, Rose F. Kennedy Center, Albert Einstein College of Medicine, Bronx, NY, USA, 3Department of Neuroscience, The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA, 4SEGOTIA, Galway, Ireland, 5Trinity Centre for Biomedical Engineering, Trinity College Dublin, Dublin, Ireland

Individuals with autism spectrum disorder (ASD) present with core deficits in social interaction and communication. The goal of our current study is to decipher the neural mechanisms underlying impaired audiovisual speech and language processing in autism. Using the temporal response function (TRF), we analyzed the neural encoding of continuous speech at different levels of the speech processing hierarchy, from acoustic to phonetic, to understand the stages at which information processing breaks down in individuals with ASD. We recorded high-density electrophysiology from high-functioning children (8–12 years) with ASD and an IQ-, sex-, and age-matched group of typically developing (TD) children. Videos of an actor reciting a children’s book were generated, and auditory, visual, and audiovisual versions with different levels of noise were presented in randomly interspersed blocks of ~30 seconds while EEG and eye-tracking data were recorded; participants responded to target words. Preliminary results suggest that neural encoding of the speech stimulus is reduced in ASD at both acoustic and phonemic levels. Furthermore, in both TD and ASD children, TRF-based encoding of the acoustic envelope is stronger in the audiovisual condition, suggesting that both groups benefit from the addition of visual speech cues. However, this gain appears to be reduced in ASD children, suggesting impaired neural mechanisms of multisensory integration for speech processing in autism. Additional analyses will focus on the role of top-down processes in impaired neural encoding of multisensory speech, and on the relative contribution of altered processing stages in autism.
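For readers unfamiliar with the method: the abstract does not specify the implementation, but a forward TRF is typically estimated by regularized (ridge) regression of the EEG onto a time-lagged version of a stimulus feature such as the acoustic envelope. The sketch below is a minimal, illustrative NumPy version of that idea only; the function names, lag window, and regularization value are assumptions for demonstration, not details of the study, which would more plausibly use a dedicated package such as the mTRF-Toolbox.

```python
import numpy as np

def lagged_design_matrix(stim, fs, tmin, tmax):
    """Build a time-lagged design matrix from a 1-D stimulus feature
    (e.g., the acoustic envelope), with lags from tmin to tmax seconds."""
    lags = np.arange(int(np.floor(tmin * fs)), int(np.ceil(tmax * fs)) + 1)
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for i, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, i] = stim[:n - lag]
        else:
            X[:n + lag, i] = stim[-lag:]
    return X, lags / fs

def fit_trf(stim, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e3):
    """Estimate a forward TRF mapping the stimulus feature to each EEG
    channel via ridge regression: w = (X'X + lam*I)^-1 X'y."""
    X, lag_times = lagged_design_matrix(stim, fs, tmin, tmax)
    XtX = X.T @ X
    w = np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ eeg)
    return w, lag_times  # w has shape (n_lags, n_channels)

# Synthetic placeholder data (stand-ins for a real envelope and EEG recording)
fs = 128                              # sampling rate in Hz (assumed)
envelope = np.random.rand(fs * 30)    # ~30 s stimulus envelope
eeg = np.random.randn(fs * 30, 64)    # 64-channel EEG of the same length
trf, lag_times = fit_trf(envelope, eeg, fs)
print(trf.shape)                      # (n_lags, 64)
```

In practice, the regularization parameter is chosen by cross-validation, and encoding strength is usually quantified as the correlation between EEG predicted from held-out stimuli and the recorded EEG; comparing that measure across auditory-only and audiovisual conditions is one common way to index the visual-speech gain described above.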

Topic Area: PERCEPTION & ACTION: Multisensory

 
