Poster E60, Monday, March 27, 2:30 – 4:30 pm, Pacific Concourse
Neural mechanisms of speech versus non-speech detection in children with autism spectrum disorders
Alena Galilee1, Chrysi Stefanidou2, Joseph P. McCleery3; 1Dalhousie University, Nova Scotia, B3H 4R2, Canada, 2University of Birmingham, Birmingham, West Midlands, B15 2TT, United Kingdom, 3Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania, 19104, USA
In the current study, we utilized a Rapid Auditory Mismatch (RAMM) paradigm to investigate event-related potential (ERP) responses associated with the detection and discrimination of speech and non-speech sounds in children with autism spectrum disorders (ASD). Specifically, we compared a group of 4- to 6-year-old high-functioning children with ASD with typically developing (TD) children matched on gender, chronological age, and verbal abilities. ERPs were recorded while children passively listened to pairs of stimuli: both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited temporal cortex N330 match/mismatch responses reflecting speech versus non-speech detection bilaterally, whereas children with ASD exhibited this effect only in the left hemisphere. Furthermore, while the control group exhibited match/mismatch effects at approximately 600 ms (temporal P600, central N600) when a speech sound followed a non-speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. In addition, the ASD participants failed to detect the change from non-speech to speech at a late cognitive stage of evaluation, when a speech stimulus followed a non-speech sound. Together, these findings are consistent with the hypothesis that children with ASD rely more distinctly on physical stimulus properties than on social or emotional cues when distinguishing speech sounds from non-speech sounds.
Topic Area: LANGUAGE: Development & aging