

Multisensory Development Across the Neurotypical and Neurodivergent Lifespan: The Birth of a Research Consortium

Symposium Session 3: Sunday, April 14, 2024, 1:30 – 3:30 pm EDT, Ballroom West

Chairs: Mark Wallace1, Micah Murray2,3; 1Vanderbilt University, 2Lausanne University Hospital, 3University of Lausanne
Presenters: Mark Wallace, David Tovar, Micah Murray, Monica Gori, David Lewkowicz

One of the most challenging tasks for the developing brain is coping with the almost continually changing nature of the sensory information it must process. In addition to the widening experiential repertoire of the child, the sensory organs and the body itself change rapidly as the child matures. Layered on top of this is the fact that the brain must combine (as well as segregate) information coming from the different modalities in order to form a coherent perceptual gestalt. Although we have a number of important snapshots of the function and organization of sensory and multisensory systems at various ages, we still lack a more comprehensive understanding of the longitudinal progression of events leading up to the mature system. To address this knowledge void, the symposium highlights the work of a newly formed consortium structured to detail how multisensory processes and their neural correlates change across the lifespan, how these developmental processes differ between neurodivergent and neurotypical children, and how this maturation relates to the development of higher-order cognitive capacities. A special element of the consortium is the creation of immersive environments at each of the sites to examine multisensory development in more naturalistic settings.


Multisensory Temporal Development in Autism: Contributions to the Broader Phenotype

Mark Wallace1, David Tovar1; 1Vanderbilt University

Previous work has shown that the development of multisensory temporal acuity is surprisingly protracted, although its exact longitudinal progression has yet to be fully elucidated. How such an extended maturational timecourse relates to higher-order cognitive abilities also remains to be determined. This knowledge gap is surprising, given that aspects of early sensory and multisensory development are highly likely to scaffold the construction and maintenance of various cognitive domains. Some insight into this question has come from our prior work in autistic children ages 8-12, which has shown poorer audiovisual temporal acuity when compared with neurotypical peers. Furthermore, these alterations in audiovisual temporal acuity map onto weaknesses in the child’s ability to integrate or “bind” elements of audiovisual speech. Follow-up work has focused on perceptual plasticity-based training approaches in autistic children aimed at improving audiovisual temporal acuity, and on cascading effects onto distal measures of social communication. Such results in neurodiverse children emphasize the need to better characterize multisensory temporal acuity from birth until adulthood, and to examine links between this facet of sensory development and the maturation of cognitive domains.

Low-level multisensory processes: from the impact of early life experience to the prediction of higher-order cognition

Micah Murray1,2; 1Lausanne University Hospital, 2University of Lausanne

Multisensory processes subserve the combination and segregation of sensory information, oftentimes improving stimulus representations and behavior. Whether multisensory processes are an innate capacity or instead require experience with environmental stimuli remains debated. We addressed this knowledge gap by studying multisensory processes in preterm and full-term infants. Atypical early-life experiences, such as preterm birth, can have a dramatic impact on how sensory information is processed and integrated. We show this across multiple developmental timescales. Multisensory processes at hospital discharge differ sharply between full-term and preterm neonates. While full-term neonates exhibit linear auditory-somatosensory neural response interactions, responses from preterm neonates are nonlinear and characterized by topographic modulations. What’s more, the degree of topographic modulation in multisensory, but not unisensory, brain responses was predictive of sensory profiles at 12 months of age and internalizing tendencies at 24 months of age. We further showed that the effects of premature birth persist into late childhood. Preterm schoolchildren, when tested on a simple multisensory detection task, exhibited general slowing as well as larger variability in reaction times. Nonetheless, all children exhibited multisensory facilitation. However, while the facilitation observed in full-term children exceeded probability summation predictions and thus necessarily invoked neural response interactions prior to motor response initiation, this was not the case in preterm schoolchildren. Finally, we provide additional evidence that multisensory processes provide the scaffolding for global cognitive function in healthy schoolchildren. These collective results underscore the impact of early-life events on sensory and cognitive development and reinforce the call for targeted supportive interventions throughout childhood.

Multisensory integration and sensory interaction during development

Monica Gori1; 1Italian Institute of Technology

Multisensory integration supports our interaction with the environment by improving perceptual precision, accuracy, and reaction times. The mechanisms that underlie multisensory development are still unclear. In the past, we have highlighted the importance of sensory interaction in scaffolding multisensory integration. Results from our work show how the visual modality is critical in developing audio-tactile integration. Without vision, auditory space and body representations are altered, making the integration between audition and touch impossible or reduced in blind infants and children relative to sighted peers. In infancy (age 5-36 months), we observe good audio-tactile integration for simple spatial localization in sighted participants and reduced multisensory integration in blind infants. Alterations in blind infants are also evident at the tactile level when considering body representation in space. EEG data in blind infants show cortical processing for tactile localization that differs from that of sighted infants when the hands are placed in canonical or non-canonical positions. In childhood (age 5-14 years), we observed in sighted participants a late development of audio-tactile integration for ventriloquist tasks, the rubber hand illusion, and the temporal binding window. No multisensory integration was evident in blind peers. These results suggest that sensory interaction is one of the building blocks of multisensory development. The strict connection between sensory interaction and multisensory development offers a model to disentangle the basic principles that underlie our ability to interact in a multisensory environment, and allows us to develop science-driven technology to improve the quality of life of sensory-impaired individuals.

The Multisensory Cocktail Party Problem (MCPP): Perceptual Segregation and Integration of Multisensory Inputs Develops Gradually

David Lewkowicz1; 1Yale University

Social events usually consist of multiple people talking (e.g., a party). Successful communication between any two individuals at such an event requires them to solve the MCPP: they must perceptually segregate the unique auditory and visual attributes of each social partner and then integrate the corresponding pairs of auditory and visual attributes into unitary multisensory entities. This is a challenging task for infants and children, whose ability to process and integrate multisensory information develops gradually. To investigate the developmental emergence of the ability to solve the MCPP, we have conducted studies in which participants can see multiple talking faces articulating temporally jittered monologues while they hear an audible monologue that is either only temporally synchronized with one of the talking faces or also correlated in terms of identity and/or semantic cues. Using an eye tracker, we measure selective attention to each of the faces, and to the eyes and mouth of each face, to study perceptual segregation and integration. Findings indicate that starting at 3 years of age children begin preferring the audio-visually synchronized talking face, that this preference increases substantially with age, that the preference is driven by lipreading, and that audio-visual synchrony plays an outsize role compared to identity and semantic cues. These findings demonstrate that the challenges of the MCPP become more tractable with development and suggest that the observed improvement in perceptual segregation and integration of multisensory clutter likely contributes to speech and language acquisition and a general improvement in children’s communication skills.






