Poster A49, Saturday, March 25, 5:00 – 7:00 pm, Pacific Concourse
Neural decomposition of synergistic and redundant information in interaction between audiovisual speech rhythms and brain oscillations
Hyojin Park1, Robin A. A. Ince1, Gregor Thut1, Joachim Gross1; 1Institute of Neuroscience and Psychology, University of Glasgow
In audiovisual speech processing, auditory and visual information interact and are integrated to yield a unified percept of speech. Previously, we have shown that low-frequency brain oscillations separately track auditory and visual speech signals to facilitate speech comprehension. However, it remains unclear to what extent auditory and visual information is represented in brain areas, either individually or jointly. Here, we applied a recently developed tool from information theory to decompose the multivariate mutual information between auditory, visual and brain signals. This method allows quantification of the unique information the brain signal carries about each modality (auditory, visual). Furthermore, we can now address the question of whether activity in a given brain area carries a synergistic or a redundant representation of both sensory signals. We used the low-frequency theta phase of auditory and visual speech signals and brain signals at each voxel, measured by MEG. In an adverse audiovisual speech condition, in which attention to visual speech is critical for speech comprehension, we found redundant information in auditory/temporal regions including the posterior superior temporal gyrus, and synergistic information in left motor and inferior temporal cortex. Importantly, this decomposition predicts speech comprehension. With these novel information-theoretic tools, we show for the first time evidence for a neural decomposition of the information in entrained audiovisual speech rhythms interacting with brain oscillations to facilitate speech comprehension. Our findings demonstrate how the brain processes audiovisual inputs efficiently, exploiting the information common to both modalities as well as gaining additional information from their combination, enabling the remarkable communicative abilities of humans.
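The synergy/redundancy distinction above can be illustrated with a minimal co-information sketch. This is a simplified stand-in for the full decomposition used in the study, not the authors' actual pipeline: it assumes jointly Gaussian variables and uses the sign convention that I(brain; aud, vis) − I(brain; aud) − I(brain; vis) is positive when the two modalities contribute net synergistic information and negative when they are net redundant. All variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def gaussian_mi(x, y):
    # Mutual information (bits) under a joint-Gaussian assumption,
    # estimated from sample covariance determinants:
    # I(X;Y) = H(X) + H(Y) - H(X,Y).
    x = x.reshape(len(x), -1)
    y = y.reshape(len(y), -1)
    def ent(z):
        c = np.atleast_2d(np.cov(z, rowvar=False))
        d = c.shape[0]
        return 0.5 * np.log2((2 * np.pi * np.e) ** d * np.linalg.det(c))
    return ent(x) + ent(y) - ent(np.hstack([x, y]))

def coinfo(brain, aud, vis):
    # Co-information: > 0 -> net synergy, < 0 -> net redundancy
    # (for this sign convention).
    joint = np.hstack([aud.reshape(len(aud), -1),
                       vis.reshape(len(vis), -1)])
    return (gaussian_mi(brain, joint)
            - gaussian_mi(brain, aud)
            - gaussian_mi(brain, vis))

rng = np.random.default_rng(0)
n = 20000

# Redundant case: auditory and visual signals share one common source,
# and the brain signal tracks that same source.
src = rng.standard_normal(n)
aud = src + 0.1 * rng.standard_normal(n)
vis = src + 0.1 * rng.standard_normal(n)
brain = src + 0.5 * rng.standard_normal(n)
red = coinfo(brain, aud, vis)   # negative: redundant representation

# Synergistic case: the brain signal reflects the combination of two
# independent modality-specific sources, so knowing both together is
# more informative than the sum of knowing each alone.
aud2 = rng.standard_normal(n)
vis2 = rng.standard_normal(n)
brain2 = aud2 + vis2 + 0.5 * rng.standard_normal(n)
syn = coinfo(brain2, aud2, vis2)  # positive: synergistic representation

print(f"redundant case co-information:  {red:.3f} bits")
print(f"synergistic case co-information: {syn:.3f} bits")
```

Note that co-information conflates synergy and redundancy into a single signed net quantity; the partial information decomposition referred to in the abstract separates the two into distinct non-negative terms, which is why it can localize redundant and synergistic regions independently.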
Topic Area: LANGUAGE: Other