Poster F118, Tuesday, March 28, 8:00 – 10:00 am, Pacific Concourse
Mechanisms for Bayesian inference maturation in a biologically inspired neurocomputational model
Mauro Ursino1, Cristiano Cuppini1, Elisa Magosso1; 1University of Bologna, Italy
Several experimental studies suggest that, in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and for its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this work is to investigate this problem with a neurocomputational model of audio-visual integration. The model assumes the presence of two unimodal areas (auditory and visual, respectively) with a topological organization. Each area receives an input of its own modality from the external environment through feedforward synapses (implementing the neurons' receptive fields, RFs) and is connected with neurons in the other area via cross-modal recurrent synapses (responsible for multisensory integration). To mimic past experience, the synapses were trained with a rule combining Hebbian potentiation and a decay term. A third chain of multisensory neurons performs a simple sum of the auditory and visual excitations to provide a single causal inference. Simulations show that, after training, the network realizes a maximum-likelihood estimate of auditory (or visual) positions in unimodal conditions, and a near-optimal Bayesian estimate in cross-modal conditions. In particular, prior information on the co-occurrence of stimuli is encoded in the cross-modal synapses. Furthermore, the model explains the ventriloquism illusion and, from the activity in the multimodal layer, accomplishes causal inference according to the reliability of the individual cues. The network can represent an important tool for analyzing how Bayesian inference can be learned from experience, and for pointing out some of the underlying neural mechanisms.
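For reference, the normative benchmark the trained network is said to approximate can be written out explicitly. The following is a minimal sketch (not the authors' model code), assuming Gaussian likelihoods for the auditory and visual position cues: the optimal Bayesian estimate is an inverse-variance weighted average, so a reliable visual cue pulls the fused estimate toward itself, as in the ventriloquism illusion.

```python
import numpy as np

def bayesian_fusion(x_a, sigma_a, x_v, sigma_v):
    """Reliability-weighted fusion of an auditory cue (x_a, sigma_a) and a
    visual cue (x_v, sigma_v), both modeled as Gaussian likelihoods.
    Returns the optimal fused position estimate and its standard deviation.
    Illustrative only: function name and parameters are not from the poster."""
    w_a = 1.0 / sigma_a**2                    # auditory reliability
    w_v = 1.0 / sigma_v**2                    # visual reliability
    x_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    sigma_hat = np.sqrt(1.0 / (w_a + w_v))    # fused cue is more precise than either alone
    return x_hat, sigma_hat

# Ventriloquism-like case: a precise visual cue (sigma = 2 deg) captures a
# noisy auditory cue (sigma = 8 deg) located 10 deg away.
x_hat, sigma_hat = bayesian_fusion(x_a=10.0, sigma_a=8.0, x_v=0.0, sigma_v=2.0)
# x_hat lands close to the visual position (~0.59 deg), and sigma_hat < 2 deg
```

In the model described above, this weighting is not computed explicitly; it emerges from the trained cross-modal synapses and the summation in the multisensory layer.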
Topic Area: PERCEPTION & ACTION: Multisensory