
Poster F157

Layer-dependent feedback in a grasping neural network increases robustness to noise

Poster Session F - Tuesday, April 16, 2024, 8:00 – 10:00 am EDT, Sheraton Hall ABC

Romesa Khan1, Hongsheng Zhong1, Jack Cai1, Matthias Niemeier1,2,3; 1University of Toronto, 2Centre for Vision Research, York University, 3Vision: Science to Applications, York University

Top-down predictions from generative models in the brain are conveyed through cortical layer-specific feedback connections during visual perceptual tasks. However, little is understood about the contribution of feedback when visual input is used for action planning, such as object grasping. Recent evidence shows that advanced object shape and movement representations during grasping also involve the reactivation of earlier visual areas, indicating that feedback connections carry information from downstream stages of visuomotor processing back to earlier stages in the visual stream. We investigated the contribution of such neural feedback to the visuomotor control of grasping by using convolutional neural networks, trained to compute grasp positions for real-world objects, as a modelling framework. To make these models computationally and structurally more similar to the human cortex, we added generative feedback loops to a custom feedforward backbone, carrying advanced representations to early layers of the network. When evaluated on images with additive Gaussian noise, after multiple forward and backward passes through the network, we observed an improvement in performance for the network with predictive coding dynamics compared with the feedforward baseline. We also found that this performance-enhancing effect under adverse conditions (1) decreases with increasing distance between the feedback source and target layers, and (2) relies on a balance between the relative contributions of local recurrence and top-down feedback. To conclude, our simulations show that introducing biologically plausible predictive coding dynamics improves model robustness to noisy visual stimuli in a neural network model optimized for grasp prediction.

Topic Area: PERCEPTION & ACTION: Vision
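The settling dynamics described in the abstract can be sketched in miniature. The toy loop below is a minimal NumPy illustration of a predictive-coding update in which an early layer's activity is repeatedly revised by a mix of feedforward drive from the (noisy) input, local recurrence, and a top-down generative reconstruction from a higher layer. The layer sizes, weights, and mixing coefficients here are illustrative assumptions; this is not the authors' trained grasping network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a toy two-layer network (illustrative only).
D_IN, D_HID = 16, 8

W_ff = rng.normal(scale=0.1, size=(D_HID, D_IN))  # feedforward weights
W_fb = rng.normal(scale=0.1, size=(D_IN, D_HID))  # generative feedback weights

def pc_inference(x, n_steps=20, beta_ff=0.4, beta_rec=0.3, beta_fb=0.3):
    """Predictive-coding settling loop (sketch).

    On each step, early-layer activity r0 is updated as a convex mix of
    the input x (feedforward drive), its own previous state (local
    recurrence), and the top-down reconstruction x_hat fed back from the
    higher layer. The three beta coefficients sum to 1 and set the
    balance between recurrence and feedback that the abstract identifies
    as critical.
    """
    r0 = x.copy()
    r1 = np.tanh(W_ff @ r0)
    for _ in range(n_steps):
        r1 = np.tanh(W_ff @ r0)   # forward pass to the higher layer
        x_hat = W_fb @ r1         # generative feedback prediction
        r0 = beta_ff * x + beta_rec * r0 + beta_fb * x_hat
    return r0, r1

# Example: settle on a noisy input, as in the Gaussian-noise evaluation.
x_noisy = rng.normal(size=D_IN)
r0, r1 = pc_inference(x_noisy)
```

Varying `beta_rec` against `beta_fb` in such a loop is one simple way to probe the recurrence/feedback balance the abstract reports; routing `W_fb` from deeper layers would probe the source-target distance effect.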

 


April 13–16  |  2024