
Poster F150

Comparative analysis of optimization trends in the dorsal and ventral streams using computational models

Poster Session F - Tuesday, April 16, 2024, 8:00 – 10:00 am EDT, Sheraton Hall ABC

Tahsin Reza1, Steven Luo1, Gursimar Singh1, Jessica Tang1, Rohan Jain1, Matthias Niemeier1,2; 1University of Toronto, 2Centre for Vision Research, York University

The primate visual system divides visual information processing into two distinct neural streams: the dorsal stream, responsible for action, and the ventral stream, responsible for perception. The dorsal stream translates visual characteristics into motor commands and is typically modelled as a regression task in artificial neural networks (ANNs), for instance in grasping. Conversely, the ventral stream converts retinal images into abstract representations for object recognition, typically modelled as a classification task in ANNs. One hypothesis holds that the differences between these two streams are rooted in variations in their optimization processes. Neural networks trained for object classification or for robotic grasp movements exhibit different response properties, but these differences are often attributed to variations in network architecture and training. To isolate the impact of task-specific training differences, we devised a novel map-based ANN and a task-agnostic double-log loss function suitable for both classification and visual grasp analysis. Our method penalizes boundary predictions in both tasks, enabling a direct comparison between classification and grasping networks that follow identical optimization rules. For a qualitative understanding of the distinctions between the two pathways, we applied Guided Backpropagation and found that the classification network emphasizes local information about object parts and surface features, while the grasp network focuses on global features. The emergence of properties resembling the dorsal and ventral streams suggests that our approach provides an equitable, task-agnostic method for comparing optimization trends across action-based and perception-based learning agents, and contributes to the quantitative modelling of the visual cortex.
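The abstract does not give the exact form of the double-log loss, so the following is only a hedged sketch of one plausible reading: a loss with two log terms that diverges as predictions approach either boundary (0 or 1), which reduces to cross-entropy for one-hot classification targets and can also score continuous grasp-map targets under the same rule. All names and values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of a "double-log" loss: two log penalties, one per boundary.
# With one-hot targets this is ordinary cross-entropy (classification);
# with soft targets in [0, 1] it can score map-style regression as well.
def double_log_loss(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1.0 - eps)  # keep logs finite at the boundaries
    return -np.mean(target * np.log(pred)
                    + (1.0 - target) * np.log(1.0 - pred))

# Classification-style (one-hot) vs. grasp-map-style (soft) targets,
# scored by the same task-agnostic rule.
onehot = np.array([0.0, 1.0, 0.0])
soft = np.array([0.2, 0.7, 0.1])
pred = np.array([0.1, 0.8, 0.1])

loss_cls = double_log_loss(pred, onehot)
loss_map = double_log_loss(pred, soft)
```

Note the boundary-penalizing behaviour: against an interior target, a prediction pushed toward 0 or 1 incurs a much larger loss than an interior prediction, which is consistent with the abstract's description of penalizing boundary predictions in both tasks.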
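Guided Backpropagation, the attribution method named in the abstract, modifies the backward pass of a ReLU network so that gradient flows only where both the forward activation and the incoming gradient are positive, yielding a saliency map over the input. The sketch below illustrates the rule on a tiny two-layer network with random weights; the layer sizes and weights are stand-ins, not the authors' trained models.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))   # hypothetical input -> hidden weights
W2 = rng.standard_normal((16, 4))   # hypothetical hidden -> output weights

def forward(x):
    z1 = x @ W1
    a1 = np.maximum(z1, 0.0)        # ReLU
    return z1, a1, a1 @ W2

def guided_backprop(x, unit):
    z1, a1, out = forward(x)
    # Gradient of the chosen output unit w.r.t. the ReLU outputs
    grad_a1 = W2[:, unit]
    # Guided ReLU rule: pass gradient only where the forward activation
    # was positive AND the incoming gradient is positive
    grad_z1 = grad_a1 * (z1 > 0) * (grad_a1 > 0)
    # Propagate to the input to obtain the saliency map
    return grad_z1 @ W1.T

x = rng.standard_normal(8)
saliency = guided_backprop(x, unit=0)
```

The double masking (activation > 0 and gradient > 0) is what distinguishes Guided Backpropagation from plain backpropagation, which masks on the forward activation alone.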

Topic Area: PERCEPTION & ACTION: Vision


April 13–16  |  2024