Mapping neural similarity spaces for scenes with generative adversarial networks
Gaeun Son1, Dirk B. Walther1, Michael L. Mack1; 1University of Toronto
Recent progress in vision science has focused on characterizing how the perceptual similarity of visual stimuli is reflected in the similarity of neural representations. While such neural similarity spaces are well established in simple feature domains (e.g., orientation columns in V1), a corresponding finding with complex real-world stimuli has yet to be demonstrated. We explored this question using scene wheels (Son et al., 2021), a GAN-generated continuous scene stimulus space in which global scene properties change gradually along a circular continuum. Participants viewed scene wheel images during fMRI scanning in a continuous carry-over design, which provides stable estimates of scene-specific neural patterns. After scanning, participants rated the pairwise perceptual similarity of the same scene wheel images. We performed representational similarity analysis, comparing the similarity of scene-specific voxel patterns in multiple high-level visual regions with measures of physical similarity (angular distance along the scene wheel; pixel correlation), perceptual similarity (the pairwise ratings), and semantic similarity (scene category). For scene wheels constrained to a single scene category (e.g., dining rooms), neural patterns in visual cortex mainly represented the physical similarity of the scenes. However, when the scene wheels contained notable category boundaries (e.g., dining rooms and living rooms), both perceptual and categorical similarity structures were present in neural pattern similarity. These results provide important evidence that similarity structures defined by the complex feature spaces of real-world scenes are coded in neural representations, and that these representations flexibly code for physical, perceptual, and categorical information.
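The representational similarity analysis described above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis pipeline: the number of wheel positions, voxel count, and simulated patterns below are all hypothetical stand-ins, and only the physical-similarity model (angular distance along the circular wheel) is shown.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical setup: 36 scene-wheel images spaced every 10 degrees.
n_scenes = 36
angles = np.deg2rad(np.arange(0, 360, 10))

# Model RDM for physical similarity: angular distance along the wheel,
# taking the shortest arc between two positions on the circle.
diff = np.abs(angles[:, None] - angles[None, :])
model_rdm = np.minimum(diff, 2 * np.pi - diff)

# Simulated scene-specific voxel patterns (100 voxels) that vary
# smoothly with wheel angle plus noise -- stand-ins for the
# carry-over-design fMRI estimates.
basis = np.stack([np.cos(angles), np.sin(angles)], axis=1)
patterns = basis @ rng.standard_normal((2, 100)) \
    + 0.1 * rng.standard_normal((n_scenes, 100))

# Neural RDM: 1 - Pearson correlation between voxel patterns.
neural_rdm = 1 - np.corrcoef(patterns)

# RSA: rank-correlate the upper triangles of the two RDMs.
iu = np.triu_indices(n_scenes, k=1)
rho, p = spearmanr(model_rdm[iu], neural_rdm[iu])
print(f"model-neural RSA: rho = {rho:.2f}")
```

The perceptual and categorical models enter the same way: each one is just another RDM (pairwise rating dissimilarities, or a binary same/different-category matrix) rank-correlated against the neural RDM.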
Topic Area: PERCEPTION & ACTION: Vision
April 13–16 | 2024