Poster E116, Monday, March 27, 2:30 – 4:30 pm, Pacific Concourse
Integration of visual and motor object features in human cortex
Ariana M. Familiar1, Heath Matheson1, Sharon L. Thompson-Schill1; 1University of Pennsylvania
To recognize objects, we must remember the shared features of thousands of objects, as well as each object’s unique combination of features. While theories differ on exactly how the brain accomplishes this, many agree that featural information is integrated in at least one cortical region, or “convergence zone”, which acts as a semantic representation area linking object features across different information types. Moreover, it has been posited that the anterior temporal lobe (ATL) acts as a “hub” that associates object features across all modalities, as it is reciprocally connected to modality-specific cortical regions, and patients with damage to this area show deficits in remembering object information across modalities (Patterson et al., 2007). Our lab recently found evidence that the left ATL encodes integrated shape and color information for objects uniquely defined by these features (fruits/vegetables; Coutanche & Thompson-Schill, 2014), suggesting that the ATL acts as a convergence zone for these visual object features. However, whether the ATL encodes object information from different modalities had not been established. We used fMRI and multi-voxel pattern analysis (MVPA) to examine whether the ATL acts as an area of convergence for object features across sensory-motor modalities. Using a whole-brain searchlight analysis, we found that activity patterns in a region of the left ATL during a memory retrieval task could successfully classify objects defined by different combinations of visual and motor features, but could not classify either constituent feature on its own. These results suggest that the left ATL encodes integrated visual and motor object features, which together correspond to an object’s identity.
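The classification logic behind MVPA can be illustrated with a minimal sketch: train a linear classifier on multi-voxel activity patterns and test whether it decodes a condition above chance under cross-validation. This is not the authors' pipeline; the data here are simulated, and the voxel count, trial count, and use of scikit-learn are illustrative assumptions.

```python
# Illustrative MVPA-style decoding on simulated data (not the authors' pipeline).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50  # hypothetical searchlight sphere of ~50 voxels

# Simulate two object conditions whose mean voxel patterns differ.
pattern_a = rng.normal(0.0, 1.0, n_voxels)
pattern_b = rng.normal(1.0, 1.0, n_voxels)
X = np.vstack([pattern_a + rng.normal(0, 1, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 1, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated accuracy above chance (0.5) indicates the patterns
# carry information that distinguishes the two conditions.
scores = cross_val_score(LinearSVC(), X, y, cv=5)
print(scores.mean())
```

In a searchlight analysis, this decoding step is repeated for a small sphere of voxels centered on each voxel in the brain, yielding an accuracy map; the abstract's result corresponds to above-chance decoding of feature conjunctions, but not of the constituent features, in left ATL spheres.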
Topic Area: PERCEPTION & ACTION: Multisensory