The neural basis of verb and noun semantic representations in congenitally blind individuals
Giulia V. Elli1, Rashi Pant1, Rebecca Achtman2, Marina Bedny1; 1Johns Hopkins University, 2DePauw University
How are the meanings of words influenced by sensory experience? We used multi-voxel pattern analysis (MVPA) to compare the neural basis of lexical-semantic representations in congenitally blind (N=15) and sighted individuals (N=13). Specifically, we asked how noun- and verb-responsive cortical regions encode semantic distinctions among words within a grammatical category. Participants judged the similarity of pairs of nouns (birds, mammals, man-made places, natural places) and verbs (light emission, sound emission, hand action, mouth action). In each group, we identified regions in the left hemisphere that respond preferentially to nouns – inferior parietal lobule (IP), precuneus (PC), and inferior temporal cortex (IT) – and to verbs – middle temporal gyrus (MTG). A linear support vector machine (SVM) classifier was trained to discriminate among verbs and among nouns on half of the data, and then tested on the other half (e.g., even/odd runs). In PC, IP, and MTG, classification was successful among verbs and among nouns in both groups (all p's < 0.05). Furthermore, blind and sighted individuals showed similar grammatical class effects: better classification for verbs in MTG and for nouns in IP and PC (WordClass x ROI: F(2,52)=15.38, p<0.001; Group x WordClass x ROI: F(2,52)=1.12, p=0.34). However, decoding among nouns in IT was successful only in sighted participants (Group main effect: F(1,26)=8.19, p=0.008). These results suggest that the lexical-semantic network is largely unchanged in blindness. However, inferior temporal areas that preferentially process concrete object nouns in sighted individuals appear to be less relevant for such processing in individuals who are born blind.
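The split-half decoding analysis described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' pipeline: the study used a linear SVM on fMRI voxel patterns, whereas here a simple nearest-centroid classifier stands in so the example runs with the Python standard library alone, and all "voxel patterns" are synthetic.

```python
# Illustrative sketch of split-half decoding (train on one half of the runs,
# test on the other, then swap). NOTE: the study used a linear SVM; a
# nearest-centroid classifier is substituted here, and all data are synthetic.
import random

random.seed(0)

CATEGORIES = ["hand action", "mouth action", "light emission", "sound emission"]
N_VOXELS = 50  # hypothetical ROI size

# Each category gets a distinct (random) mean activation pattern.
centers = {cat: [random.gauss(0, 1) for _ in range(N_VOXELS)]
           for cat in CATEGORIES}

def make_pattern(center, noise=0.5):
    """Synthetic 'voxel pattern': category center plus Gaussian noise."""
    return [c + random.gauss(0, noise) for c in center]

# Two independent data halves (e.g., even vs. odd runs), 10 trials/category.
halves = [
    [(cat, make_pattern(centers[cat])) for cat in CATEGORIES for _ in range(10)]
    for _ in range(2)
]

def centroid(patterns):
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def train(data):
    """'Train' by computing one mean pattern (centroid) per category."""
    return {cat: centroid([p for c, p in data if c == cat])
            for cat in CATEGORIES}

def classify(model, pattern):
    """Assign the category whose centroid is nearest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda cat: dist(model[cat], pattern))

# Split-half cross-validation: train on one half, test on the other, swap.
accuracies = []
for train_half, test_half in [(halves[0], halves[1]), (halves[1], halves[0])]:
    model = train(train_half)
    correct = sum(classify(model, p) == c for c, p in test_half)
    accuracies.append(correct / len(test_half))

mean_accuracy = sum(accuracies) / 2
print(f"split-half decoding accuracy: {mean_accuracy:.2f}")  # chance = 0.25
```

Above-chance accuracy (here, above 0.25 for four categories) is the criterion for "successful classification" in an ROI; in practice significance would be assessed against a permutation or group-level null rather than by eye.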
Topic Area: LANGUAGE: Semantic