Poster B131, Sunday, March 26, 8:00 – 10:00 am, Pacific Concourse
Category Learning Generates Categorical Perception: Behavioral, Neural and Computational Aspects
Fernanda Perez Gay Juarez1,2,3, Christian Thériault2,3, Madeline Gregory1,3, Daniel Rivas2,3, Hisham Sabri2,3, Stevan Harnad1,2; 1McGill University, 2Université du Québec à Montréal, 3Center for Research in Brain, Language and Music
It is known that categorical perception (CP), i.e., between-category separation and within-category compression, occurs innately for colors, phonemes, and facial expressions, affecting both perceived similarity and discriminability. It is now emerging that CP can also be induced by category learning. We trained human subjects through trial and error with corrective feedback to sort samples of multidimensional visual stimuli into two categories based on features that covaried with category membership. Event-related potentials (ERPs) were recorded during training. We tested two kinds of stimuli: black-and-white textures composed of distributed microfeatures, and fish images with local features. For both types of stimuli, pairwise similarity judgments before and after learning revealed between-category separation and within-category compression. These effects were absent in subjects who failed to learn. We also found ERP changes in an early occipital N1 component (150–220 ms) that correlated with the degree of perceived separation. Learning also affected frontal and parietal late positive components, which correlated with learning performance rather than with the CP effects. To model the observed CP effects, we trained "deep learning" nets to categorize our textures through auto-association followed by supervised learning with corrective feedback. Comparing the average within- and between-category distances in hidden-unit activation space before and after category learning revealed between-category separation and within-category compression, as in the experimental subjects. We hypothesize that CP arises through dimensionality reduction: a learned filter selects the features that covary with category membership and ignores the non-covariant ones, thereby changing the encoded distances between the inputs.
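The within/between-category distance comparison described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis code: it assumes hidden-unit activations are available as a matrix of row vectors (one per stimulus) with binary category labels, and it uses plain Euclidean distance.

```python
import numpy as np

def cp_indices(hidden, labels):
    """Mean pairwise Euclidean distances within and between two categories,
    computed in hidden-unit activation space.

    hidden : (n_stimuli, n_units) array of hidden-layer activations
    labels : (n_stimuli,) array of 0/1 category labels
    Returns (mean_within, mean_between); CP predicts that, after category
    learning, mean_within shrinks (compression) and mean_between grows
    (separation) relative to before learning.
    """
    hidden = np.asarray(hidden, dtype=float)
    labels = np.asarray(labels)
    a = hidden[labels == 0]
    b = hidden[labels == 1]

    def mean_between(x, y):
        # All cross-category pairwise distances, then their mean.
        d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
        return d.mean()

    def mean_within(x):
        # All within-category pairwise distances, excluding the
        # zero self-distances on the diagonal.
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        n = len(x)
        return d.sum() / (n * (n - 1))

    within = (mean_within(a) + mean_within(b)) / 2.0
    between = mean_between(a, b)
    return within, between
```

Running this on activations extracted before and after training, and comparing the two (within, between) pairs, gives the compression and separation measures the abstract refers to.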
Topic Area: PERCEPTION & ACTION: Vision