Synesthesia and Statistical Learning: Redundant Cues Improve Segmentation
Tess Allegra Forest1, Alessandra Lichtenfeld2, Bryan Alvarez2, Amy Finn1; 1University of Toronto, 2University of California, Berkeley
For people with synesthesia, sensory experience in one domain is yoked to sensory experience in a second, unrelated domain. For example, hearing a certain sound might trigger the perceptual experience of seeing a specific color. However, the impact of such consistent, multimodal experience on automatic learning mechanisms, like statistical learning, has yet to be determined. To address the question of whether synesthetes’ redundant multimodal cues provide them an advantage in using statistical information to segment words in continuous speech, we exposed two groups of synesthetes and a group of controls to a stream of nonsense speech and examined their ability to segment words from this stream. Results showed that grapheme-color synesthetes (for whom written or spoken graphemes trigger the experience of a particular color) showed increased segmentation ability compared to controls, while sound-color synesthetes (for whom waveform properties of speech that are not consistent with statistical boundaries also trigger a color experience) did not. This suggests that the improved segmentation in grapheme-color synesthetes was driven by a reliable, secondary cue to word boundaries provided by the consistent color and sound pairings they experienced during exposure. This work has implications for understanding the conditions under which statistical properties of the environment are successfully learned automatically. In particular, it suggests that further experiments designed to illuminate the nature of statistical learning in multimodal environments will show that providing consistent multimodal cues to non-synesthetes results in improved segmentation ability compared to inconsistent multimodal cues.
Topic Area: PERCEPTION & ACTION: Multisensory