Poster B78, Sunday, March 26, 8:00 – 10:00 am, Pacific Concourse
Neurocognitive effects of sentential constraint in visual word recognition
Nyssa Bulkes1, Darren Tanner1; 1University of Illinois at Urbana-Champaign
Prior work using event-related brain potentials (ERPs) shows neural sensitivity to visual anomalies in highly constraining environments (Kim & Lai, 2012). That work describes an architecture with both feed-forward and feedback connections that is sensitive to constraint information, using context to inform the visual system about which features (e.g., spelling, orthography) to expect in the upcoming input. Specifically, when embedded in highly constraining sentences, unexpected orthography modulates the N170 component, a component that has also been tied to domain-general feature-detection processes (e.g., face recognition). In two experiments, we show that unexpected pseudohomophones (e.g., “metir” in place of expected “meter”) and targets containing letter transpositions (e.g., “meetr”) modulate N170 amplitude. Additionally, these targets elicit a P600, indexing reanalysis and integration of these anomalies downstream, an effect that is shorter in duration for illegal strings (e.g., “czlxn”). However, it is unknown whether, in language processing, these responses are contingent upon strong top-down contextual cues, or whether visual anomalies of any kind elicit this early sensitivity. Data from our second study show that, when embedded in neutral contexts, visual anomalies do not elicit an N170 effect, but only a P600. These results have implications for neurocognitive models of visual word recognition, highlighting the N170 as an index of feature detection only in cases where comprehenders can form expectations for the visual system. Our data suggest that, when constraint is weak, the visual system does not generate form-based anticipatory expectations.
Topic Area: LANGUAGE: Semantic