Poster D36

The N400 is sensitive to story-level context in naturalistic language comprehension.

Poster Session D - Monday, April 15, 2024, 8:00 – 10:00 am EDT, Sheraton Hall ABC

Ashley L. M. Platt¹ (plaal004@mymail.unisa.edu.au), Matthias Schlesewsky¹, Ina Bornkessel-Schlesewsky¹; ¹University of South Australia

Successful language comprehension is facilitated by congruency between linguistic input and the current local context. A lack of congruency is thought to be reflected in the N400 ERP response, with unexpected words eliciting larger N400 amplitudes. One way to probe this relationship is via surprisal, the negative log probability of a word given its preceding context, which has typically been calculated using restricted context windows of only a few words prior to the target word. However, large language models now allow surprisal to be calculated over the entirety of the preceding linguistic context, providing an opportunity to explore the sensitivity of the N400 to contextual richness. Here, 40 participants (27 female, mean age 24.6 years) listened to 12 short stories while their electroencephalogram was recorded. Word-by-word surprisal values were calculated via GPT-2 using two context windows, sentence-level and story-level, allowing us to examine whether the N400 is sensitive to contextual information accrued across the story and outside the current sentence. We used linear mixed-effects models to compare the effect of the two surprisal predictors on N400 amplitudes. Akaike Information Criterion (AIC)-based model comparisons revealed that the story-level surprisal model explained more variance in N400 amplitudes than the sentence-level model. Story-level surprisal interacted with word position and log word frequency, with the N400-surprisal relationship strongest for low-frequency words at the beginning of each story. This finding indicates that the N400 reflects rich contextual information extending beyond the current sentence. We suggest that the approach employed here could be used to shed light on how individuals adapt their internal language models to local context.
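For readers who want to see the surprisal step concretely, the sketch below shows one plausible way to compute word-by-word GPT-2 surprisal under the two context windows described above, using the Hugging Face transformers library. The "gpt2" checkpoint, the word_surprisals helper, and the example story text are illustrative assumptions; the abstract does not specify the authors' actual pipeline.

```python
# A minimal sketch of GPT-2 surprisal under two context windows.
# The helper name, the "gpt2" checkpoint, and the story text are
# illustrative assumptions, not the authors' implementation.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def word_surprisals(context: str, continuation: str):
    """Surprisal, in bits, of each token of `continuation` given `context`.

    Surprisal(w) = -log2 P(w | preceding context). GPT-2 operates on
    subword tokens; in practice, subword surprisals are summed to obtain
    word-level values (omitted here for brevity).
    """
    ctx_ids = tokenizer.encode(context) if context else [tokenizer.bos_token_id]
    cont_ids = tokenizer.encode(continuation)
    ids = torch.tensor([ctx_ids + cont_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    results = []
    for pos, tok in enumerate(cont_ids, start=len(ctx_ids)):
        # Logits at position pos-1 give the distribution over the token at pos.
        lp = log_probs[0, pos - 1, tok].item()
        results.append((tokenizer.decode([tok]), -lp / math.log(2)))
    return results

story_so_far = "A fox had lived by the river all winter. Food was scarce."
sentence = "The fox finally caught a fish."

# Story-level window: condition on everything heard so far in the story.
story_level = word_surprisals(story_so_far, sentence)
# Sentence-level window: condition only on earlier words of the current sentence.
sentence_level = word_surprisals("", sentence)
```

The two resulting predictors would then enter linear mixed-effects models of single-trial N400 amplitude (presumably with random effects for participants and items) and be compared via AIC; that step is standard mixed-model machinery and is not sketched here.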

Topic Area: LANGUAGE: Other
