Poster F112, Tuesday, March 28, 8:00 – 10:00 am, Pacific Concourse
Accounting for nonlinearities in models of language processing: Can linear regression get the job done?
Sean McWhinney1, Kaitlyn Tagarelli1, Antoine Tremblay1, Aaron Newman1; 1Dalhousie University
Strategies in language processing, and their associated neural substrates, are commonly assumed to be homogeneous among a language's native speakers. However, mounting evidence suggests that individual differences in vocabulary, grammatical ability, and cognition influence the onset and latency of event-related potential (ERP) components elicited during sentence processing. With increasingly complex research questions and the potential for high-level, nonlinear interactions, conventional statistical approaches to examining ERP data face limitations, and new techniques must be explored. The present study investigated advances in analytical approaches in the context of language processing. Participants completed a test battery to assess a range of cognitive skills (including language, working memory, and attention), habits (e.g., reading), and demographics. Sentences were presented that were well-formed or contained one of three types of violations: semantic, phrase structure, or morphosyntactic. Data were analyzed using two different variants of the general linear model (GLM): linear mixed-effects modelling (LME) and generalized additive mixed modelling (GAMM). Both offer advantages over more traditionally used variants of the GLM (e.g., ANOVA, linear regression), including support for random effects to account for structured variability in the residuals and greater tolerance of heteroscedasticity. GAMM, however, additionally fits observations to a series of restricted cubic splines, allowing for nonlinear effects and interactions between variables, and is thus capable of detecting ceiling/floor effects and nuanced fluctuations in effect sizes. Our results revealed important nonlinearities that GAMM was able to capture, reducing model residuals and improving sensitivity to effects.
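To make the contrast between the two modelling approaches concrete, the sketch below fits a linear mixed-effects model and a spline-based additive model to simulated ERP-style data. It is a minimal illustration only: the libraries (statsmodels, pyGAM), variable names, and simulated data are assumptions introduced here, not the authors' actual pipeline, and pyGAM fits a GAM without random effects, whereas a full GAMM analysis would more typically be run in R (e.g., lme4 and mgcv).

# Illustrative sketch only: simulated data with a nonlinear (saturating)
# effect of a vocabulary covariate on ERP amplitude. All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from pygam import LinearGAM, s, f

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 50
subject = np.repeat(np.arange(n_subj), n_trials)
vocab = np.repeat(rng.normal(0, 1, n_subj), n_trials)      # per-subject covariate
violation = rng.integers(0, 2, n_subj * n_trials)           # 0 = well-formed, 1 = violation
# Simulated amplitude: a ceiling-like (tanh) vocabulary effect in violation trials,
# plus by-subject random intercepts and trial-level noise.
amplitude = (2 * violation * np.tanh(vocab)
             + np.repeat(rng.normal(0, 0.5, n_subj), n_trials)
             + rng.normal(0, 1, n_subj * n_trials))
df = pd.DataFrame({"subject": subject, "vocab": vocab,
                   "violation": violation, "amplitude": amplitude})

# Linear mixed-effects model: fixed effects of condition and vocabulary,
# random intercept per subject. The linear term will miss the saturation.
lme = smf.mixedlm("amplitude ~ violation * vocab", df,
                  groups=df["subject"]).fit()
print(lme.summary())

# Spline-based additive model: the s() term lets the vocabulary effect be
# nonlinear; f() treats condition as a factor. No random effects here,
# so this is a GAM rather than a full GAMM.
X = df[["vocab", "violation"]].to_numpy()
gam = LinearGAM(s(0) + f(1)).fit(X, df["amplitude"].to_numpy())
gam.summary()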
Topic Area: METHODS: Other