Brain Response to Semantic Violations in a Miniature Artificial Language about Time
Seana Coulson¹, Tania Delgado¹, Tyler Marghetis², Tessa Verhoef¹﹐³, Esther Walker¹; ¹University of California, San Diego, ²Indiana University, ³Leiden University
Do people have a bias to learn some time-space mappings over others? Here we recorded EEG as participants learned an artificial language derived from a social communication game. The mini-language comprised 16 discrete 1.5-second movements of a round knob along a vertical bar, each corresponding to a particular temporal concept (e.g., day, second, year). The behavioral study revealed two strategies: (1) a duration mapping strategy, hypothesized to reflect a cognitive bias, in which participants used larger portions of the bar to refer to longer temporal intervals; and (2) an order mapping strategy, which relied more on social interaction, in which dyads used either the top or the bottom of the bar to refer to concepts in the future. Participants viewed each 1.5-second signal followed by a potential English translation of the signal (e.g., yesterday), and then pushed a button to indicate whether or not the translation was correct. Each response triggered a feedback tone that enabled learning. ERPs were time-locked to the onset of the English translations. Mean ERP amplitude was measured 300–500 ms post-stimulus onset to index the N400, and 500–800 ms post-stimulus onset to index the P600. Analyses contrasted ERPs to matching versus mismatching translations. Duration violations elicited an enhanced N400 in the first block, and an N400/P600 complex in the second block. A similar analysis of order mappings revealed no reliable ERP effects in the first block, and a significantly larger-amplitude N400 for order violations in the second block. Results suggest duration mappings were learned more rapidly than order mappings.
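The mean-amplitude measure described above (averaging voltage within the 300–500 ms N400 window and the 500–800 ms P600 window, then contrasting matching vs. mismatching translations) can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: the function name, sampling rate, trial counts, and the synthetic data (with an injected negative-going N400-window effect) are all assumptions for demonstration.

```python
import numpy as np

def mean_amplitude(epochs, times, t_start, t_end):
    """Mean voltage across trials and samples within [t_start, t_end) seconds.

    epochs: (n_trials, n_samples) array of voltage at one electrode.
    times:  (n_samples,) array of sample times relative to stimulus onset.
    """
    mask = (times >= t_start) & (times < t_end)
    return epochs[:, mask].mean()

# Synthetic example: 250 Hz sampling, epochs from -0.2 to 1.0 s.
fs = 250
times = np.arange(-0.2, 1.0, 1 / fs)
rng = np.random.default_rng(0)
match = rng.normal(0.0, 1.0, (40, times.size))      # 40 matching trials
mismatch = rng.normal(0.0, 1.0, (40, times.size))   # 40 mismatching trials
# Inject a negative-going deflection in the 300-500 ms (N400) window
# on mismatch trials only, mimicking a semantic-violation effect.
mismatch[:, (times >= 0.3) & (times < 0.5)] -= 4.0

# Violation-minus-match contrasts in the two reported windows
n400_effect = (mean_amplitude(mismatch, times, 0.3, 0.5)
               - mean_amplitude(match, times, 0.3, 0.5))
p600_effect = (mean_amplitude(mismatch, times, 0.5, 0.8)
               - mean_amplitude(match, times, 0.5, 0.8))
```

With this synthetic data, `n400_effect` is strongly negative (the injected violation response) while `p600_effect` hovers near zero; in practice such window means per condition would feed into the statistical contrast.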
Topic Area: LANGUAGE: Semantic