Poster F57
Investigating Task Modulation in Reading Through Large Language Models
Poster Session F - Tuesday, March 10, 2026, 8:00 – 10:00 am PDT, Fairview/Kitsilano Ballrooms
Sijie Ling1, Alona Fyshe1; 1University of Alberta
Task instructions influence human reading behaviour, producing task-specific patterns of attention allocation and brain activation. For instance, counting the words in a sentence engages different brain processes than translating it. However, how the brain selectively encodes text to meet task demands remains unclear. In this study, we use large language models (LLMs) to examine changes in information encoding driven by task demands, establishing methods for exploring similar task-related effects in humans with neurophysiological data. To simulate the same person reading identical stimulus sentences under diverse task instructions, we designed prompts for six language tasks. We provided the prompts together with the sentences as input to several LLMs and extracted embeddings for the sentence words at each layer. Our analysis revealed four key findings. First, task-driven differences in representational geometry (quantified by the participation ratio, a measure of dimensionality) emerged in lower LLM layers. Second, despite task divergence, embeddings retained more than 60% shared information even in the final layers. Third, probing classifiers successfully decoded task-irrelevant features, confirming retention of general linguistic information. Fourth, task-optimized soft prompts, compared with natural language instructions, amplified all effects: greater representational divergence, less information overlap, and reduced task-irrelevant decoding, indicating stronger task specialization. These simulation results demonstrate that while task-specific prompts direct the model to focus on relevant aspects of the input text, general information is still retained in the embeddings. In future work, we will use a similar framework to study task-related changes in human language processing.
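The participation ratio used in the abstract has a standard definition: given the eigenvalues of the covariance matrix of a set of embeddings, PR = (Σλ)² / Σλ². A minimal sketch of this computation (the function name and the layout of the embedding matrix are illustrative assumptions, not the authors' code):

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of embeddings X with shape (n_tokens, hidden_dim).

    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    It equals hidden_dim for isotropic data and 1 for rank-1 data.
    """
    X = X - X.mean(axis=0)                # center each feature
    cov = np.cov(X, rowvar=False)         # (hidden_dim, hidden_dim) covariance
    eig = np.linalg.eigvalsh(cov)         # eigenvalues of the symmetric covariance
    eig = np.clip(eig, 0.0, None)         # guard against tiny negative values
    return eig.sum() ** 2 / (eig ** 2).sum()
```

For example, four points spread equally along two orthogonal axes give PR = 2, while points lying on a single line give PR = 1, matching the interpretation of PR as an effective dimensionality of the representation geometry.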
Topic Area: LANGUAGE: Other
March 7 – 10, 2026