Poster Session D, Monday, March 25, 8:00 – 10:00 am, Pacific Concourse
Learned Feature Distributions Predict Visual Search and Working Memory Precision
Phil Witkowski¹,², Joy Geng¹,²; ¹University of California, Davis, ²Center for Mind and Brain, University of California, Davis
Previous research has established that attention operates by selectively enhancing sensory processing of task-relevant target features held in working memory. Much of this literature uses search displays in which the target exactly matches the cued features. However, real-world visual search rarely involves targets that are identical to our memory representations. The ability to handle cue-to-target variability is a critical but understudied aspect of visual attention. In these studies, we tested the hypothesis that top-down attentional biases are sensitive to the reliability of target feature dimensions over time. In two experiments, subjects completed a visual search task in which they saw a target cue composed of a particular motion direction and color, followed by a visual search display containing multiple distractors. The target features differed from the cue: one dimension was drawn from a distribution narrowly centered on the cued feature (the reliable dimension), while the other was drawn from a broad distribution (the unreliable dimension). The results demonstrate that subjects learned the distributions of cue-to-target variability for the two dimensions and used that information to bias working memory and attentional selection: reaction times and first saccades were better predicted by similarity to the cue along the reliable dimension than along the unreliable dimension, and working memory probe responses were more precise for the reliable dimension. Moreover, working memory precision predicted individual variation in search performance. Our results suggest that observers are sensitive to the learned reliability of individual features within a target and use this information adaptively to weight mechanisms of attentional selection.
Topic Area: ATTENTION: Nonspatial