Poster A106, Saturday, March 25, 5:00 – 7:00 pm, Pacific Concourse
Touchscreen-based speech production without vocal tract sensory feedback
Megan Thompson1, John Houde2, Hardik Kothare2, Srikantan Nagarajan2; 1UC Berkeley-UC San Francisco Joint Graduate Group in Bioengineering, 2UCSF Biomagnetic Imaging Laboratory
Sensory feedback plays a crucial role in speech production, both in healthy individuals and in individuals with speech disorders. This study investigates speech production in the absence of vocal tract somatosensory feedback by training subjects to use a touchscreen-based speech production platform. Contact with the screen produces a vowel that depends on touch location, and any vowel within a wide formant range can be produced. As subjects performed the experiment, both accuracy (the distance between the target and the response) and precision (the distance among repeated responses to the same target) improved rapidly within 30 trials. Further, after training, subjects responded to novel targets with much greater accuracy than they had shown on pre-training targets, indicating that the training generalized to other speech sounds. When the map was subjected to a post-training 150 Hz shift in F2, subjects shifted their responses to certain targets to compensate, just as speakers compensate for formant alterations in vocal speech. Stable, rapid increases in both precision and accuracy, generalization to novel targets, and adaptation to changes imply the development of an internal sensorimotor map that allows subjects to predict the auditory consequences of each touch and to develop the stereotyped responses necessary to achieve the desired feedback. These findings provide evidence that healthy adults can rapidly learn a new platform of speech production, without vocal tract feedback, that bears similarities to vocal speech.
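The core idea of the platform and its metrics can be sketched in code. Everything below is an illustrative assumption, not the study's implementation: the linear touch-to-formant mapping, the formant ranges, and the function names are hypothetical; the abstract only states that touch location determines the vowel and that accuracy and precision are distance-based.

```python
import math

# Assumed formant ranges (Hz); the actual ranges are not given in the abstract.
F1_RANGE = (250.0, 850.0)
F2_RANGE = (600.0, 2500.0)

def touch_to_formants(x, y):
    """Hypothetical linear map from normalized touch coordinates
    (x, y in [0, 1]) to an (F1, F2) vowel formant pair."""
    f1 = F1_RANGE[0] + y * (F1_RANGE[1] - F1_RANGE[0])
    f2 = F2_RANGE[0] + x * (F2_RANGE[1] - F2_RANGE[0])
    return f1, f2

def accuracy(target, response):
    """Accuracy as Euclidean distance in formant space between
    the target vowel and the produced response (smaller is better)."""
    return math.dist(target, response)

def precision(responses):
    """Precision as the mean distance of each repeated response
    to the centroid of all responses to the same target."""
    n = len(responses)
    centroid = (sum(r[0] for r in responses) / n,
                sum(r[1] for r in responses) / n)
    return sum(math.dist(r, centroid) for r in responses) / n
```

Under this sketch, the reported F2 perturbation would correspond to adding 150 Hz to the F2 output of the map, so that reaching a given target vowel requires touching a shifted screen location.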
Topic Area: PERCEPTION & ACTION: Motor control