
Invited Symposia

The Science and Engineering of the Speaking Brain

Invited Symposium 3: Tuesday, April 16, 2024, 10:00 am – 12:00 pm EDT, Ballroom East

Chair: Gopala Anumanchipalli1; 1UC Berkeley / UC San Francisco
Presenters: Greg Hickok, Liberty Hamilton, Stephanie Ries, Gopala Anumanchipalli

This session will bring together the latest in our understanding of the neural substrates of speech communication, guided by large-scale studies of cortical recordings in humans, spanning both speech-impaired and control populations, a wide range of ages, and the various cognitive processes that underlie speaking. We will then discuss current engineering approaches to rehabilitating speech-impaired individuals with cortical implants, building on these insights, and the neural engineering behind Brain-Computer Interfaces for communication.

Presentations

Beyond Broca: Neural architecture and evolution of a dual motor speech coordination system

Greg Hickok1; 1Departments of Cognitive Sciences and Language Science, UC Irvine

Classical neural architecture models of speech production propose a single system, centered on Broca’s area, that coordinates all the vocal articulators from lips to larynx. Modern evidence has challenged both the idea that Broca’s area is involved in motor speech coordination and the assumption that there is only one coordination network. Drawing on a wide range of evidence, here we propose a dual speech coordination model in which laryngeal control of pitch-related aspects of prosody and song is coordinated by a hierarchically organized dorsolateral system, while supralaryngeal articulation at the phonetic/syllabic level is coordinated by a more ventral system posterior to Broca’s area. We argue further that these two speech production subsystems have distinguishable evolutionary histories and discuss the implications for models of language evolution.

Modulation of neural responses during self-generated speech using intracranial recordings in children and adults

Liberty Hamilton1; 1Department of Speech, Language, and Hearing Science; Department of Neurology; The University of Texas at Austin

Prior work on speech processing in the brain using intracranial recordings has shown that the superior temporal gyrus (STG) can be separated into two subregions: a posterior one that encodes acoustic onsets, and a more anterior one that shows a more sustained response. The onset responses in the posterior STG appear to be important for segmenting continuous speech, but it is unclear how these responses are modulated by audiomotor feedback during speech production, for example, when people hear their own voice versus external speech. In this talk, I will describe our findings from intracranial recordings in 17 patient participants across a wide age range (8–37 years) who performed a dual speaking and listening task while signals were recorded from auditory, motor, prefrontal, and insular regions of the brain. Participants read sentences aloud and then heard either immediate playback of the same sentence they had said, or playback of another sentence they had uttered in a previous trial. Overall, we found strong, specific suppression of neural onset responses in the STG that was not related to the predictability of the playback. In addition, we found a specific subregion of the insula that exhibited fast-latency dual onset responses during both perception and production. Our results have implications for understanding audiomotor feedback and the interactions between naturalistic speech perception and production.

Brain Dynamics of Cognitive Control Mechanisms in Language Production

Stephanie Ries1; 1School of Speech, Language, and Hearing Sciences and Center for Clinical and Cognitive Neuroscience, San Diego State University, San Diego, California, USA.

Although producing language seems relatively easy, a number of cognitive processes are needed to transform ideas into language output. Cognitive control processes are thought to play a crucial role in resolving different forms of conflict throughout language production. Our research examines the brain dynamics of cognitive control processes as they interact with core language processes, using behavioral data and scalp and intracranial EEG. Results show that different parts of the prefrontal cortex are involved in resolving different types of conflict (proactive vs. reactive) at different time points, and that these prefrontal areas are functionally connected with core language regions in the left posterior temporal cortex as we produce language.

Current state of Communication Neuroprostheses

Gopala Anumanchipalli1; 1EECS, UC Berkeley / Neurosurgery, UC San Francisco

In this talk, I will review the latest advances in engineering approaches to creating neuroprostheses for communication (speech Brain-Computer Interfaces). I will summarize efforts toward understanding the neural bases of speech production in the speech motor cortex, the primary region of interest for current speech prostheses. I will then report on the state of current brain-decoding approaches for communication (within the BRAVO clinical trial at UCSF) in speech-impaired populations using invasive neural implants. I will discuss challenges and opportunities in bringing more insights from the neurophysiology of speech production to future neuroprostheses.
