Cognitive Neuroscience Society


Communication Control: The Brain Activity that Monitors Our Speech

September 24, 2019

When we communicate with others, we are constantly monitoring our speech and theirs — taking in multiple external cues — to best engage in meaningful conversation. Despite the multidimensional aspects of speech monitoring, most studies on the topic to date have focused on how we produce a string of accurately sequenced sound units rather than how we actively work to control our speech.

In a new study with senior authors Katie L. McMahon and Greig I. de Zubicaray of Queensland University of Technology in Australia, researchers sought to understand what happens in the brain while we monitor our speech. “We were seeking to clarify the neural mechanisms underlying speech monitoring and inhibition, which might help us better understand neurological disorders characterized by impairments of these processes, such as Tourette’s syndrome and stuttering,” says lead author and Ph.D. student Samuel Hansen at the University of Queensland.

As published in the Journal of Cognitive Neuroscience, the researchers used a modified stop signal task during fMRI to identify the areas of the brain engaged when attempting to halt speech — finding different patterns of brain activity for successful versus unsuccessful speech halting. CNS spoke with Hansen to learn more about the study design, implications of the findings, and next directions for the research.

CNS: How did you become personally interested in this research area?

Hansen: Language, for me, has always been the most uniquely defining feature of the human condition. Speaking seems at once so familiar and natural, yet the ease with which we translate thought into sound belies its underlying complexity. Interrupting a conversational partner — to signal a misunderstanding, to correct an inaccuracy, or to chime in with a clarification — is quite common in everyday speaking. It is important that we integrate production and perception fields of language research and study them in concert rather than in isolation.


CNS: Can you highlight any novel aspects of your study design?

Hansen: The majority of research on the neural mechanisms of speech errors has involved people with aphasia, as few lab paradigms can generate large numbers of errors in healthy participants. To our knowledge, no one had previously designed an fMRI study using a modified stop signal task with picture naming and spoken words as stop signals, calibrated to produce errors on 50% of stop trials. This design allowed us to identify the neural mechanisms of successful versus unsuccessful speech inhibition for the first time; that is, we were able to show which brain regions are engaged when we commit a production error.
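The article doesn't say how the 50% error rate was achieved, but stop-signal studies commonly titrate the stop-signal delay (SSD) with a one-up/one-down staircase: the delay grows after each successful stop (making the next stop harder) and shrinks after each failure. A minimal sketch of that logic, with all parameter values and the toy race model hypothetical rather than taken from the study:

```python
import random

def update_ssd(ssd, stopped, step=50, lo=0, hi=900):
    # One-up/one-down staircase: a successful stop makes the next
    # trial harder (longer delay); a failed stop makes it easier.
    ssd = ssd + step if stopped else ssd - step
    return max(lo, min(hi, ssd))

def simulate(n_trials=400, seed=1):
    # Toy horse-race model: the spoken response "wins" (a stop
    # failure) if it finishes before the stop process, which starts
    # at stop-signal onset (the SSD).
    rng = random.Random(seed)
    ssd, failures = 300, 0
    for _ in range(n_trials):
        go_rt = rng.gauss(600, 100)          # naming latency (ms)
        stop_rt = ssd + rng.gauss(200, 40)   # stop finish time (ms)
        stopped = stop_rt < go_rt
        failures += not stopped
        ssd = update_ssd(ssd, stopped)
    return failures / n_trials
```

Because the staircase tracks the point where stopping succeeds half the time, the simulated failure rate settles near 50%, which is what yields balanced successful/unsuccessful trial counts for the fMRI contrast.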


CNS: Why did you choose to present stop signals phonologically similar to the target picture name? Can you describe how that worked?

Hansen: The dominant account of speech monitoring is Willem Levelt’s classic “perceptual loop theory”. It proposes that speech production is monitored via two loops that feed into the speech perception/comprehension system. The outer loop uses overt speech as input (akin to hearing others speak). The inner loop is proposed to use internally generated, phonologically specified/encoded representations as input to the speech perception system.

We reasoned this inner loop should therefore be sensitive to phonologically similar versus dissimilar words presented as stop signals during naming. If the stop signal shared the same initial phoneme with the target picture name — e.g., hearing “cabbage” while naming “camel” — we expected the perception system would have to process the final phoneme before detecting the discrepancy between inner and overt speech and so take longer to halt production. Our results failed to confirm this prediction, raising questions about the inner loop account or at the very least the assumption that it operates at a level of phonological representation.
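The similarity manipulation described above amounts to pairing each picture name with stop-signal words that do or don't share its onset. A toy illustration, using the first letter as a stand-in for the initial phoneme (the "turnip" item is hypothetical; "camel"/"cabbage" is the example from the interview):

```python
def onset_overlap(target: str, signal: str) -> bool:
    """Approximate a shared initial phoneme by a shared first letter."""
    return target[:1].lower() == signal[:1].lower()

# Hearing "cabbage" while naming "camel" is the phonologically
# similar condition; an unrelated word is the dissimilar condition.
pairs = [("camel", "cabbage"), ("camel", "turnip")]
conditions = {s: ("similar" if onset_overlap(t, s) else "dissimilar")
              for t, s in pairs}
```

Under the inner-loop account, the "similar" items should take longer to halt, since the discrepancy only emerges at a later phoneme; the study's null result on this contrast is what challenges the phonological version of the theory.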


CNS: What were you most excited to find?

Hansen: We were very excited to find evidence that halting production engaged both language-specific and domain-general neural mechanisms. In addition, some of these regions were known to be involved in stuttering, providing some nice converging evidence. We were a bit surprised that we were unable to find any behavioural or neural evidence for speech monitoring occurring at a level of processing phonological representations, but this is consistent with other recent findings that have questioned the perceptual loop account.


CNS: What’s next for this line of work?

Hansen: Understanding monitoring and control of speaking is important but the next stage involves studying the neural mechanisms of speech repair processes. Once speech is interrupted, how does the speaker determine what to say next? That will be an exciting direction for research.


CNS: Anything I didn’t ask you about that you’d like to add?

Hansen: I would like to acknowledge the wonderful people I work alongside, who encourage and inspire me. I have been lucky to be part of a vibrant team of researchers who have fostered and fueled my academic curiosity and enthusiasm.

-Lisa M.P. Munoz


