
Using Fruits and Veggies to Break Down How We Remember and Identify Objects

September 4, 2014

credit: http://commons.wikimedia.org/wiki/File:Fruit_and_vegetable_vendor_at_St._Jacobs_Farmers_Market,_2011_July_7.jpg

Guest Post by Marc Coutanche, Yale University 

From a young age, we learn the differences between a lemon and a lime and dozens of other fruits, making a trip to the farmers' market to shop for fruit seem like a simple task. But despite appearances, very little is simple about holding what you want in mind, and then identifying it in the world — whether that is a lime in the market or keys on a cluttered counter. It's a testament to the evolution of the brain that, past childhood, it's hard to even imagine object identification as anything other than effortless. But if you've known someone with Alzheimer's disease, or certain other neurological disorders, its fallibility can become all too clear.

This is perhaps most strikingly apparent from observing patients who have developed “associative visual agnosia” after damage to the brain. The impairment can leave a person unable to identify (previously recognized) objects, despite having potentially perfect vision. These patients can even draw an object that’s placed in front of them, with little recognition of what it is; for example, they might draw a perfect replica of a carrot, with no idea that it’s a food. How does our brain store our knowledge of the thousands upon thousands of objects that we encounter in our lifetime, so that we can recognize them effortlessly from their features?

In a new research study, my coauthor, Sharon Thompson-Schill, and I found evidence that our knowledge of objects draws on a centralized hub in the brain. This hub pulls together dispersed pieces of information about an object’s particular shape, color, and so on, from sensory areas. Understanding these hubs, and how they integrate features, could prove critical to ultimately understanding cases where memory can fail, such as in Alzheimer’s disease.

In the past decade, new machine learning approaches that can ‘decode’ brain activity from fMRI scans have provided opportunities to tackle questions about the brain in new and exciting ways. The approach itself — seeing if brain activity patterns alone can be used to predict what someone is perceiving or thinking about — can sound like something from the pages of a sci-fi novel, but asking this question can tell us a lot about how the brain encodes information. The impressive success of decoding methods comes from their ability to pool together information from distributed populations of neurons.
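
To make the idea concrete, here is a minimal sketch of this kind of pattern decoding, written in Python with scikit-learn and entirely synthetic "voxel" data. It illustrates the general approach (training a classifier on distributed activity patterns and testing it with cross-validation), not the actual pipeline, brain regions, or parameters used in the study.

```python
# A minimal, hypothetical sketch of multivoxel pattern decoding.
# Synthetic data stand in for fMRI voxel patterns; nothing here reproduces
# the study's actual preprocessing, regions, or classifier settings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels = 40, 200
labels = np.repeat([0, 1], n_trials_per_class)  # e.g., 0 = lime, 1 = tangerine

# Each class has a slightly different mean pattern across voxels, buried in
# trial-to-trial noise: the distributed signal a decoder pools together.
class_patterns = rng.normal(0, 0.5, size=(2, n_voxels))
X = class_patterns[labels] + rng.normal(0, 1.0, size=(len(labels), n_voxels))

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, X, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")  # chance is 0.50
```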

Imagine activity in the brain as a symphony. Previous fMRI methods allowed us to listen to one instrument at a time, but machine learning methods let us hear the whole orchestra: in this case, patterns of brain activity. Just as it's easier to identify a musical piece when all the instruments are playing, we can now identify what the brain is processing with a lot more specificity than we previously thought possible.

credit: Marc Coutanche

Can you spot the lime in this noise? Researchers used machine-learning methods to predict, from brain activity patterns alone, which fruit a person was looking for — before it appeared in noise like this.

In our recent study published in Cerebral Cortex, we investigated how knowledge is organized in the brain by having people visually search for fruits and vegetables. Previously, researchers have decoded memories for very distinct categories of items, such as faces versus vehicles. Decoding different fruits and vegetables is a lot more specific, and this category of objects has properties, such as systematic variations in shape and color, that are well suited to studying “semantic memory” — our knowledge of the objects we’ve encountered throughout our lives.

Some theories suggest that semantic memory has no central location — that it is distributed across the sensory and motor brain areas involved in seeing, hearing, touching, and manipulating objects. For example, your knowledge of a telephone would be spread across your auditory, visual, and motor cortices. Other theories suggest that one or more centralized hubs are important. One such idea is that our brain contains “convergence zones,” each of which integrates converging information from other brain areas. So your knowledge of limes might come from the successful integration of shape, color, and taste information at a convergence site. A key motivation for our study was to test for evidence of such a convergence zone, and for evidence of converging object properties.

In our experiment, we recorded participants’ brain activity with an fMRI scanner, while asking them to look for one of four fruits and vegetables — carrots, celery, limes, or tangerines. We wanted to probe memory, rather than current perception, so we couldn’t just show images of the fruits and vegetables. Instead, we asked participants to look for objects hidden within colorful visual noise (which looks like static on a screen).

In each trial, we first told our participants which fruit or vegetable to look for, and then showed them images of random visual noise. After some time, an object appeared, concealed inside the static. Importantly, we only looked at the brain activity recorded before the object appeared: while our participants were still looking at totally random noise. Focusing on the brain activity collected when they were holding an object in mind (without seeing it) let us truly probe internally-driven brain activity. We wanted to see if this activity would lead us to the location of a centralized hub.
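
As a rough illustration of what restricting the analysis to the pre-object period means in practice, the sketch below selects only the fMRI volumes acquired before the object began to emerge from the noise. The timing values, variable names, and the absence of any hemodynamic-lag correction are illustrative assumptions, not the study's actual parameters.

```python
# Hypothetical sketch of restricting analysis to the search period, i.e., the
# volumes collected while the screen showed only noise. All timings are made up.
import numpy as np

TR = 2.0             # seconds per fMRI volume (assumed)
cue_onset = 0.0      # participant told which fruit or vegetable to find
object_onset = 10.0  # object starts to emerge from the visual noise (assumed)

n_volumes = 12
volume_times = np.arange(n_volumes) * TR

# Keep only volumes acquired before the object appeared: brain activity here
# reflects what the participant is holding in mind, not what they are seeing.
search_period = volume_times < object_onset
print(volume_times[search_period])  # volumes that would enter the decoding
```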

Sure enough, we found that we could decode object identity in just one location: the left anterior temporal lobe, which lies a few inches above and to the front of the left ear. This finding is consistent with previous studies that point to the anterior temporal lobes as being important for semantic memory. For example, the conceptual errors made by dementia patients — including mistakes in naming fruits, and matching fruit names to pictures — are associated with deterioration in this brain region.

Interestingly, the memory-generated activity patterns that we found were very similar to activity patterns we observed when the participants were actually viewing images of each fruit or vegetable. To continue the musical analogy, we found a similar symphony when our participants were both seeing and thinking about the objects.
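
One simple way to picture this kind of similarity, using made-up numbers, is to correlate each object's average search-period pattern with its average viewing pattern. The sketch below does only that; it is not the comparison method reported in the paper, just an illustration of the logic.

```python
# Hypothetical sketch: compare memory-driven (search) and perception-driven
# (viewing) activity patterns for each object. Synthetic data only.
import numpy as np

rng = np.random.default_rng(3)
objects = ["carrot", "celery", "lime", "tangerine"]
n_voxels = 100

# Assume each object has an underlying pattern expressed (noisily) in both
# the search period and the viewing period.
true_patterns = {obj: rng.normal(0, 1, n_voxels) for obj in objects}
search = {obj: p + rng.normal(0, 0.8, n_voxels) for obj, p in true_patterns.items()}
viewing = {obj: p + rng.normal(0, 0.8, n_voxels) for obj, p in true_patterns.items()}

for obj in objects:
    r = np.corrcoef(search[obj], viewing[obj])[0, 1]
    print(f"{obj:>9}: search-vs-viewing pattern correlation r = {r:.2f}")
```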

We next wanted to see which brain regions send the converging information that ultimately gives rise to object identity in the anterior temporal lobe. For this, we turned to the visual processing areas responsible for shape and color. We had chosen our fruits and vegetables deliberately: two are green (lime and celery); two are orange (tangerine and carrot); two are elongated (carrot and celery); and two are near-spherical (lime and tangerine).

The idea was to “train” our machine learning decoder to look specifically for brain activity patterns in regions associated with identifying shape and color, without picking up on activity associated with other distinguishing features, such as taste. We used the pairs of objects to our advantage here, by training the decoder to distinguish two of the fruits and vegetables (limes versus tangerines for color), and asking how it would classify other items with similar features (celery versus carrots). When using activity from a brain region associated with processing color, our decoder ‘mistook’ limes for celery, and tangerines for carrots. And the decoders that used data from the shape-processing area confused carrots with celery, and tangerines with limes. Those results made us confident that the decoders were correctly identifying color and shape information in the early visual regions.
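
The sketch below illustrates this cross-decoding logic with synthetic data: a classifier is trained to tell limes from tangerines and is then asked to label celery and carrots. In a region whose patterns track color, celery tends to be labeled "lime" and carrot "tangerine." The classifier choice, data, and region are all assumptions made for illustration, not the study's own.

```python
# Hypothetical cross-decoding sketch: train on one object pair, test on another
# pair that shares a feature (here, color). Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_voxels = 30, 150

# Pretend this region codes color: one pattern for green, one for orange.
green_pattern = rng.normal(0, 0.6, n_voxels)
orange_pattern = rng.normal(0, 0.6, n_voxels)

def simulate_trials(pattern):
    return pattern + rng.normal(0, 1.0, size=(n_trials, n_voxels))

lime, tangerine = simulate_trials(green_pattern), simulate_trials(orange_pattern)
celery, carrot = simulate_trials(green_pattern), simulate_trials(orange_pattern)

# Train on limes (label 0) versus tangerines (label 1)...
decoder = LogisticRegression(max_iter=1000)
decoder.fit(np.vstack([lime, tangerine]), np.repeat([0, 1], n_trials))

# ...then ask how it labels the other pair. In a color-coding region, celery
# should be "mistaken" for lime and carrot for tangerine.
print("celery labeled 'lime':     ", (decoder.predict(celery) == 0).mean())
print("carrot labeled 'tangerine':", (decoder.predict(carrot) == 1).mean())
```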

We then reasoned that if shape and color really do converge on the left anterior temporal lobe, our object-decoders should find it easier to identify the searched-for object (e.g., a tangerine) when both its color (orange) and shape (spherical) brain activity patterns are found in their respective regions. We found exactly this: a decoder could better identify an object from brain activity in the left anterior temporal lobe when both its color and shape were identified from converging feature regions.
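
A toy version of this convergence analysis is sketched below: per-trial "correct/incorrect" outcomes for the color region, the shape region, and the anterior temporal lobe are simulated, and object-decoding accuracy is compared between trials where both feature regions showed the correct feature and trials where they did not. The numbers are fabricated to mirror the qualitative pattern described here, not taken from the study.

```python
# Hypothetical sketch of the convergence comparison. All outcomes are simulated;
# a real analysis would derive them from separate decoders on real fMRI data.
import numpy as np

rng = np.random.default_rng(2)
n_trials = 200

color_correct = rng.random(n_trials) < 0.65  # color region showed the right hue
shape_correct = rng.random(n_trials) < 0.65  # shape region showed the right form
both_features = color_correct & shape_correct

# Build in the qualitative pattern reported in the post: object decoding in the
# anterior temporal lobe is more accurate when both features are present.
atl_correct = np.where(both_features,
                       rng.random(n_trials) < 0.70,
                       rng.random(n_trials) < 0.50)

print(f"ATL accuracy when both features found: {atl_correct[both_features].mean():.2f}")
print(f"ATL accuracy otherwise:                {atl_correct[~both_features].mean():.2f}")
```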

The results of this study give support to theories that our brain contains one or more convergence zones that integrate object properties. This work is also the first to identify and link together the distinct brain patterns associated with both an object and its specific properties (color and shape). As part of the next steps in our research, we are now looking at how this knowledge becomes integrated in our brain during learning.

–

Marc Coutanche is a postdoctoral fellow at Yale University. He conducted this research with Sharon Thompson-Schill while at the University of Pennsylvania.

Are you a member of CNS with an interest in blogging? Consider contributing a guest post about your work or trends in the field. Email your ideas to CNS Public Information Officer, Lisa M.P. Munoz (cns.publicaffairs@gmail.com). 

 
