March 25–28, 2023

CNS 2023 | Symposium Sessions

1 Stable Perception in the Wavering Brain - Reconciling Perceptual Stability with Dynamic Neuronal Representations Sunday, March 26, 1:30PM - 3:30PM (PT) Bayview Room
2 'Stop Thinking About It!': Cognitive and Neural Mechanisms of the Removal and Inhibition of Information in Memory Sunday, March 26, 1:30PM - 3:30PM (PT) Grand Ballroom A
3 The Data Science Future of Cognitive Neuroscience Sunday, March 26, 1:30PM - 3:30PM (PT) Seacliff Room
4 Beyond the Brain: Tracking Mind and Brain through the Periphery Sunday, March 26, 1:30PM - 3:30PM (PT) Grand Ballroom B/C
5 Can't Stop Won't Stop: Statistical Learning Persists through Development, Brain Damage and Competing Demands Monday, March 27, 10:00AM - 12:00PM (PT) Grand Ballroom B/C
6 Neurocomputational Accounts of Agency Monday, March 27, 10:00AM - 12:00PM (PT) Seacliff Room
7 Events and Their Boundaries: A Developmental Perspective Monday, March 27, 10:00AM - 12:00PM (PT) Bayview Room
8 From Observed Experience to Concepts: Multiple Views on the Mechanisms of Concept Formation in the Human Brain Monday, March 27, 10:00AM - 12:00PM (PT) Grand Ballroom A
9 In Memoriam Leslie G. Ungerleider (1946-2020) Tuesday, March 28, 1:30PM - 3:30PM (PT) Grand Ballroom B/C
10 The Brain is Complex: Have We Been Studying It All Wrong? Tuesday, March 28, 1:30PM - 3:30PM (PT) Grand Ballroom A
11 Altered States of Cognition: The Acute and Persisting Consequences of Psychedelic Drugs on Cognition Tuesday, March 28, 1:30PM - 3:30PM (PT) Bayview Room
12 Methodological Advances in the Study of Autobiographical Memory Tuesday, March 28, 1:30PM - 3:30PM (PT) Seacliff Room


Stable Perception in the Wavering Brain - Reconciling Perceptual Stability with Dynamic Neuronal Representations

Sunday, March 26, 2023, 1:30 PM - 3:30 PM (PT), Bayview Room

Chair: Leon Deouell, The Hebrew University of Jerusalem

Speakers: Laura Nicole Driscoll, Adam Kohn, Leon Deouell, Rafael Malach

A central goal of cognitive neuroscience is to find forms of isomorphism between behavior, including perception and action, and neural activity. A critical challenge to this endeavor is the finding that the same perceptual experience or the same motor behavior can be associated with varying neural signals, a phenomenon known as representational drift. For example, neural signals quickly habituate upon repetition of a stimulus, or when the stimulus doesn't change, yet the perception does not seem to dim. Well-learned, routine motor processes can be associated with differing neural signals from day to day. These glaring dissociations between subjective perception and overt action on the one hand and neural signals on the other raise fundamental questions about the nature of neural representation. If the same perceptual experience or the same behavior is associated with different neural signals at different times of measurement, in what way are the neural signals representative of the behavior? The question has major implications for diverse fields, from understanding the neural correlates of conscious awareness to the development of reliable brain-computer interfaces that can infer intentions from neural signals and activate external devices. In this symposium, we bring together findings and models from mice to humans, from single units to intracranial LFPs, across the brain and across temporal scales from sub-seconds to days, in an attempt to reconcile variability with stability.

TALK 1: Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex

Laura N. Driscoll, Stanford University

Neuronal representations change as associations are learned between sensory stimuli and behavioral actions. However, it is poorly understood whether representations for learned associations stabilize in cortical association areas or continue to change following learning. We tracked the activity of posterior parietal cortex neurons for a month as mice stably performed a virtual-navigation task. The relationship between cells' activity and task features was mostly stable on single days but underwent major reorganization over weeks. The neurons informative about task features (trial type and maze locations) changed across days. Despite changes in individual cells, the population activity had statistically similar properties each day and stable information for over a week. As mice learned additional associations, new activity patterns emerged in the neurons used for existing representations without greatly affecting the rate of change of these representations. We propose that dynamic neuronal activity patterns could balance plasticity for learning and stability for memory.

TALK 2: Communication Subspaces: A Mechanism for Flexible Interareal Signaling

Adam Kohn, Albert Einstein College of Medicine

Perception fluctuates over time and depends on context and goals, even for a fixed sensory input. One hypothesis is that such fluctuations in perceptual experience are due to altered interactions between the many cortical areas involved in sensory processing. I will present recent work that explores how different early visual cortical areas interact under different stimulus and behavioral conditions. In our approach, we leverage simultaneous measurement of neuronal population spiking activity in different areas of macaque cortex to understand principles of inter-areal communication. Across experimental conditions, we find that inter-areal interactions occur in a low-dimensional subspace—the communication subspace—of the measured population activity. The communication subspace defines which neuronal population activity patterns are effectively propagated to downstream targets, and which remain private to the source areas. Changes in neural activity that fall in the private subspace are not propagated between areas and thus would not be expected to affect perception. We propose that communication subspaces offer a new framework for understanding flexible and dynamic functional interactions between different brain areas and for elucidating which patterns of neural activity contribute to perceptual experience.

TALK 3: Stable Conscious Experience is Represented by Time-Invariant Neural Patterns

Leon Y. Deouell, The Hebrew University of Jerusalem

Everyday experience contains many instances where sensory input is stationary, and so is our conscious experience of it. However, neural activity decreases dramatically shortly after the onset of a new stimulus, even when the stimulus remains unchanged. This glaring dissociation between subjective experience and neural signals presents a critical challenge for any theory of the neural basis of conscious experience. We used intracranial recordings from ten human patients viewing images from various categories presented for multiple durations. Our findings reveal that the distributed neural representation of categories and exemplars in posterior-sensory regions remains sustained and stable for the duration of the stimulus, mirroring the stability of conscious experience, despite a five-fold attenuation of neural activity. Furthermore, we find transient frontoparietal representations linked to stimulus onset, despite an absence of behavioral responses. Together, our results provide strong evidence that sustained perception is maintained by time-invariant spatial patterns in sensory regions, while frontoparietal regions reflect perceptual changes. We suggest that the content of a visual experience is embedded in a lower-order ‘experience subspace’, or manifold, within the high-dimensional space of neuronal activity. The evident dissociation between the variability of response magnitudes and the invariance of distributed representations argues for a shift from an activation-based to an information-based understanding of perception and experience. The findings directly inform recent large-scale attempts to test theories of consciousness and support elements of both the Global Neuronal Workspace and Integrated Information theories of conscious experience.

TALK 4: Relational Coding Underlies the Stability of Perceptual Content in Human Visual Cortex

Rafael Malach, Weizmann Institute of Science, Israel

A fundamental question in visual neuroscience concerns the neuronal coding of visual percepts. It is generally assumed that response magnitude is central to this process. This assumption is supported by diverse experiments such as binocular rivalry and backward masking. However, this notion was challenged by findings from intracranial recordings conducted in patients. These studies demonstrated that the absolute magnitude of neuronal activations to stationary visual stimuli declines rapidly and dramatically, within about a second, during which perception remains highly stable. While this decline in activation poses a serious challenge to the presumed link between activation magnitude and perception, it offers a unique opportunity to identify the neuronal coding that underlies extended perception. Here we report on an intracranial EEG (iEEG) study in which recordings were made from 2,571 contacts in visual and fronto-parietal areas in 13 patients undergoing clinical evaluation for epilepsy. The patients’ task was to memorize and recall vivid images of famous faces and places presented for 1.5 seconds each. We find that while activation (high-frequency broadband) magnitude rapidly declined to about 25% of its initial level, the profiles of activation patterns (relative activations across contacts) and their relational structures, i.e., the similarity distances between these profiles, were sustained in high-order visual areas. These results are compatible with the hypothesis that perceptual content is coded by the profiles of neuronal patterns and their similarity relations, rather than by overall activation magnitude, in human visual cortex. The role of activation magnitude (local ignitions) in perception remains to be elucidated.


'Stop Thinking About It!': Cognitive and Neural Mechanisms of the Removal and Inhibition of Information in Memory

Sunday, March 26, 2023, 1:30 PM - 3:30 PM (PT), Grand Ballroom A

Chair: Marie Banich, University of Colorado Boulder

Speakers: Marie Banich, Sara Festini, Lili Sahakyan, Michael Anderson

How does one suppress a thought or a memory? This question has been vexing, as it is difficult to know how to detect the disappearance of a memory trace. However, research in cognitive science and cognitive neuroscience is now shedding light on this issue. From that perspective, the goal of this symposium is to explore recent theoretical ideas and empirical findings that provide insights into how information is removed or inhibited in memory. Evidence from behavioral approaches, functional neuroimaging, and EEG will be used to address this important issue. The first two talks will focus on these mechanisms with regard to working memory, while the second two will focus on mechanisms with regard to long-term memory. Distinct methods by which information may be removed, such as replacement vs. suppression, will be considered. These recent studies suggest an important contrast: removal of information from working memory requires a shift of attention away from a thought and/or the likely inactivation of a representation, whereas removal from long-term memory may be accomplished by shutting down access to, or processing in, the hippocampus and/or modality-relevant regions of posterior cortex. Finally, the implications of these findings for psychopathological states, such as recurrent and intrusive (negative) thoughts, will be considered.

TALK 1: Removal of Information from Working Memory Via Three Distinct Mechanisms

Marie Banich, University of Colorado Boulder

How can we as scientists determine when someone has stopped thinking of something? Said differently, how can we find an experimental signature of a thought that no longer exists? In this talk I will discuss our behavioral and neuroimaging research that addresses this question to elucidate the cognitive control mechanisms that allow information in working memory to be actively removed. Our approach, which marries functional neuroimaging and machine learning techniques (including multi-voxel pattern analysis) with behavioral experiments, has been able to follow the trace of a thought and then verify that it has indeed been removed. Moreover, this research provides evidence of at least three distinct ways of removing information from working memory: by replacing it with something else, by specifically targeting it for suppression, and by clearing the mind of all thought. In this talk, I will a) discuss the neural mechanisms that enable each of these three operations, b) provide evidence regarding the time course of each removal operation, and c) elucidate the consequences of these removal operations for the encoding of new information, which is critical for new learning. I will briefly discuss the implications of this work for psychological and psychiatric disorders, many of which are characterized by recurrent or intrusive thoughts that individuals cannot remove from the current focus of attention.

TALK 2: Directed Forgetting within Working Memory: Evidence for Successful Removal and an Active Mechanism

Sara Festini, University of Tampa

Directed forgetting is one method to voluntarily remove information from working memory. After encoding a small number of memoranda, participants are instructed to forget certain items and to remember other items. Following a short delay of several seconds, participants perform an item-recognition task to indicate whether or not a presented probe was a to-be-remembered item. Here, I discuss a series of experiments that utilized this Sternberg item-recognition working memory task with a directed forgetting manipulation, to evaluate the consequences and mechanisms of directed forgetting within working memory. In some experiments, semantic interference was induced by presenting associatively related memory lists. In other experiments, familiarity-based proactive interference was induced by presenting probe items that had been studied on the prior working memory trial instead of the current trial (i.e., recent probes). Results indicated that directed forgetting cues successfully reduced the level of both semantic interference and proactive interference for to-be-forgotten information relative to both to-be-remembered information and a control encode-only condition. Moreover, an attention-demanding articulatory suppression manipulation disrupted directed forgetting efficacy, providing evidence for an active mechanism. Finally, similar directed forgetting efficiency was observed regardless of whether simultaneously encoded to-be-remembered items were present or absent. Collectively, these results demonstrate that individuals are capable of successfully removing information from working memory following directed forgetting instructions, resulting in weaker memory representations and diminished memory interference, and that this intentional forgetting requires attention-demanding executive resources that can be disrupted by secondary tasks. Converging neuroimaging evidence supporting active removal/inhibition will be highlighted.

TALK 3: Disentangling Cognitive and Neural Mechanisms of Intentional Forgetting in Long-term Memory

Lili Sahakyan, University of Illinois at Urbana-Champaign

The focus of this talk will be the cognitive and neural mechanisms that are engaged when trying to get rid of unwanted memories in long-term memory. The studies presented use the list-method and item-method variants of the directed forgetting procedure, which demonstrate that we have a remarkable ability to intentionally forget unwanted information. The critical question is how we do it. Our studies indicate that we engage different strategies to accomplish intentional forgetting, including mental replacement of the to-be-forgotten information via thought substitution, or direct suppression of encoding. I will present new evidence from EEG studies indicating that these strategies rely on different neural mechanisms; namely, encoding suppression induces prefrontally mediated inhibition, whereas thought substitution is accomplished through a contextual shift/unbinding mechanism. In addition, using a cross-task design, we related the stop-signal task to a variant of the directed forgetting task that involved not only “forget” cues but also “thought-substitution” cues. The results showed that performance in the stop-signal task was correlated with the magnitude of encoding suppression via forget cues, but not via thought-substitution cues. To address the long-term implications of different forgetting strategies, I will present a study of delayed testing across multiple time points comparing forget cues and thought-substitution cues. Finally, I will present evidence from fMRI and eye-tracking studies indicating that both inhibitory suppression and contextual shift/unbinding mechanisms are engaged in intentional forgetting in long-term memory. Collectively, these studies underscore that intentional forgetting is complex and draws on multiple neural and cognitive mechanisms.

TALK 4: Active Forgetting of Unwanted Memories via Global Hippocampal Suppression

Michael Anderson, Cambridge University

Being able to remember the past is a good thing, except when the past is unwelcome. When reminded of unpleasant events, people often seek to exclude the unwanted memory from awareness by stopping episodic memory retrieval. A large body of work indicates that intentionally suppressing episodic retrieval reduces hippocampal activity via control mechanisms mediated by the lateral prefrontal cortex and leads to the forgetting of the suppressed events. Here I present evidence that when people suppress retrieval given a reminder of an unwanted memory, they are far more likely to forget unrelated “bystander” experiences from periods of time surrounding retrieval suppression; in essence, retrieval suppression induces an “amnesic shadow” for nearby memories. This amnesic shadow follows a dose-response function, becomes more pronounced after practice suppressing retrieval, exhibits characteristics indicating disturbed hippocampal function, and is predicted by reduced hippocampal activity. Strikingly, any memory activated near in time to retrieval suppression—irrespective of whether people are aware of its reactivation—becomes vulnerable to disruption by the amnesic shadow. Retrieval suppression reduces hippocampal activity via GABAergic inhibition, broadly compromising hippocampal encoding, consolidation, and retrieval processes. Taken together, these findings indicate that people can disrupt unwelcome memories, and that this ability is accomplished by inhibitory control processes that interrupt hippocampal function, mimicking organic amnesia. These dynamics may contribute to significant memory deficits that often arise in the aftermath of trauma.


The Data Science Future of Cognitive Neuroscience

Sunday, March 26, 2023, 1:30 PM - 3:30 PM (PT), Seacliff Room

Chair: Bradley Voytek, UC San Diego

Speakers: Justine Hansen, Ellie Beam, Michael Hawrylycz, Cory Inman

Cognitive neuroscience is rapidly changing, increasingly moving towards ever larger and more diverse datasets analyzed using increasingly sophisticated methods. There is a strong need for cognitive neuroscientists who can think deeply about problems that incorporate information from a wide array of domains including psychology and behavior, cognitive science, genomics, pharmacology and chemistry, biophysics, statistics, and AI/ML. With its focus on combining many large, multidimensional, heterogeneous datasets, data science provides a framework for achieving this goal. Determining what data one needs, and how to effectively combine datasets, is a daunting process. For example, a neural data scientist might be tasked with combining demographic information and multiple cognitive and behavioral measures from individuals. From those same people we might also collect biometric information, motion capture, and/or eye-tracking data. We might also collect structural and functional brain imaging data. We must then contextualize our results within the broader neuroscientific literature (>3,000,000 papers), as well as understand how our neuroimaging results relate to other domains such as human brain gene expression patterns, electrophysiology, and so on. All the above data types are very different: continuous and ordinal, time-series, video and images, graphs, spatial, high-dimensional categorical / nominal, and unstructured natural language. In this Symposium we will discuss approaches to multimodal, heterogeneous data integration. We will focus on appropriate methods for aggregating and synthesizing heterogeneous cognitive neuroscience data, as well as how to leverage large, open datasets to better contextualize results within the larger neuroscientific framework.

TALK 1: Neuromaps: Structural and Functional Interpretation of Brain Maps

Justine Hansen, McGill University

The development of advanced neuroimaging techniques has made it possible to annotate the brain in increasingly rich detail. In parallel, the open science movement has given researchers from diverse disciplines access to an unprecedented number of human brain maps. Integrating multimodal, multiscale human brain maps is necessary for broadening our understanding of brain structure and function. However, data are often shared in disparate coordinate systems, precluding systematic and accurate comparisons. Furthermore, no data sharing platforms integrate standardized analytic workflows. Here we introduce neuromaps, an open-access Python software toolbox for contextualizing human brain maps. Neuromaps currently features over 40 curated brain maps, including genomic, neuroreceptor, microstructural, electrophysiological, developmental, and functional ontologies. The toolbox implements functionalities for generating high-quality transformations between four standard neuroimaging coordinate systems (MNI152, fsaverage, fsLR, CIVET), and can parcellate vertex- and voxel-level data according to a specified brain atlas. Robust quantitative assessment of map-to-map similarity is enabled via a suite of spatial autocorrelation-preserving null models, including permutation-based and generative models. Neuromaps combines open-access data with transparent functionality for standardizing and comparing brain maps, providing a systematic workflow for comprehensive structural and functional annotation enrichment analysis of the human brain. Collectively, neuromaps represents a step towards creating systematized knowledge and rapid algorithmic decoding of the multimodal multiscale architecture of the brain.

TALK 2: Data-Driven Mapping and Validation of a Framework for Human Brain Function

Ellie Beam, Stanford University

The neuroscience community has yet to propose a consensus framework for brain function that meets the quality standards required for applications in mental healthcare. In this talk, I describe a neuroinformatics approach to engineering and validating candidate frameworks for brain function. A data-driven framework for domains of brain function was derived by synthesizing the texts and brain coordinate data of nearly 20,000 human neuroimaging articles. The resulting domains characterize several novel brain circuits that are absent from the conceptually dominant expert-determined frameworks in neuroscience and psychiatry. The data-driven framework was validated through three quantitative measures of organizational principles, in many cases outperforming models for the expert-determined frameworks. First, the data-driven domains are shown to be reproducible, meaning that their structure-function links replicate in held-out articles. Second, they are modular, partitioning the literature into subfields which are internally consistent and distinct from one another. Third, they are generalizable, comprising brain structures and functions on the domain level that are similar to those reported on the level of single articles. Taken together, the data-driven framework offers a comprehensive and validated characterization of human brain function seen through the lens of functional neuroimaging. This neuroinformatics approach may be extended in future work to synthesize and compare knowledge across neuroscience subfields.

TALK 3: The BRAIN Initiative Cell Census and Cell Atlas Networks

Michael Hawrylycz, Allen Institute for Brain Science

The BRAIN Initiative Cell Census Network (BICCN) is an integrated network of centers and laboratories, data archives, and the Brain Cell Data Center (BCDC), with the goal of systematic multimodal brain cell type profiling and characterization in mouse and human. The BICCN data ecosystem provides extensive data, tools, and resources for the analysis of cell types and circuits. We give an overview of this environment and illustrate the use of BICCN resources for accessing data and tools, highlighting the value of these data for cognitive neuroscience. Studies of homology mapping of cell types across mouse and human have led to the formation of a new consortium, the BRAIN Initiative Cell Atlas Network (BICAN), in which this work is now being extended to human and non-human primates. The combined BICCN/BICAN infrastructure and tools provide an important resource for the exploration and analysis of cell types in the brain, with important implications for research in cognitive neuroscience.

TALK 4: CogNeuro GO: Capturing Synchronized Neural and Experiential Data in the Wild

Cory Inman, University of Utah

The ultimate goal of neuroscience is to explain real-world behavior in terms of the activities of the brain and to translate these discoveries into therapeutic approaches that can help those suffering from neural disorders. Our ability to understand and treat neurological disorders depends on our knowledge of how the human brain operates, not only in controlled laboratory experiments, but also in experiments that capture the complexity, scale, and functional characteristics of real-world behaviors. Reaching these goals requires an expansion of current cognitive neuroscience paradigms to study naturalistic behaviors synchronized with neural recordings and close collaboration with computer and neural data scientists to best understand these multimodal datasets. The explosion of commercially available wearable sensors, the rapid development of smart phone and extended reality technology, and the advent of implanted neural recording technologies provide exciting new opportunities for doing human neuroscience in the wild. Critically, given the unique opportunities afforded by these developments, new naturalistic paradigms must be considered, and some theoretical questions might be best served by considering whether laboratory experiments have fully captured the essential characteristics of real-world behaviors. As an example, I'll discuss our recent work in which patients with implanted neural recording systems (i.e., NeuroPace RNS) completed a navigation and episodic memory task around a college campus while medial temporal lobe (MTL) activity was recorded and synchronized with a variety of wearable sensors (video cameras, eye tracking, motion tracking, GPS, autonomic physiology, etc.) to examine how human MTL activity is modulated by real-world environmental contexts and experiences.


Beyond the Brain: Tracking Mind and Brain through the Periphery

Sunday, March 26, 2023, 1:30 PM - 3:30 PM (PT), Grand Ballroom B/C

Chair: Freek van Ede, Vrije Universiteit Amsterdam

Speakers: Giacomo Novembre, Shlomit Yuval Greenberg, Benjamin O. Rangel, Baiwei Liu

Cognitive neuroscience has naturally studied the cognitive brain by looking inward, into the brain itself. Yet, as the brain is fundamentally intertwined with the body, the periphery itself can provide valuable complementary sources of evidence. In this symposium, we bring together multiple recent demonstrations of peripheral "fingerprints of mind" that have opened new windows into our understanding of mind and brain. We illustrate the power of the general approach of looking "beyond the brain" across domains ranging from perception and action, to timing, inhibitory control, attention, and memory, thereby speaking to the diverse interests of the cognitive neuroscience community. We bring together telling examples from saliency processing (Novembre), temporal anticipation (Yuval-Greenberg), cognitive action control (Wessel), and internally directed attention (van Ede). We show how both peripheral inactivity (in bodily and oculomotor "freezing responses") as well as overt activity (in force-output responses and spatial microsaccade biases) can uncover new insights that inform cognitive neuroscience theory and implicate involvement of specific neural circuits and computations. Looking beyond the specific insights themselves, the symposium collectively aims to raise awareness for the utilisation of peripheral fingerprints as a currently largely unseized opportunity that uniquely complements traditional brain-imaging approaches.

TALK 1: Sensory Saliency at the Tip of Your Fingers: Evidence from Isometric Force Recordings

Giacomo Novembre, Italian Institute of Technology

Survival in a fast-changing environment requires animals not only to detect unexpected sensory events, but also to react. Such salient events, regardless of their sensory modality, evoke a large electrical brain response, dominated by a widespread negative-positive potential (N-P complex). I will first show that a pattern similar to the N-P complex can also be observed by recording isometric force output, simply measured by asking participants to exert force while holding a transducer with two fingers. Next, combining non-invasive (electroencephalography) and invasive (local field potentials) electrophysiological measures with simultaneous force recordings, I will show how (i) central and peripheral responses to salient events are strongly coupled, and that this coupling is (ii) mediated by cortical motor regions, (iii) well preserved across primate evolution and (iv) not reflexive but adaptive, i.e. sensitive to contextual changes in the environment. Finally, I will propose a putative neural network that might explain these findings and even reconcile them with others presented from other speakers of the symposium. These results reconceptualize the significance of the N-P complex, suggesting that we should look at it as a correlate of a reactive process rather than a merely perceptual one, preparing an individual for subsequent appropriate behavior. More broadly, these findings demonstrate that changes in isometric force – notably measured from the peripheral nervous system – can be a window into the neural network responsible for detecting salient environmental events.

TALK 2: Eye Movements as a Window on Temporal Expectations

Shlomit Yuval-Greenberg, Tel Aviv University

Temporal expectation, the ability to anticipate the timing of events based on temporal regularities in the environment, is typically assessed by measuring brain responses or by collecting behavioral measures such as reaction time (RT) and accuracy rate. But brain markers require extensive preprocessing and are often difficult to interpret, and RTs and accuracy rates are measured only after expectations have already been formed, and therefore provide only a retrospective estimate. In contrast, eye movements are a continuous behavior that can provide reliable and interpretable information on fluctuations of cognitive states across time, and specifically those related to temporal expectation. In a series of studies, we have shown that eye movements constitute a window on temporal expectation processes. First, we showed that eye movements are inhibited prior to anticipated visual targets. This effect was found for targets that were anticipated either because they were embedded in a rhythmic stream of stimulation or because they were preceded by an informative temporal cue. Second, we showed that this effect is not specific to the visual modality but is also present for temporal orienting in the auditory modality. Last, we ruled out alternative explanations and showed that this effect is directly linked to the estimation of temporal expectations. We conclude that pre-target inhibition of eye movements is a reliable correlate of temporal expectations of various types and modalities. More generally, these findings suggest that eye movements are a powerful tool for understanding internal cognitive processes, including temporal expectations.

TALK 3: Peripheral Motor Measures as a Window into the Fronto-Subthalamic Inhibitory Control Circuit

Benjamin O. Rangel, University of Iowa

Inhibitory control is one of the primary executive functions the human brain uses to overcome habitual responses and implement goal-directed behavior. Changes in cortico-spinal excitability (CSE), measured with transcranial magnetic stimulation (TMS) and EMG recorded at peripheral muscles, offer a direct physiological measurement of the effects of the brain’s inhibitory control circuitry. When an action is stopped, CSE of the underlying muscles is suppressed. Notably, however, the effects of inhibition on the cortico-motor system are even broader, as task-unrelated muscles also show suppressed CSE during stopping. Neuroanatomical theories have suggested that this is due to the involvement of the subcortical subthalamic nucleus (STN), which is broadly connected to the output nuclei of the basal ganglia that ultimately inhibit the motor system. In this talk, I will present recently published work (Wessel et al., Current Biology, 2022) in which we measured CSE in patients with implanted STN deep-brain stimulators (DBS). DBS allows a causal manipulation of the STN. When the STN was disrupted via DBS, non-selective CSE suppression was no longer observed. This provides the first causal evidence for the involvement of the STN in CSE suppression during stopping. In the second part of the talk, I will present new work using peripheral isometric force recordings, which provide a novel method of quantifying the non-selective effects of action-stopping. Force recordings address many of the known shortcomings of the TMS-based CSE method. Together, this work will show how measurements of physiological changes in the peripheral motor system can be used to interrogate the subcortical cognitive control circuitry.

TALK 4: Microsaccades as a window into the role of the brain's oculomotor system in internal selective attention

Baiwei Liu, Vrije Universiteit Amsterdam

Recent studies from our lab have uncovered how internally directed selective attention is associated with directional biases in small eye movements known as microsaccades – extending the role of the brain’s oculomotor system to internal orienting of visual attention. In my talk, I will start by highlighting this finding. I will then go on to show how we have started to utilise directional biases in microsaccades to track attentional coding inside the brain’s oculomotor system. Finally, I will discuss the relation between microsaccades and neural modulations by internal selective attention, showing how attentional modulations in microsaccades are correlated with, but not necessary for, neural modulations by covert attention. This will highlight the utility of microsaccades as a non-invasive readout of the brain’s oculomotor system and its contribution to internal shifts of attention.


Can't Stop Won't Stop: Statistical Learning Persists through Development, Brain Damage and Competing Demands

Monday, March 27, 2023, 10:00 AM - 12:00 PM (PT), Grand Ballroom B/C

Chair: Laura Batterink, University of Western Ontario

Speakers: Zhenghan Qi, Amy Finn, Laura Batterink, Brynn Sherman

Regularities are ubiquitous in the environment. As we navigate our daily lives, we repeatedly encounter similar clusters of items, from the arrangement of objects in a room to co-occurring syllables in spoken language. We humans are capable of automatically extracting these regularities simply through passive exposure to input, a remarkable ability known as statistical learning. Statistical learning is involved in virtually all domains of cognition, but despite its importance as a cognitive construct, we still lack a clear understanding of how the brain learns patterns from incoming input, how learning mechanisms change with brain development, and how statistical learning differs from other forms of learning and memory. In this symposium, we discuss recent progress in understanding the neurocognitive mechanisms supporting statistical learning. Across the four presentations, we cover a number of central questions: (1) how ongoing brain development impacts and interacts with what is learned via statistical learning; (2) the degree to which statistical learning across different domains recruits domain-general versus domain-specific mechanisms; (3) whether the hippocampus plays a necessary role in statistical learning; and (4) how statistical learning diverges from -- and may sometimes compete with -- other long-term memory functions such as episodic encoding. Taken together, our findings demonstrate that statistical learning continues to operate in the face of developmental changes, hippocampal damage, and competing memory demands. We speculate that the strikingly diverse and widespread brain regions involved in statistical learning may contribute to the powerful and robust nature of this learning mechanism.

TALK 1: The Neurobiology of Auditory Statistical Learning is More Domain-Specific Early in Life

Zhenghan Qi, Northeastern University

Decades of behavioral research have demonstrated robust statistical learning (SL) abilities in the auditory domain across development. However, whether similar or different neurobiological learning mechanisms underlie SL in children and adults remains unclear. Using a triplet-pattern learning paradigm, we previously reported that although children and adults show similar performance in offline triplet recognition, children show faster learning during the exposure phase (Authors, 2022). In the current study, we compared the engagement of the domain-specific language networks and the domain-general attention network during SL of speech syllable patterns and monotone patterns. In 27 adults (mean age = 21.0) and 25 children (mean age = 8.6), we found that language networks, defined functionally for each participant, were sensitive to the embedded patterns in the speech syllable sequence, but to a greater degree in children than in adults. As expected, language networks were not sensitive to tone SL in either group. Adults’ dorsal attention networks showed sensitivity to embedded patterns in both syllable and tone sequences, while children’s dorsal attention networks showed sensitivity only to patterns in tone sequences. Interestingly, however, for syllable SL, greater activation in the attention network was associated with greater activation in the language network in children. In contrast, no such relationship was observed in adults. These results suggest that children’s developing language networks in the brain are still shaped by incoming language inputs, while adults’ short-term plasticity is reflected more as a changing mental state with flexible allocation of domain-general attentional resources.

TALK 2: The Developing Brain Represents Specific and Group Level Regularities Differently

Amy Finn, University of Toronto

Recent work shows that even though children and adults show robust statistical learning, they do not form the same memories of their repeated experiences: adults remember both general and specific aspects while children remember just the specific aspects. To understand how these memory differences are related to ongoing neural development, we had 9-10-year-old children and adults complete a visual statistical learning task during fMRI. In response to structured versus random sequences, adults engaged middle frontal regions more than children, while children activated posterior hippocampus more than adults. These differences reveal shifts in the neurobiology of statistical learning that are consistent with the ongoing progression of brain development, both cortically and within the hippocampus. Critically, we additionally examined the representational structure of statistical memories, and found that while adults represented general group information in the vmPFC—consistent with where one might expect more schematic or general memories to be stored—children represented this information in the earlier developing IFG and posterior hippocampus. In the posterior hippocampus only, children showed a unique signature of general group knowledge in which items in the same statistical groups came to be represented as less similar following learning, rather than more. Together these data reveal age-related differences in the representation of general information after statistical learning, and suggest that these differences are biologically based. While the adult neurobiology underlying statistical learning may not be fully available to children, learning persists elsewhere in the brain, with consequences for how that same information is represented.

TALK 3: Statistical Learning does not Require the Dentate Gyrus

Laura Batterink, Western University

Statistical learning refers to the gradual process of extracting regularities across experiences. In contrast, pattern separation refers to the creation of distinct, separate representations of similar experiences, allowing us to distinguish events with overlapping features. Although statistical learning and pattern separation have seemingly opposing goals, both have been linked to hippocampal processing. To account for this puzzle, it has been proposed that there may be functional differentiation within the hippocampus, such that the trisynaptic pathway (entorhinal cortex > dentate gyrus > CA3 > CA1) supports pattern separation, whereas the monosynaptic pathway (entorhinal cortex > CA1) supports statistical learning. To test this hypothesis, we investigated the behavioral expression of these two processes in patient BL, an individual with highly selective bilateral lesions in the dentate gyrus that presumably disrupt the trisynaptic pathway. We obtained multiple behavioural assessments of pattern separation as well as tests to assess both implicit and explicit expressions of statistical learning. As expected based on prior work, patient BL showed significant deficits in pattern separation, as well as on an explicit, rating-based measure of statistical learning. Critically, and in contrast, BL showed intact statistical learning on an implicit measure as well as a familiarity-based recognition measure of statistical learning. Together, these results suggest that dentate gyrus integrity is critical for high-precision discrimination of similar inputs, but not for the implicit expression of statistical regularities in behaviour. Our findings offer novel support for the view that pattern separation and statistical learning rely on distinct neural mechanisms within the hippocampus.

TALK 4: Learning from Abstract Regularities in the Hippocampus and Visual Cortex

Brynn Sherman, University of Pennsylvania

The human brain is highly sensitive to structure. We rapidly extract spatial and temporal regularities via statistical learning and exploit learned regularities to make predictions about upcoming experiences. However, our experiences are not fully regular; the stable aspects of experience (e.g., sequences of landmarks on a daily commute) co-exist with idiosyncrasies (e.g., variable traffic patterns across days). Thus, our experiences enable two competing memory representations: statistical learning of abstract structure and episodic encoding of specific idiosyncrasies. To understand how the brain arbitrates between these two competing processes, we tested how participants simultaneously learn both episodic information and abstract (category-level) regularities. Using behavior and fMRI, we found competition between statistical learning and episodic memory, whereby predictions from statistical learning interfered with episodic encoding, a trade-off that may arise within the hippocampus. Using intracranial EEG, we replicated and strengthened evidence for this trade-off by measuring predictive evidence in visual cortex in a time-resolved manner. These findings suggest that learned regularities can influence episodic encoding, but raise the question of how we abstract over episodes to learn higher-order regularities in the first place. In a second intracranial EEG study, we employed neural entrainment to measure learning and found that visual cortex rapidly entrained to both low- and high-level structure, suggesting that the brain rapidly generalizes across episodes to uncover reliable, abstract structure. Together, these data shed light on how we learn across experiences to form statistical models, and how those models in turn regulate the encoding of new experiences into memory.


Neurocomputational Accounts of Agency

Monday, March 27, 2023, 10:00 AM - 12:00 PM (PT), Seacliff Room

Chair: Ivan Grahek, Brown University

Speakers: Hayley Dorfman, Ivan Grahek, Mimi Liljeholm, Daniel Polani

Humans and artificial agents act adaptively by choosing actions which reliably lead to desired outcomes. Reinforcement learning provides a framework for understanding how outcome value is learned and maximized to guide behavior. However, mounting evidence suggests that humans also aim to maximize the amount of control they can exert over their environment. Going beyond pure value maximization, here we consider the role of agency in goal-directed behavior of humans and artificial agents. In this symposium we will present work on the neurocomputational bases of controllability representations in the human brain, and relate those mechanisms to recent progress in artificial intelligence research. The first three talks will provide a mechanistic perspective on controllability learning, and demonstrate that beliefs about controllability guide what actions humans take and how much effort they exert. The last talk will connect this work to research in computer science to show that artificial agents that maximize not only expected reward, but also the amount of control they can exert over their environment, exhibit rich and intuitive behavior, including in interactions with humans. By combining neural data (fMRI & EEG), reinforcement learning, Bayesian inference, and information theory, the studies presented in this symposium provide a path toward a mechanistic account of controllability and agency. The integration of findings from cognitive neuroscience and computer science enhances our understanding of how agency drives goal-directed behavior, and provides an avenue for developing intrinsically motivated autonomous artificial agents.

TALK 1: Neurocomputational Mechanisms of Agency-Modulated Reward Learning

Hayley Dorfman, Harvard University

It is not always possible to know for certain whether our actions caused an outcome. Instead, we must infer how much control we have in any given environment. These inferences about control can influence the extent to which we learn from outcomes. When outcomes are attributed to our own actions, are we more likely to update our subsequent behavior? I will first present a behavioral study demonstrating that the extent to which individuals learn from positive relative to negative outcomes can be directly modulated by beliefs about control. This behavior is best explained by a Bayesian reinforcement learning model that can account for causal inference. These results suggest that asymmetric patterns of learning for positive and negative outcomes can be explained by individual differences in causal inference. I will also present an fMRI study investigating the neural mechanisms underlying how beliefs about control integrate with learning signals. These results show that enhanced activation in frontotemporal regions is associated with increased beliefs about control, suggesting that these regions represent self-related processes. We also show that prediction errors that are modulated by control beliefs are represented in the dorsal striatum, while prediction errors that are not modulated by these beliefs are represented in the ventral striatum. These findings suggest a functional dissociation between learning signals that are computed separately in distinct striatal subregions. We also show that control beliefs enhance effective connectivity from frontotemporal regions to the dorsal striatum, providing evidence for a neural mechanism that integrates control beliefs with learning signals.

TALK 2: Neural Dynamics Underlying Updating and Adaptation to Changes in Performance Efficacy

Ivan Grahek, Brown University

To determine how much cognitive effort to invest in a task, people need to consider whether exerting control matters for obtaining potential rewards. In particular, they need to account for the efficacy of their performance – the degree to which potential rewards are determined by their performance or by independent factors (e.g., random chance). Yet it remains unclear how people learn about their performance efficacy in a given environment. In this talk I will present a series of studies which combine computational modeling with measures of task performance and EEG to provide a mechanistic account of how people learn and dynamically update efficacy expectations in a changing environment, and how they use these expectations to proactively adjust control allocation. In this series of studies, subjects performed incentivized cognitive control tasks while their performance efficacy (the likelihood that rewards are determined by performance or at random) varied over time. We show that people learn efficacy estimates through a neural mechanism similar to the one used to learn from rewards, and that they use this information to adjust how much control they allocate. Using a computational model, we show that these cognitive control adjustments reflect changes in information processing, rather than the speed-accuracy tradeoff. These findings demonstrate the neurocomputational mechanism through which people learn how worthwhile their cognitive efforts are.

TALK 3: Controllability and Goal-Directedness

Mimi Liljeholm, University of California, Irvine

Theories of instrumental behavior distinguish between goal-directed decisions, motivated by a deliberate consideration of the probability and current utility of their consequences, and habits, which are rigidly and automatically elicited by the stimulus environment based on reinforcement history. Generally, while computationally expensive, a goal-directed strategy offers greater levels of flexible instrumental control. A critical aspect of flexible choice, however, is that alternative actions yield distinct consequences: Only then do discrimination and selection between actions allow an agent to flexibly obtain the currently most desired outcome, warranting the processing cost of goal-directed computations. One possibility, therefore, is that a goal-directed decision strategy is deployed when contingencies afford high levels of flexible instrumental control. Moreover, since subjective outcome utilities often change from one moment to the next, flexible instrumental control is essential for reward maximization and, as such, may serve to reinforce and motivate decisions that guide the organism toward high-agency environments. I will present a series of behavioral and fMRI experiments demonstrating that sensitivity to outcome devaluation, a defining feature of goal-directed choice, is greater in environments in which available action alternatives yield distinct outcomes, that participants prefer to make decisions in such high-agency environments, and that these effects are predicted by neural activity in the right supramarginal gyrus. Implications for the optimization of artificial autonomous agents will be discussed.

TALK 4: Inter-Agent Empowerment as Social Incentive and the Three Laws of Robotics

Daniel Polani, University of Hertfordshire

Living beings are distinct from artificial agents in that they need to continuously operate in largely unexplored, novel state spaces. They do not have the opportunity to train extensively in controlled, reproducible, and safe environments. All the more, it is crucial that they can make decisions and take actions quickly and effectively, even if not always strictly optimally. Niche-specific drives and incentives may cover certain aspects of an organism’s requirements, but it is implausible that these would be comprehensive. It has thus been hypothesized that organisms instead employ “intrinsic motivations”: generic drives, determined only by their sensorimotor contingencies, that establish saliency and direct behaviour in novel situations, before immediate concrete tasks that would trigger goal-directed action have crystallized. In this talk I will present work on one such intrinsic motivation, empowerment. Empowerment is an information-theoretic generalization of the controllability/observability of the action-perception loop; it measures how much detectable change (measured in bits) an agent can inject into the environment via the actions she takes. I will present a series of simulation and experimental studies across a variety of agent scenarios, demonstrating that in multi-agent scenarios involving humans (including human-agent companion, assistance, and interaction relationships), empowerment as an incentive mediates behaviours that are surprisingly interpretable in intuitive terms, including operational surrogates for the Three Laws of Robotics.
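[As a brief editorial gloss, not part of the abstract: in the prior literature on empowerment, the quantity described above is standardly formalized as a channel capacity. A sketch of the usual n-step definition, with notation chosen here for illustration, is:]

```latex
% n-step empowerment of the current state s_t: the Shannon channel
% capacity from the agent's next n actions A_t, ..., A_{t+n-1} to the
% resulting sensor state S_{t+n}, maximized over distributions p of
% action sequences. The mutual information I is measured in bits.
\mathfrak{E}_n(s_t) \;=\;
  \max_{p(a_t,\dots,a_{t+n-1})}
  I\bigl(A_t,\dots,A_{t+n-1};\; S_{t+n} \,\big|\, s_t\bigr)
```

[Intuitively, this is the number of bits of change the agent can reliably inject into its future sensor readings through its own actions, matching the verbal description in the abstract.]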


Events and Their Boundaries: A Developmental Perspective

Monday, March 27, 2023, 10:00 AM - 12:00 PM (PT), Bayview Room

Chair: Susan Benear, New York University

Speakers: Susan Benear, Andrei Amatuni, Erika Wharton-Shukster, Sang Ah Lee

The ability to organize a lifetime of episodic memories into a coherent structure relies heavily on grouping our ongoing experience into events separated by boundaries. Adults’ brains show evidence of boundaries between perceived events that align well with behaviorally defined boundaries, which in turn relates to their memory for events. Understanding how event cognition changes across development can provide insight into how events shape perception and memory. In this symposium, we will begin with an investigation into how young children and adults parse and remember events, showing that children who demarcate boundaries similarly to adults have better event memory, and that behavioral boundaries are represented neurally in young children. Next, we will discuss work examining the aspects of events that lead children versus adults to define a boundary, showing that children tend to mark boundaries at semantic shifts, while adults focus on perceptual changes. After this, we will shift to a study evaluating how young children and adults use spatial boundaries to structure their memory, demonstrating that children’s perception of boundaries grows more abstract with development, and that spatial boundaries are represented in the brain and support associative memory in adults. Finally, we will move beyond event boundaries to cover the way children’s and adolescents’ brains represent full event models, revealing that neural response patterns across events in children are similar to those of adults and suggesting that children’s memories are likely shaped by events like adults’ are.

TALK 1: Setting Boundaries: Development of Neural and Behavioral Event Cognition in Early Childhood

Susan Benear, New York University

The ongoing stream of sensory experience is so complex and ever-changing that we tend to parse this experience at “event boundaries”, which structures and strengthens memory. Memory processes undergo profound change across early childhood. Whether young children also divide their ongoing processing along event boundaries, and if those boundaries relate to memory, could provide important insight into the development of memory systems. In Study 1, 4-7-year-old children and adults segmented a cartoon, and we tested their memory. Children’s event boundaries were more variable than adults’ and differed in location and consistency of agreement. Older children’s event segmentation was more adult-like than younger children’s, and children who segmented events more like adults had better memory for those events. In Study 2, we asked whether these developmental differences in event segmentation had their roots in distinct neural representations. A separate group of 4-8-year-old children watched the same cartoon while undergoing an fMRI scan. In the right hippocampus, greater pattern dissimilarity across event boundaries compared to within events was evident for both child and adult behavioral boundaries, suggesting children and adults share similar event cognition. However, a data-driven Hidden Markov Model revealed that boundaries in a different set of regions, the left and right angular gyri, aligned only with event boundaries defined by children. Overall, these data suggest that children’s event cognition is reasonably well-developed by age 4 but continues to become more adult-like across early childhood.

TALK 2: Linking perceptual and semantic predictability to patterns of event segmentation in development

Andrei Amatuni, University of Texas

Children identify different transitions between events, i.e., event boundaries, relative to adults. Here, we test whether developmental differences in event segmentation arise from how participants experience uncertainty when viewing naturalistic video narratives. It has been suggested that neural systems supporting event segmentation in adults do so by monitoring changes in event content, wherein perceptual or semantic changes trigger uncertainty, leading to boundary formation within continuous experience. Here, we use deep neural nets to quantify the predictability of individual movie frames based on preceding frames, using two different models to independently quantify prediction based on perceptual and semantic content. We modeled the perceptual and semantic uncertainty on a moment-by-moment basis and related these computationally-derived measures of uncertainty to participants’ subjective experiences of event boundaries, testing whether age-related differences in event segmentation stem from greater sensitivity to predictive properties of the input. Children (5-12 years old) and adults (N=74) watched five naturalistic movies and pressed a button whenever they perceived an event boundary. Children were less likely to experience event boundaries at perceptually unpredictable moments than adults (p < 0.05), while event boundaries tracking semantic uncertainty decreased with age (p < 0.05). In particular, we find that adults’ estimates of event boundaries minimize semantic belief updating (p < 0.01), consistent with the idea that adults use pre-existing semantic knowledge when encoding novel narrative events. Collectively, these findings indicate that developmental differences in event segmentation arise from how children and adults use perceptual and semantic content to derive expectations during novel event encoding.

TALK 3: Spatial boundaries and the development of episodic memory structure

Sang Ah Lee, Seoul National University

Research has shown that navigation and episodic memory are intimately related and supported by the hippocampus. We suggest that the spatial structures that guide navigation are the same ones that influence the organization of episodic memory. In one study, we found that temporal sequencing of memory was facilitated by spatial information in 3-6-year-old children, and the ability to accurately remember the order of events that crossed a spatial boundary developed later than for those on the same side of the boundary. Furthermore, we found that a 3D wall-like boundary facilitated event sequence memory in 3-year-olds, while 2D lines and a row of objects exerted an influence from about 5 or 6 years of age, demonstrating that children’s representation of boundaries becomes more abstract over development. To gain insight into the neural correlates of boundary-based event memory structure, adult participants performed a conceptually similar, computer-based episodic memory task in the fMRI scanner. We found that boundary-induced memory facilitation (compared to the no-boundary control condition) was not simply due to an improvement in spatial sequence memory but an improvement in episodic binding (e.g., object-space association). Moreover, individual differences in boundary facilitation were reflected in the degree of activation of the hippocampus and scene-selective cortical areas in response to the presence of the spatial boundary. Together, these results suggest that the hippocampal representation of the spatial structure of the environment aids in binding the elements of memory into events and that developmental changes in spatial representations may result in changes in children’s episodic memory organization.

TALK 4: Beyond the Boundaries: Event Model Maintenance Across Development

Erika Wharton-Shukster, University of Toronto

It is no surprise there is growing interest in children’s event segmentation, as parsing experiences has such an important impact on memory (Jie et al., 2021). What is more surprising, however, has been the focus on boundaries at the expense of the events themselves. Event representations, or “event models”, employ schematic knowledge while integrating new information. This is instantiated in increased activation of brain regions like the mPFC (Ezzyat and Davachi, 2011) over the course of an event. However, with knowledge and the brain developing across childhood, might children’s event maintenance change into early adulthood? Might they rely less on their limited knowledge, showing less of this ramping effect in the mPFC or other semantically relevant regions (e.g., the default mode network (DMN); Chen et al., 2016)? Alternatively, might event model maintenance remain stable across childhood despite increasing knowledge? To address these questions, we analyzed fMRI data of a sample of children and adolescents (5–19 years; n=46) as they watched a narrative movie. Using a whole-brain parametric regressor, we analyzed activation increases over the course of events (as defined by independent raters) within the whole sample and across age. As a group, we found increases in regions making up the DMN (mPFC, PCC, precuneus, angular gyrus, and MTL). Importantly, there was no difference in this ramping effect with age. Thus, although children have less experience, they may rely on and integrate it similarly to adults, suggesting that children’s memory may be similarly shaped by events, even if the content differs.


From Observed Experience to Concepts: Multiple Views on the Mechanisms of Concept Formation in the Human Brain

Monday, March 27, 2023, 10:00 AM - 12:00 PM (PT), Grand Ballroom A

Chair: Anna Leshinskaya, University of California, Davis

Speakers: Ella Striem-Amit, Morgan Barense, Anna Leshinskaya, Yanchao Bi

Concepts offer a scientific puzzle: although many concepts are learned from experience, it is notoriously difficult to reduce their meanings to sensory-motor features. This suggests that the mind richly transforms sensory experiences, but this cognitive and neural mechanism remains poorly understood. Our speakers address this problem each from a different angle, offering multiple views on the origins and nature of abstraction. Striem-Amit studies individuals who are born blind or without certain motor effectors, like hands. She demonstrates striking plasticity in some, but not other, parts of the brain's action and vision systems that allows them to adapt to these diverse experiences and thus capture a common meaning independent of sensory-motor parameters. Barense shows how the brain creates multi-modal object representations from component sensory features, finding a uniquely integrative code in the anterior temporal lobe. From this view, object concepts obtain their abstraction by multi-modal binding that creates a whole greater than the sum of its parts. Leshinskaya takes another view on the mechanisms of abstraction: the ability to encode relational structure among objects and events. She shows that by encoding stimuli in terms of their relations, neural representations of those stimuli diverge from the representations of their sensory properties, and this can align with domain specializations. Bi argues that abstraction originates from language. She shows that semantic representations in the anterior temporal lobe are weakened in human participants with language delay resulting from early deafness and late exposure to sign language. This illuminates the influence of language on concept representations.

TALK 1: Inferring Representational Abstraction from Plasticity Patterns in those Born Blind or Without Hands

Ella Striem-Amit, Georgetown University

How abstract are the representations of objects or actions in different parts of our brains? One way to study this question is to test whether representations are indifferent to transformations of their low-level sensory or motor features. For example, in the sensory domain, that entails representing the same content regardless of whether it is presented visually or auditorily; in the motor domain, representing the same action regardless of the body part performing it. However, as many concepts entail complex sensorimotor features, one aspect of the concept may elicit imagery or recall of related aspects. Studying people born without experience in one sensory or motor channel provides an additional step in verifying how tied brain representations are to their low-level features, as people born blind, for example, cannot imagine or recall visual features. We will discuss how studying these populations can be used to probe the question of abstraction across domains. In individuals born without hands, who use their feet to perform everyday actions, we find that association motor areas maintain preferences for actions similar to those of control participants performing the actions with their hands. This suggests abstraction beyond the low-level sensorimotor features. In contrast, primary cortices in these individuals show patterns of plasticity that are not invariant to sensorimotor features. Together, these findings suggest that different patterns of plasticity can be informative in understanding the hierarchy of action abstraction.

TALK 2: Multimodal Object Representations Rely on Integrative Coding

Morgan Barense, University of Toronto

Combining information from multiple senses is essential to object recognition, core to the ability to learn concepts, make new inferences, and generalize across distinct entities. Yet how the mind combines sensory input into coherent multimodal representations – the multimodal binding problem – remains poorly understood. Here, we applied multi-echo fMRI across a four-day paradigm, in which participants learned 3-dimensional multimodal object representations created from well-characterized visual shape and sound features. Our novel paradigm decoupled the learned multimodal object representations from their baseline unimodal shape and sound features, thus tracking the emergence of multimodal concepts as they were learned by healthy adults. Critically, the representation for the whole object was different from the combined representation of its individual parts, with evidence of an integrative object code in anterior temporal lobe structures. Intriguingly, the perirhinal cortex – an anterior temporal lobe structure – was by default biased towards visual shape, but this initial shape bias was attenuated with learning. Pattern similarity analyses suggest that after learning the perirhinal cortex orthogonalized combinations of visual shape and sound features, transforming overlapping feature input into distinct multimodal object representations. These results provide evidence of integrative coding in the anterior temporal lobes that is distinct from the distributed sensory features, advancing the age-old question of how the mind constructs multimodal objects from their component features.
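The integrative-coding idea above can be made concrete with a toy pattern-similarity sketch. The data below are entirely synthetic and this is not the authors' analysis pipeline: it simply contrasts a hypothetical region that codes an object as the additive sum of its shape and sound patterns with one that re-codes the conjunction into a pattern orthogonal to that sum.

```python
# Illustrative sketch (synthetic data, not the study's pipeline): testing
# whether a "whole object" voxel pattern is more than the sum of its parts.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

shape_pat = rng.normal(size=n_voxels)   # unimodal visual-shape pattern
sound_pat = rng.normal(size=n_voxels)   # unimodal sound pattern

# A purely additive region would represent the object as shape + sound.
additive_obj = shape_pat + sound_pat

# An integrative region re-codes the conjunction into a new direction,
# here orthogonalized with respect to the summed feature pattern.
integrative_obj = rng.normal(size=n_voxels)
integrative_obj -= (integrative_obj @ additive_obj) / (additive_obj @ additive_obj) * additive_obj

def pattern_corr(a, b):
    """Pearson correlation between two voxel patterns."""
    return float(np.corrcoef(a, b)[0, 1])

# High similarity to the sum of parts = additive code;
# near-zero similarity = an integrative, orthogonalized code.
print("additive region vs. sum of parts:   ", pattern_corr(additive_obj, shape_pat + sound_pat))
print("integrative region vs. sum of parts:", pattern_corr(integrative_obj, shape_pat + sound_pat))
```

In this toy setup, the additive region correlates perfectly with the combined feature patterns, while the integrative region does not, mirroring the logic of comparing whole-object representations against combined part representations.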

TALK 3: Relational Encoding Drives Sensory Abstraction in Lateral Temporal Cortex

Anna Leshinskaya, University of California, Davis

Across two studies, we investigated the influence of temporal relation learning on neural responses to novel visual events. In Study 1, participants learned that event A was followed by event B but not C, resulting in a correlated neural response to A and B (vs. C). Such relational knowledge is often thought to be represented in the perceptual system, but we report a divergence: areas that represented the A-B relationship were reliably dissociated from areas that represented the visual features of the same stimuli. In Study 2, we showed that event structure associated with novel objects can differentially engage domain-selective areas in the temporal lobe, controlling for shape or motor association. Objects that had been shown to move prior to an event, seeming to cause it, elicited more activity in tool-selective brain areas than objects that moved after an event (all when shown as static pictures following learning). This differential response was absent in areas selective to familiar non-tool objects. Overall, we argue that the ability to learn relations is a fundamental mechanism by which the brain builds representations that diverge from sensory-motor features and move closer to conceptual ones.

TALK 4: The Effects of Early-Life Language Experience in Deriving Neural Semantic Representations

Yanchao Bi, Beijing Normal University

One signature of the human brain is its ability to derive knowledge from language inputs, in addition to nonlinguistic sensory channels such as vision and touch. Information derived from sensory and language inputs is often convergent, and the nature of the resulting neural representations is difficult to disentangle. Does human language experience specifically modulate the way in which semantic knowledge is stored in the human brain? We investigated this question using a unique early-life language-deprivation human model: early deaf adults who were born to hearing parents and thus had delayed acquisition of any natural human language (speech or sign; N = 23), compared with early deaf adults who acquired sign language from birth and served as sensory-experience-matched controls (N = 16). Neural responses in a meaning judgment task with 90 written words that were familiar to both groups were measured using fMRI. Using representational similarity analyses, we found that the early language-deprived group, compared with the deaf control group, showed reduced semantic sensitivity, in both univariate (preference for abstract/nonobject words) and multivariate pattern (semantic structure encoding) analyses, in the left dorsal anterior temporal lobe (dATL). These results provide positive, causal evidence that the neural semantic representation in dATL is specifically supported by language, as a unique mechanism of representing (abstract) semantic space, beyond the sensory-derived semantic representations distributed across other cortical regions.
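The representational similarity logic used in this kind of analysis can be sketched in a few lines. All data below are synthetic and the variable names are placeholders; in the actual study the neural patterns would be dATL responses to the 90 written words and the model dissimilarities would come from semantic measures.

```python
# Minimal RSA sketch, in the spirit of the analysis described above.
# Synthetic data only; names and dimensions are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_words, n_voxels = 90, 150

# Hypothetical model RDM: pairwise dissimilarities in a semantic space.
semantic_features = rng.normal(size=(n_words, 20))
model_rdm = pdist(semantic_features, metric="correlation")

# Simulated neural patterns that partly encode the semantic structure.
neural = (semantic_features @ rng.normal(size=(20, n_voxels))
          + rng.normal(size=(n_words, n_voxels)) * 3.0)
neural_rdm = pdist(neural, metric="correlation")

# "Semantic sensitivity" here = rank correlation between neural and model
# RDMs; the study compared such statistics across the two deaf groups.
rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho = {rho:.2f}")
```

A group with weaker semantic structure encoding would show a lower neural-model RDM correlation, which is the sense in which "reduced semantic sensitivity" is quantified in multivariate pattern analyses.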


In Memoriam Leslie G. Ungerleider (1946-2020)

Tuesday, March 28, 2023, 1:30 PM - 3:30 PM (PT), Grand Ballroom B/C

Chair: Sabine Kastner, Princeton University

Speakers: Helen Barbas, Chris Baker, Peter De Weerd, Marlene Behrmann

The scientific community lost one of its giants, Leslie G. Ungerleider (1946-2020), who for many years was Chief of the Laboratory of Brain and Cognition at the National Institute of Mental Health and an NIH Distinguished Investigator. Leslie united, in one single remarkable scientist, multiple disciplines and the rich knowledge that comes with each. She pursued several different career paths and scientific interests in parallel: as a neuroanatomist, her work provided the bedrock for an understanding of structural connectivity within the visual system and beyond; as a neurophysiologist, her studies laid the foundation for an understanding of the visual system's functions; and as a neuroimager, she translated her rich knowledge of the primate brain to the exploration of structure-function relationships in human cognition. Leslie was one of the founders and an immense driver of the cognitive neuroscience field. In addition to her fundamental contributions to the field of neuroscience, Leslie was an exceptional mentor and shaped the careers of numerous scientists. In this symposium, we will celebrate Leslie Ungerleider's career and life as a scientist. Trainees, collaborators, and colleagues will reflect on her profound contributions to our field, her mentorship, and her role as a model for female scientists.

TALK 1: Primate Parallel Pathways in Visual Areas and Beyond

Helen Barbas, Boston University

Theoretical models provide a basis to unravel the organization of neural circuits from sensory to high-order processing cortices. One model with heuristic value proposed the existence of “two cortical visual systems” with distinct attributes. Bolstered by classical circuit and functional studies, Ungerleider and Mishkin (1982) proposed that one system originates in the ventral part of the primate primary visual cortex (V1), extends through ventral occipital and temporal visual cortices, and has a role in object perception. The other originates in dorsal V1, extends to dorsal occipital and parietal cortices, and is specialized in processing object location. This model provided the basis for the detailed study of anatomic circuits within the visual cortical systems and the discovery of new and specialized visual areas in the primate temporal and parietal regions. These studies laid the foundation for the use of functional neuroimaging approaches in humans. Progress in the study of functional and anatomic pathways through visual sensory areas and high-order association areas showed strong influence from the amygdala, revealing the impact of emotional significance on neural processing for social cognition. The core parallel systems are reminiscent of the classical dual modes of the origin of the cortex and its systematic architectonic variation across systems. The classical and recent models have significant implications for recruitment of multiple parallel systems in neural signal processing, ranging from simple sensory attributes to complex cognition, and for the development and evolution of the cerebral cortex.

TALK 2: The Essential Interaction of Function and Anatomy in Primate Vision

Chris Baker, National Institute of Mental Health

The pioneering work of Leslie Ungerleider led to the division of cortical visual processing into separate dorsal and ventral visual pathways, specialized for spatial and object processing, respectively. I will present an updated view of cortical visual processing and discuss why this framework, which connects human and non-human primate studies and links anatomy, function, and behavior, remains a critical foundation for much of modern cognitive neuroscience. The dorsal and ventral pathways are now understood to be highly recurrent, interactive networks defined by the nature of both input and output targets. The dorsal visual pathway bridges between retinotopic cortex and structures involved in spatial working memory, visually guided action and navigation, while the ventral pathway bridges to at least six cortical and sub-cortical structures involved in long term learning and memory. A third pathway has now been proposed, extending from retinotopic cortex into the banks of the superior temporal sulcus and proposed to be specialized for social perception. Each of these three pathways shows distinct properties relating to retinotopy and motion, reflecting the underlying anatomical connectivity. The complex connectivity along these pathways challenges both the standard hierarchical view of visual processing and the parcellation of cortex into discrete modules. Given our limited knowledge of anatomical connectivity in humans, these frameworks provide a critical grounding for human neuroimaging studies and a set of constraints that inform computational modeling. Ultimately, anatomy forms the scaffolding on which cortical processing develops, providing a critical window into function and its link to behavior.

TALK 3: On the Relevance of Gamma Oscillations for Figure-Ground Segregation in Visual Textures

Peter De Weerd, Maastricht University

The relevance of gamma oscillations for perception has been a matter of debate. We addressed this question by testing whether a mathematically grounded theory of synchronization (the theory of weakly coupled oscillators, TWCO) would be able to predict human texture segregation performance and related neural synchronization data in monkey V1. In monkey V1 experiments, we recorded LFPs and action potentials from two neighboring V1 locations. The physical distance between two recording probes was manipulated to vary lateral coupling strength between the recorded neuronal populations. The local stimulus contrasts driving the recorded populations were varied to manipulate the gamma frequency difference (detuning) between the recorded neuronal populations. As predicted by TWCO, the two recorded neuronal populations synchronized within a triangular region in a detuning by coupling space (the ‘Arnold tongue’). To test the relevance of our findings in monkeys for human perception, a texture was designed with circular Gabor elements. Human participants had to detect a figure defined by a random variation among Gabor element contrasts that was confined within a smaller range than in the background. Human texture segregation plotted as a function of spacing and contrast variation revealed a ‘behavioral Arnold tongue’ in line with predictions from TWCO. Training-induced improvements in performance could also be predicted by TWCO-inspired models fitted with a learning rule. The present work shows that a mathematically specified theory of gamma synchronization predicts interrelated neurophysiological and visual psychophysical observations. The data support a contribution of gamma oscillations to figure-ground segregation for the textures used.
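The locking condition at the heart of TWCO can be illustrated with a minimal two-oscillator simulation. The parameter values below are illustrative assumptions, not those of the V1 experiments: for two weakly coupled phase oscillators, the phase difference obeys phi' = detuning - 2*coupling*sin(phi), so locking occurs when |detuning| <= 2*coupling, tracing the triangular Arnold tongue in detuning-by-coupling space.

```python
# Toy instance of the theory of weakly coupled oscillators (TWCO).
# Parameters are illustrative, not fitted to the monkey V1 data.
import numpy as np

def phase_locked(detuning, coupling, dt=0.001, t_max=20.0):
    """Integrate phi' = detuning - 2*coupling*sin(phi) (Euler method)
    and report whether the phase difference settles to a fixed point."""
    phi = 0.0
    for _ in range(int(t_max / dt)):
        phi += dt * (detuning - 2.0 * coupling * np.sin(phi))
    # Locked if the phase difference has stopped drifting.
    drift = detuning - 2.0 * coupling * np.sin(phi)
    return bool(abs(drift) < 1e-3)

# Inside the tongue (|detuning| < 2*coupling): synchronization.
print(phase_locked(detuning=1.0, coupling=1.0))   # locked
# Outside the tongue: the oscillators keep drifting apart.
print(phase_locked(detuning=3.0, coupling=1.0))   # not locked
```

Sweeping `detuning` and `coupling` over a grid with this function recovers the triangular locked region, which is the model-level analogue of the neural and behavioral Arnold tongues described above.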

TALK 4: Face Patches and Circuitry in Human and Non-Human Inferotemporal Cortex

Marlene Behrmann, University of Pittsburgh

Although the presence of face patches in primate inferotemporal (IT) cortex is well established, the functional and causal relationships among these patches remain elusive. To examine the circuitry as well as the necessary (and perhaps sufficient) neural contributions to face perception, two parallel lines of research are discussed. The first approach includes behavioral and functional MRI studies with humans with normal or impaired face perception. The results demonstrate that, in individuals with congenital prosopagnosia (CP), connectivity to ‘extended’, more anterior regions such as the anterior temporal lobe is compromised structurally and functionally. The complementary second approach involves transient inactivation of face patches with concurrent functional MRI in non-human primates. As with the CP individuals, the results revealed that anterior face patches required input from middle face patches, while the face selectivity in middle face patches arose, in part, from top-down input from anterior face patches. In both investigations, the face patches also evinced activation in response to objects, albeit to a lesser extent. These findings of the causal relationships among the face patches demonstrate that the primate IT face (and object) circuit is organized into multiple necessary feedforward and feedback pathways.


The Brain is Complex: Have we Been Studying it all Wrong?

Tuesday, March 28, 2023, 1:30 PM - 3:30 PM (PT), Grand Ballroom A

Chair: Brad Postle, University of Wisconsin, Madison

Speakers: Luiz Pessoa, Lucina Uddin, John Krakauer, Felipe De Brigard

The human brain is an enormous nonlinear dynamical system. Its roughly 100 billion neurons are massively, recurrently interconnected, and each of its hundreds of trillions of synapses can undergo rapid plastic change. Increasingly, scientists inside and outside the neurosciences have advocated for a radical rethinking of how we approach the study of the brain. It has been argued, for example, that much of contemporary cognitive neuroscience approaches the brain as a 'near-decomposable' system, in which interactions within functional subsystems are much stronger than interactions between them. This approach is doomed to fail if, instead, the brain is interactionally complex, with functions that are emergent from this complexity. But does acknowledgement of the brain's complexity necessitate that we abandon, for example, the study of small groups of brain regions/circuits to understand the neural bases of constructs from cognitive, affective, or social psychology? Does it fundamentally challenge inferences of causality drawn from observing the behavioral effects of lesions or experimental manipulations? These and other questions are debated in a Special Focus in the Journal of Cognitive Neuroscience, to appear early in 2023, organized around a précis of Luiz Pessoa's book The Entangled Brain (2022). This symposium will feature presentations by Pessoa and authors of three of the commentaries in JoCN, followed by a moderated discussion/debate among these four plus additional contributors to the Special Focus (including S. Sadaghiani and B. Wyble). It will conclude by broadening the Q&A to the audience.

TALK 1: The Entangled Brain

Luiz Pessoa, University of Maryland

We need to understand the brain as a complex, entangled system. Why does the complex systems perspective, one that entails emergent properties, matter for brain science? In fact, many neuroscientists consider these ideas a distraction. I discuss three principles of brain organization that inform the question of the interactional complexity of the brain: (1) massive combinatorial anatomical connectivity; (2) highly distributed functional coordination; and (3) networks/circuits as functional units. To motivate the challenges of mapping structure and function, I will discuss neural circuits illustrating the high anatomical and functional interactional complexity typical in the brain, and will consider potential avenues for testing for network-level properties, including those relying on distributed computations across multiple regions. The complex systems perspective has important implications for brain science, including the need to characterize decentralized and heterarchical anatomical–functional organization. It also has important implications for causation, because traditional accounts of causality provide poor candidates for explanation in interactionally complex systems like the brain, given the distributed, mutual, and reciprocal nature of the interactions. Ultimately, to make progress in understanding how the brain supports complex mental functions, we need to dissolve boundaries within the brain—those suggested to be associated with perception, cognition, action, emotion, motivation—as well as outside the brain, as we bring down the walls between biology, psychology, mathematics, computer science, philosophy, and so on.

TALK 2: A Perspective from Network Neuroscience

Lucina Uddin, UCLA

Many agree that, because human brain function is context-dependent and interactionally complex, we should embrace brain networks as the functional units of interest. A more contentious issue for the field, however, is how to define brain networks in ways that will facilitate further discovery. Important questions that I will address include: What constitutes a brain network? What are the spatial topographies of commonly observed brain networks? How many brain networks exist? Can a taxonomy of brain networks be delineated? And what naming conventions and terminology should be adopted to facilitate communication amongst scientists? Although it may be that a rose by any other name would smell as sweet, a brain network that goes by multiple names obscures potential insight into functional brain organization. Building a universal taxonomy of large-scale brain networks will help us make progress toward understanding dynamics, decentralized computation, and emergence in the brain.

TALK 3: Modular Brain, Entangled Argument

John Krakauer, Johns Hopkins University

Considerations of the brain as a complex dynamical system often raise dichotomies: reductionism vs. emergence, network vs. region, heterarchy vs. hierarchy, interactivity vs. decomposability, and entangled vs. modular. When considering these, it is important to see what is really at stake conceptually. A particular problem with these dichotomies, when raised in this context, is that it can be tacitly assumed, for each binary opposition, that one of its elements is a priori better than the other. This assumption, in turn, can lead to an agenda to unseat a human-centric hierarchical view of the mind and the mental that hinges on psychological words, such as cognition, thinking, reasoning, planning, attention and emotion. I will argue that what is problematic with this progression is that it conflates the modular with the psychological. Complex systems evolved by compartmentalization and specialization – prokaryotes to eukaryotes to multicellular organisms. The brain, as an evolved complex system, is no different – cells have organelles, bodies have organs, and brains have modules and areas. The kinds of cognition at which humans excel should be thought of as new specializations, perhaps emergent, within identifiable brain areas. Although there is no doubt that interactions between brain areas are of critical importance and should be studied, they do not supplant modularity. To give networks (and sensorimotor loops) exaggerated creative powers will not advance understanding.

TALK 4: Not Every Thing Must Go

Felipe De Brigard, Duke University

There is much to like in emergentist perspectives that seek to align dynamic assemblies of neurons and neural regions with psychological categories drawn from an evolutionary perspective on behavior. These perspectives emphasize that the brain is an interactionally complex system that fails to be near-decomposable. In other words, the functions of neural systems are radically context-sensitive, and the networks that support behavior cannot be productively broken down into functional subcomponents. I do not believe, however, that accepting these tenets demands that we abandon traditional psychological categories, explanatory strategies that rely on functional decomposition, or even the notions of causality that support those strategies. Rather, I will argue that functional decomposition remains our best bet for building up a mesoscale theoretical understanding of the brain. Furthermore, there are costs to jettisoning traditional psychological categories that may be too steep to pay.


Altered States of Cognition: The Acute and Persisting Consequences of Psychedelic Drugs on Cognition

Tuesday, March 28, 2023, 1:30 PM - 3:30 PM (PT), Bayview Room

Chair: Manoj Doss, Johns Hopkins University

Speakers: Carli Domenico, Manoj Doss, Natasha Mason, Philip Corlett

Psychedelic research is growing rapidly, and with recent policy reforms and 'breakthrough therapy designation' from governmental bodies, psychedelic drugs are becoming more accessible. Whereas the subjective and clinical effects of these substances have received global attention, how psychedelic drugs alter cognitive function has been largely overlooked. In light of reports that psychedelics obfuscate the sense of space and time, alter how memories are remembered, enhance creativity, and dramatically change world beliefs, here, we discuss psychedelics through the lens of cognitive neuroscience. This symposium provides an overview of the research to date investigating the effects of psychedelics on 1) rodent place cell function and spatial cognition, 2) episodic memory, 3) processes relevant to creativity, and 4) belief updating. With this symposium, we will provide insight into acute and persisting (mal)adaptive alterations in cognition, a pertinent question given the parallel rise in recreational and therapeutic use. We will also discuss what these alterations in cognitive function can teach us about cognition itself, and whether such alterations may play a role in the immediate and persisting symptomatic relief witnessed in clinical trials.
The symposium combines state-of-the-art multidisciplinary methodological, fundamental, theoretical, and clinical perspectives on the science of psychedelics to highlight the current state of knowledge, bringing together diverse researchers from four research laboratories around the world and across different career stages. As a subject that is rapidly gaining traction, this symposium will be the first to examine how cognitive neuroscience can shed light on psychedelic science.

TALK 1: LSD-Altered Spatial Cognition with Tetrode Recording of Hippocampal Place Cells

Carli Domenico, Baylor College of Medicine

Psychedelic drugs that agonize the 5-HT2A receptor, like lysergic acid diethylamide (LSD), exert hallucinogenic and therapeutic effects in humans and enhance synaptic plasticity. While human research continues to accumulate in this accelerating field, animal studies directly observing neuronal activity remain limited, and mechanistic understanding is lacking. In our lab, we recorded hippocampal CA1 place cells in rats to determine how the established spatial map is altered by LSD. Place cells are considered building blocks for episodic memory, firing in a map representing the environment and tethering to other sensory data corresponding to that context. We can observe how a familiar trajectory is activated during rat running behavior and how it is reactivated during consolidation or recall events such as sleep and immobility. Generally, place cells exhibit a tight co-firing relationship with visual cortical neurons that are coactive in the same positions as their corresponding place cells, and the two populations continue to burst together during rest. We have observed that with LSD, this relationship is altered both during rest and active movement, and we also see that place cells behave less discriminately with LSD, similar to how they behave in a less familiar, more novel environment. These changes do not occur following a control saline condition, nor when LSD is given together with a 5-HT2AR antagonist. These findings give a direct look at how neurons in an area critically involved in episodic memory report their cognitive map in a state in which the perception of the world is altered in humans.

TALK 2: The Current State of Research on the Impact of Psychedelics on Episodic Memory

Manoj Doss, Johns Hopkins University

Psychedelics (5-HT2A agonists) cause drastic changes in how memories are remembered, and disorders treated with psychedelics (e.g., depression and PTSD) exhibit abnormalities in episodic memory. With episodic memory dependent on plasticity, and psychedelics driving plasticity, psychedelics may be a useful tool for understanding the malleability of memory. Here, I will discuss how psychedelics impact encoding, consolidation, and retrieval and draw comparisons to other psychoactive drugs. While most psychoactive drugs including psychedelics impair hippocampally-dependent recollection-based encoding, psychedelics may uniquely enhance cortically-dependent familiarity-based encoding. These impairments and enhancements of Tulving’s conceptions of “autonoetic” and “noetic” consciousness, respectively, coincide with how psychedelics impair one’s sense of self (“ego dissolution”) and drive feelings of insight (“noetic quality”). Considering that the density of 5-HT2A receptors is highest on layer V pyramidal neurons, psychedelics may facilitate rapid cortical learning in otherwise rigid, slow-learning semantic networks. Regarding consolidation, like GABAA sedatives and stress manipulations, post-encoding psychedelics enhance memory in both humans and animals, though it is currently unclear how this form of retrograde facilitation differs from others. Finally, whereas many drugs distort memory retrieval, it will be discussed how psychedelics may particularly drive false memories through their ability to enhance mental imagery and suggestibility. Nevertheless, such memory distortions may also be part of the therapeutic efficacy of these drugs. Together, psychedelics may be well-positioned for altering maladaptive memories, though more research will be needed on directing their neuroplastogenic effects to avoid disrupting adaptive semantic networks or inducing false memories.

TALK 3: Spontaneous and Deliberate Creative Cognition During and After Psilocybin Exposure

Natasha L Mason, Maastricht University

Creativity is an essential cognitive ability linked to all areas of our everyday functioning. Thus, finding a way to enhance it is of broad interest. A large number of anecdotal reports suggest that the consumption of psychedelic drugs can enhance creative thinking; however, scientific evidence is lacking. Following a double-blind, placebo-controlled, parallel-group design (N=60), we demonstrated that psilocybin (0.17 mg/kg) induced a time- and construct-related differentiation of effects on creative thinking. Acutely, psilocybin increased ratings of (spontaneous) creative insights, while decreasing (deliberate) task-based creativity. Seven days after psilocybin, the number of novel ideas increased. Furthermore, we utilized an ultrahigh field multimodal brain imaging approach, and found that acute and persisting effects were predicted by within- and between-network connectivity of the default mode network. These findings suggest a nuance in the historical claims that psychedelics can influence aspects of the creative process. Namely, despite reports of overall enhanced creative capacity, they seem to alter creative cognition in a construct- and time-dependent manner. I will discuss whether psychedelic interventions can be used as a tool to investigate creative cognition and its underlying neural mechanisms, and whether changes in creativity may contribute to the symptomatic relief witnessed in clinical trials.

TALK 4: Beliefs, Psychedelics, and the Brain

Philip Corlett, Yale School of Medicine

It has been claimed that psychedelics induce rapid and enduring changes in beliefs about the self, others, society, and metaphysics. These effects may be related to the profundity of the acute psychedelic experience. However, the study designs and methods of belief solicitation may be flawed. Furthermore, in clinical neuroimaging studies, claims about cognitive functions like belief updating are often made with brain imaging data in the absence of a behavioral task. In this talk I will describe efforts to highlight these shortcomings (and the subsequent social media fallout), and I will adumbrate a series of studies with ketamine and belief updating that may provide a path forward. I will suggest that these data contradict the popular predictive processing model of psychedelic drug action, “Relaxed Beliefs Under Psychedelics” (REBUS). Instead, I will offer a framework within which many more psychotomimetic interventions (including sensory deprivation) might be understood: that of precision-weighted belief updating. I hope these critiques are received in the spirit they are intended, one of cautious optimism, tempered by solid methodology and appropriate inferences.


Methodological Advances in the Study of Autobiographical Memory

Tuesday, March 28, 2023, 1:30 PM - 3:30 PM (PT), Seacliff Room

Chair: Roni Setton, Harvard University

Speakers: Roni Setton, Hongmi Lee, Signy Sheldon, Asieh Zadbood

Our repository of past experiences forms the fabric of who we are and how we engage with the world. Inquiry into this autobiographical form of memory often involves unique tasks that capture complex mental representations in a way that mimics real-world remembering. There is growing consensus that many brain regions support and synchronize these representations during recollection, and dynamically interact to form distributed networks. This advancement has paved the way for more sophisticated methodological and analytical techniques to probe these interactions and investigate how they reorganize across stages of recollection and across individuals. In the first talk, Roni Setton provides evidence that optimized pipelines for individual differences in functional connectivity at rest can identify discrete ensembles of brain regions that scale with variation in autobiographical recollection across the adult lifespan. Hongmi Lee then highlights how transitions, both in thought content and brain activity, can be examined to understand how memories arise during spontaneous thought. Next, Signy Sheldon suggests that eye movements may be a window into how visual and memory systems interact during reconstruction. In the last talk, Asieh Zadbood discusses methods for examining memory replay, and evidence for how emotional memories may be differentially reinstated in healthy and depressed individuals. Together, these talks showcase the breadth and novelty of current techniques used to study autobiographical memory. These talks promise to promote further discussion on how methodological innovation can deepen our understanding of human recollection across a variety of populations.

TALK 1: Age and Individual Differences in Autobiographical Memory Relate to Default Network Connectivity

Roni Setton, Harvard University

Autobiographical memory (AM) involves the retrieval of both rich spatiotemporal details and deeper semantic context. The balance of details recalled systematically shifts with advancing age, as the episodic quality of memory diminishes and semantic features become more prominent. AM is supported by the default network, whose functional integrity also changes with age. We take an individual differences approach to examine resting-state functional connectivity of key episodic and semantic memory regions—the hippocampus and temporal pole—with the default network, and test for associations with episodic and semantic AM in a cohort of healthy younger and older adults. We combined multiecho resting-state fMRI acquisition and denoising, individualized cortical parcellation, and automated segmentation of the hippocampus to increase BOLD sensitivity, account for individual variability in the default network, and investigate the separable contributions of anterior and posterior hippocampus to AM, respectively. The Autobiographical Interview was administered to measure AM. Multivariate analyses first identified age group differences in functional connectivity of this circuit. This pattern was associated with posterior hippocampus volumes in older adults, suggesting a link between local structural and distributed functional differences with age. Connectivity associations with AM showed two patterns: (i) an age-invariant pattern that dissociated episodic and semantic AM, and (ii) a pattern related to overall recollection only in younger adults. Our findings provide a high-resolution map of functional connectivity between temporal lobe structures and regions of the default network, and strong evidence for how variance in this map is sensitive to individual differences in recollection across the lifespan.

TALK 2: Autobiographical Memory Recall in a Spontaneous Flow of Thoughts

Hongmi Lee, Johns Hopkins University

When our minds wander, memories of past events arise amidst other thoughts. What are the cognitive and neural states that trigger autobiographical memory recall within the flow of spontaneous thoughts? To explore this question, we performed a “think aloud” fMRI experiment in which subjects were asked to verbally describe any thoughts that entered their consciousness for 10 minutes. Five major thought categories were identified: autobiographical episodic memory, autobiographical semantic memory, non-autobiographical semantic memory, future thinking, and current sensations and feelings. By computing transition probabilities between different categories of thoughts, we found that autobiographical recall was not triggered by any specific thought category more than expected by chance. However, semantic similarity, measured using a natural language processing model, was higher between an autobiographical memory and its immediately preceding and following thoughts compared to more distant ones, suggesting that memory recall was triggered by related thought content. We also searched for a specific spatial activation pattern in the default mode network which was previously associated with major shifts in mental context during naturalistic movie watching (Lee & Chen, 2022, eLife); we observed this pattern when the semantic contents of thoughts changed (i.e., broad shifts in topic), but less so when the thought category changed, implying that semantic content was the more important determinant of mental context transitions. Together, these results demonstrate that autobiographical memories are naturally retrieved by shared meanings without any task demands, and suggest that semantic connections may be a major organizing principle of the flow of spontaneous thoughts.
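The transition-probability analysis described above can be sketched in a few lines: label each verbalized thought with a category, count how often each category follows each other, and row-normalize. This is a minimal illustration with hypothetical labels, not the study's actual pipeline or data.

```python
import numpy as np

# Hypothetical thought categories and a toy think-aloud sequence
# (the real study used hand-coded transcripts).
CATEGORIES = ["episodic", "semantic_auto", "semantic", "future", "present"]
thoughts = ["present", "semantic", "episodic", "episodic", "future", "episodic"]

def transition_matrix(seq, categories):
    """Row-normalized matrix of P(next category | current category)."""
    idx = {c: i for i, c in enumerate(categories)}
    counts = np.zeros((len(categories), len(categories)))
    for cur, nxt in zip(seq, seq[1:]):
        counts[idx[cur], idx[nxt]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no outgoing transitions stay all-zero instead of dividing by 0.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

P = transition_matrix(thoughts, CATEGORIES)
```

Comparing each observed row of `P` against a chance distribution (e.g., from shuffled sequences) is one way to test whether any category preferentially precedes autobiographical recall.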

TALK 3: New Insights into the Link Between Visual and Mnemonic Processing during Autobiographical Retrieval

Signy Sheldon, McGill University

The ability to reactivate and recombine visuospatial components of a past event is a central aspect of autobiographical event representations and suggests a critical link between visual and memory systems. However, our understanding of how visual processes contribute to the construction of autobiographical event representations is limited. One reason for this limitation is that there is a dearth of studies that measure visual processing during autobiographical memory assessments. Here, I will describe experiments that overcame this limitation by using eye movement monitoring and behavioural interference techniques. In the first experiment, we tested the impact of interfering with visual processing via Dynamic Visual Noise (DVN) on the ability to form and describe in detail various autobiographical events. When we scored these descriptions for detail specificity, we found that the presence of the DVN significantly reduced the ability of participants to provide specific details of the events. To better understand how visual processing contributes to event construction, we collected eye movement data as participants described these autobiographical events. Applying gaze similarity analysis to this eye movement data revealed that visual processing is particularly useful for reinstating schema-specific visuospatial details when constructing autobiographical events. Together these studies provide new approaches that enrich our understanding of the intimate link between visual and mnemonic processing for autobiographical event construction.
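A common form of the gaze similarity analysis mentioned above correlates fixation-density maps (the screen binned into a coarse grid of dwell times) across trials or participants. The sketch below illustrates that idea with simulated maps; the grid size and data are assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixation-density maps: screen divided into an 8x8 grid,
# each cell holding total fixation time for one retrieval trial.
map_a = rng.random((8, 8))
map_b = rng.random((8, 8))

def gaze_similarity(m1, m2):
    """Pearson correlation between two flattened fixation-density maps."""
    return np.corrcoef(m1.ravel(), m2.ravel())[0, 1]

sim = gaze_similarity(map_a, map_b)  # higher = more similar scan patterns
```

Comparing within-event to between-event gaze similarity is one way to ask whether eye movements reinstate the visuospatial layout of a specific remembered event.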

TALK 4: Event-level neural representations as a window to the content of past episodes

Asieh Zadbood, Columbia University

Recalling and communicating past experiences is a common daily life activity. The content of these recalls, however, varies greatly between individuals. Idiosyncrasies in the recall of past episodes may stem from a variety of factors such as different original experiences or differences in retrieval and communication of the same events. These idiosyncrasies impose a challenge to studying the neural mechanisms supporting autobiographical memories. Event-level analysis, including methods that average the time-course of brain response within events or obtain a single regressor for the entire event, has been proposed as a tool to overcome this issue. As time-point data is not preserved in these methods, the question arises as to what type and granularity of information these representations contain. Across two fMRI studies, we investigated the information captured using these methods during communicating and updating memories. In one experiment, participants watched the same movie and one of them recounted the movie to a group of individuals naïve to the story. Using spatial pattern similarity analysis, we show that shared neural patterns across individuals in the default mode network are event-specific, are shared irrespective of the modality, and can be built from limited information. In the second study, we manipulated the interpretation of a movie after encoding that triggered updating of past memories to incorporate new knowledge. In some regions of the default mode network, we find evidence for memory updating, but only in the scenes relevant to the new interpretation. Our work suggests that event-level representations carry information about fine-grained content during the retrieval of past episodes.
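The event-level approach described above—averaging the voxel timecourse within each event and comparing the resulting spatial patterns across individuals—can be sketched as follows. The data, event boundaries, and region size here are simulated placeholders, not the study's materials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical region data: voxels x timepoints for two subjects,
# with shared event boundaries (e.g., from the movie's scene structure).
n_voxels, n_tp = 50, 120
boundaries = [0, 30, 70, 120]  # three events
subj_a = rng.standard_normal((n_voxels, n_tp))
subj_b = rng.standard_normal((n_voxels, n_tp))

def event_patterns(data, boundaries):
    """Average the timecourse within each event -> one spatial pattern per event."""
    return np.stack([data[:, s:e].mean(axis=1)
                     for s, e in zip(boundaries, boundaries[1:])], axis=1)

def pattern_similarity(pa, pb):
    """Pearson correlation between corresponding event patterns of two subjects."""
    return np.array([np.corrcoef(pa[:, k], pb[:, k])[0, 1]
                     for k in range(pa.shape[1])])

sims = pattern_similarity(event_patterns(subj_a, boundaries),
                          event_patterns(subj_b, boundaries))
```

Event specificity is typically assessed by checking that matching events correlate more strongly across subjects than mismatched events do.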


