APRIL 23–26 • 2022

CNS 2022 | Symposium Sessions






LOCALIZATION OF FUNCTION IN TIMES OF NETWORK SCIENCE Saturday, April 23, 12:00PM - 2:00PM (PT) Grand Ballroom B/C
COGNITIVE NEUROSCIENCE OF VOLITION Monday, April 25, 10:00AM - 12:00PM (PT) Grand Ballroom A
THE FLEXIBLE AND ADAPTIVE NATURE OF EMOTIONAL MEMORY Tuesday, April 26, 1:30PM - 3:30PM (PT) Grand Ballroom B/C


High Time to Unravel the Neuronal Computations Across Cortical Layers in Mice, Monkeys, and Humans

Saturday, April 23, 2022, 12:00 PM - 2:00 PM (PT), Grand Ballroom A

Chair: Andre Bastos, Vanderbilt University

Speakers: Shailaja Akella, Peter Kok, Andre Bastos, Konrad Wagstyl

It has been well known for over 100 years that an over-arching anatomical feature of the cerebral cortex is its six-layer structure. This six-layer motif segregates distinct input and output connections and contains distinct cell types. This laminar motif may also support different computations and is therefore a key aspect of leading neuroscience theories about the brain, such as predictive coding. Until recently, it was extremely difficult and expensive to observe functional activity across the laminar sheet during tasks that engage perception and cognition. With recent technological developments, that has changed. With the advent of dense neurophysiological recording arrays, it is now possible in animal models to record the activity of hundreds of neurons in a single cortical column spanning all layers, and multiple probes can be combined to achieve large-scale coverage of many areas. With the advent of higher-field (7 Tesla) Magnetic Resonance Imaging (MRI) and functional MRI (fMRI), it is now possible to push the spatial resolution of structural MRI to a few hundred microns and of fMRI to the sub-millimeter domain. These technologies have enabled a new era of cognitive neuroscience research: researchers from a diverse array of fields studying humans and animal models can now access the laminar dimension of cortical structure and function. This symposium will highlight these recent technological advances and showcase some of the groundbreaking discoveries linking cognition to the microstructural laminar properties of cortex.

TALK 1: BigBrain 3D Atlas of Cortical Layers

Konrad Wagstyl, University College London 

The human cerebral cortex exhibits a regionally varying laminar structure which cannot readily be resolved using in vivo MRI. Nevertheless, laminar structure underpins many of the functional, developmental and pathological signals we can measure in vivo. In this presentation I will first discuss work to characterize histological laminar cytoarchitecture using the BigBrain, a 3D histological model of the human brain. Integrating classical histological techniques and deep learning, we captured laminar variations in staining intensity through cortical profiles and segmented these to generate an atlas of cortical layers. This comprehensive 3D atlas of cortical laminar cytoarchitecture has been applied to in vivo laminar neuroimaging at multiple levels: as a histological ground truth for validating and characterizing MRI cortical morphology, to characterize interregional patterns of structure, and as a ground-truth model for the development of analytical tools in high-field structural and functional MRI.

TALK 2: The Neural Circuit Underlying Perceptual Expectations

Peter Kok, University College London

The way we perceive the world is strongly influenced by our expectations about what we are likely to see at any given moment. However, the neural mechanisms by which the brain achieves this remarkable feat have yet to be established. In order to understand the neural mechanisms underlying the interplay between sensory inputs and prior expectations, we need to investigate the way these signals flow through the cortical layers. Until recently, it was not possible to do this in non-invasive studies of humans, because the typical voxel size in fMRI is larger than the full thickness of visual cortex (2-2.5 mm). I will discuss recent work in which we met this challenge by using fMRI at ultra-high field (7T) to obtain BOLD signals at very high resolution, combined with a novel spatial regression analysis to disambiguate signals from the different cortical layers. This approach has allowed us to reveal the neural circuitry underlying effects of expectation on sensory processing, and how these relate to subjective perception. Together, this work demonstrates that expectations play a fundamental role in sensory processing, and ultimately in the way we perceive the world.
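The laminar regression idea can be viewed as a least-squares unmixing problem. The sketch below is a hypothetical illustration, not the speaker's published pipeline: the function name and the use of partial-volume weights from a layer segmentation are assumptions. Each voxel's BOLD signal is modeled as a weighted sum of layer time courses, and the layer contributions are recovered by ordinary least squares.

```python
import numpy as np

def unmix_layers(W, Y):
    """Estimate per-layer BOLD time courses by least squares.

    W : (n_voxels, n_layers) partial-volume weights, e.g. the fraction
        of each voxel occupied by each cortical layer, taken from an
        anatomical segmentation (an assumption for this sketch).
    Y : (n_voxels, n_timepoints) voxel-wise BOLD time series.

    Solves Y ~= W @ L for the layer time courses L.
    """
    L, *_ = np.linalg.lstsq(W, Y, rcond=None)
    return L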

TALK 3: A Spectro-Laminar Framework for Cortical Computations

Andre Bastos, Vanderbilt University

To understand the neural basis of cognition, we must understand how top-down control of bottom-up sensory inputs is achieved. We have marshaled evidence for a canonical cortical control circuit that involves rhythmic interactions between different cortical layers. By performing multiple-area, multi-laminar recordings, we've found that local field potential (LFP) power in the gamma band (40-100 Hz) is strongest in superficial layers (layers 2/3), and LFP power in the alpha/beta band (8-30 Hz) is strongest in deep layers (layers 5/6). The gamma-band is strongly linked to bottom-up sensory processing and neuronal spiking carrying stimulus information, while the alpha/beta-band is linked to top-down processing. Deep layer alpha/beta projects to superficial layers and is negatively coupled to gamma. These oscillations give rise to separate channels for neuronal communication: feedforward for the gamma-band, and feedback for the alpha/beta band. Attention, working memory, and prediction all involve modulation of gamma and alpha/beta synchronization, both within and across areas of the frontal/parietal/visual network. These rhythmic interactions break down during anesthesia-induced unconsciousness. Based on these observations, we hypothesize that the interplay between alpha/beta and gamma synchronization in different cortical layers is a canonical mechanism to enable cognition and consciousness.
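The spectro-laminar contrast described above rests on standard band-power estimates. As a minimal sketch (assuming Welch's method; the function name and the exact band edges are illustrative, not the speaker's pipeline), gamma and alpha/beta power can be compared per recording channel along a laminar probe:

```python
import numpy as np
from scipy.signal import welch

def band_power(lfp, fs, band):
    """Mean power spectral density of `lfp` within `band` = (lo, hi) Hz,
    estimated with Welch's method."""
    f, pxx = welch(lfp, fs=fs, nperseg=min(len(lfp), 1024))
    lo, hi = band
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

# Bands from the talk: gamma strongest superficially (layers 2/3),
# alpha/beta strongest in deep layers (layers 5/6).
GAMMA = (40.0, 100.0)
ALPHA_BETA = (8.0, 30.0)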

TALK 4: Laminar Specific Interactions in the Visual Cortex

Shailaja Akella, Allen Institute

Hierarchical organization is a remarkable feature of the neocortex, with laminar-specific integration between microcircuits driving feedforward and feedback interactions at the global level. However, our understanding of these connectional principles remains rudimentary, in part because extensive recurrent and parallel pathways obscure information flow across the multiple levels. Here, we undertake a causal analysis of multi-scale neural activity in the macaque visual cortex to elucidate the functional asymmetries between feedforward and feedback connections. We find that feedback connections during top-down processing show increased intrinsic connectivity originating from deep-layer neurons to neurons in other layers, in contrast to connectivity patterns during bottom-up processing, suggesting that the deep layers are hierarchically higher than the superficial layers. A summary of the extrinsic multi-scale connectivities between the visual and prefrontal areas revealed that these hierarchical interactions are characterized by distinct oscillatory patterns. Moreover, oscillations in different frequencies displayed task-dependent functional coupling with spiking activity, pointing to more complex dynamics than previously assumed. Interestingly, anatomical projection patterns of feedforward and feedback connections in the mouse visual cortex are similar to those of primates: notwithstanding its small size, the mouse cortex is hierarchically organized by feedforward and feedback connections. Here, we tracked the flow of spiking activity recorded from six interconnected levels of the mouse visual hierarchy. We find that all four hierarchical metrics – response latency, receptive field size, receptive field complexity, and intrinsic response timescale – change systematically from superficial to deep layers, confirming that superficial layers are hierarchically lower than deep layers.
By directly inferring relative spike timing between pairs of neurons using cross-correlation, we find that superficial-layer neurons lead the activity of deep-layer neurons, and this trend holds throughout the hierarchy. Overall, these findings uncover a detailed laminar-specific signal-flow map in the macaque and mouse visual cortex.
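Relative spike timing of the kind described here is commonly read off a spike-train cross-correlogram. The sketch below is a minimal illustration (the function names are mine, and a real analysis would add jitter correction and firing-rate normalization): lags between spike pairs are histogrammed, and a peak at a positive lag suggests the first neuron tends to lead the second.

```python
import numpy as np

def spike_cross_correlogram(t_pre, t_post, window=0.05, bin_size=0.001):
    """Histogram of lags (t_post - t_pre) within +/- `window` seconds.

    t_pre, t_post : spike times in seconds for the two neurons.
    Returns (bin centers, counts); a peak at a positive lag suggests
    that `t_pre` leads `t_post`.
    """
    lags = []
    for t in t_pre:
        d = t_post - t
        lags.extend(d[np.abs(d) <= window])
    edges = np.arange(-window, window + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins=edges)
    centers = edges[:-1] + bin_size / 2
    return centers, counts

def peak_lag(centers, counts):
    """Lag at which the correlogram peaks."""
    return centers[np.argmax(counts)]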



Localization of Function in Times of Network Science

Saturday, April 23, 2022, 12:00 PM - 2:00 PM (PT), Grand Ballroom B/C

Chair: Brenda Rapp, Johns Hopkins University

Speakers: Bradford Mahon, Brenda Rapp, Maurizio Corbetta, Danielle Bassett

Since the earliest days of modern neuroscience, the question of localization of function has been vigorously debated, with differing views at every level of brain organization. From single-unit neurophysiology, to neuropsychology, to functional neuroimaging, functional localization has been a standard assumption guiding the interpretation of neural data. But what are we localizing when we 'localize function,' and in what ways does the answer to that question interact with the method used to carry out the localization? In this 'age of network science,' new mathematical tools for describing network properties have bolstered support for a distributed, network view of the mapping between cognition and brain. Given the fundamental role that such theoretical assumptions play in our interpretation of neural data, their resolution has far-reaching implications for existing theories of brain function, as well as for clinical applications. This symposium focuses on the role and contributions of network science to our understanding of mind-brain mapping, both in the intact system and in the context of acquired brain injury. It brings together researchers using network-based analyses of neural phenomena to constrain theories of the localization of function. Historically, lesions have provided key observations relevant to this debate; this symposium connects lesion work with the broader range of methods that have been used in the last two decades to study the network organization of cognitive processes.

TALK 1: Content-Specific Modulation of Functional Networks in the Setting of Transient and Chronic Lesions

Bradford Mahon, Carnegie Mellon University

Object-directed actions, such as picking up and using a fork, involve the coordinated integration of a set of sensory, motor, and cognitive processes that neuropsychological research has shown to be functionally dissociable: visual recognition, conceptual processing, grasping, and manipulation. More recently, functional MRI studies have emphasized a largely left-lateralized network of temporal, parietal, and frontal regions that collectively support object recognition and object-directed action. I will first discuss fMRI studies in healthy participants which, I will argue, show how the information processed by a given region, and the fingerprint of functional connectivity among regions, can be modulated in predictable ways by the task in which participants are engaged for a given object (e.g., recognition versus use). I will then discuss a series of studies that use functional MRI to study how transient (tDCS) and chronic (brain tumor, stroke) lesions affect neural responses in anatomically distal regions of the object-processing network. I argue that neural responses in some specific regions of occipito-temporal cortex depend in real time on inputs from parietal-based processes that compute aspects of object-directed action. This proposal makes specific predictions, currently being tested, about how focal lesions to parietal cortex will affect neural responses in the temporal lobe in a content- or category-specific manner. More generally, the object-processing system may offer clues to broader questions of how functionally and neuroanatomically dissociable processes are integrated, and how the dynamics of that integration can be pulled apart by combining functional mapping with lesion studies.

TALK 2: From Voxels to Hemispheres: Functional Network Consequences of Damage and Treatment

Brenda Rapp, Johns Hopkins University

From the beginning of cognitive neuroscience, the study of lesions has provided key evidence for the localization of cognitive functions. However, many studies of intrinsic functional connectivity have now shown that even focal brain damage can result in functional network changes that are distant from the damage site and can affect large-scale network properties, raising questions about the coherence and utility of the notion of functional localization. I will report on various projects from our lab that specifically examine the consequences of lesions and behavioral interventions, evaluating their impact on small, medium and large-scale functional networks. Our work comparing the network effects of real and simulated lesions provides foundations for understanding the neuroplastic changes that affect local and global network communication. Building on this, at the macro scale we find large-scale inter-hemispheric changes associated with focal left hemisphere lesions and response to highly targeted behavioral interventions. At the meso scale, we find that focal damage and treatment affect graph-analytic and other properties of the functional communication within and between certain networks and not others. Finally, at the micro scale, examination of the heterogeneity of voxel-based network responses reveals localized changes in response to lesions and behavioral interventions. We conclude that although there may be global and widespread network consequences of lesions and recovery that challenge notions of functional localization, many of the observed changes are sufficiently local (at their scale) that these concepts still play a valuable role in understanding the mapping between cognition and its neural substrates.

TALK 3: The Low Dimensional Correlation of Behavioral Deficits and Network Changes in Focal Injury

Maurizio Corbetta, University of Padua

Traditional teaching in neurology and neuropsychology emphasizes the localization of specific cognitive functions to different brain regions and/or systems. However, recent trends in neuroscience guided by human functional neuroimaging, and simultaneous neural recordings from thousands of neurons in experimental animals, emphasize the strong correlation of ongoing and task-driven neural activity in space and time. These new results highlight a view of brain function in which cognitive operations are distributed and behavior reflects the modulation of ongoing dynamics rather than the construction from scratch of sensory-cognitive-motor neuronal ensembles. In our work we asked whether focal brain lesions (stroke, tumors) cause many different syndromes (dissociations), as traditionally taught, or produce a simpler structure of cognitive impairment reflecting the correlation among underlying cognitive/neural processes. We show that hundreds of lesions in different locations (both stroke and tumors) cause a few clusters of correlated deficits that correspond to a linguistic/memory axis, and a visuospatial/action axis. Interestingly, these axes are also found in hemispheric lateralization studies in healthy subjects. Moreover, hundreds of lesions cause relatively few patterns of structural/functional disconnection that differ in topology, but consistently correspond to a loss of inter-hemispheric integration, and abnormal intra-hemispheric integration. Overall, our work shows evidence for low dimensionality in behavior, cognitive operations, and neural synchronization in focal brain injury. These results point to fundamental properties of brain function including the low dimensionality of spontaneous and task activity patterns, and representations.

TALK 4: A Network Perspective on Cognitive Effort

Danielle Bassett, University of Pennsylvania

Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain's non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
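The control-theoretic notion of energy in this talk can be made concrete with a small numerical sketch. Under linear dynamics x' = Ax + Bu on a connectome A, the minimum input energy needed to drive the state from x0 to xT in time T is d^T W(T)^{-1} d, where W(T) is the finite-horizon controllability Gramian and d is the gap between the target and the free evolution of x0. The function below is an illustrative implementation of that textbook formula; the names and the crude midpoint quadrature are my own, not the speaker's code.

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, B, x0, xT, T=1.0, n_steps=1000):
    """Minimum-energy cost to steer x' = Ax + Bu from x0 to xT in time T.

    Approximates the controllability Gramian
        W(T) = integral_0^T exp(A t) B B^T exp(A^T t) dt
    by midpoint quadrature, then returns d^T W^{-1} d with
    d = xT - exp(A T) x0 (the gap the input must close).
    """
    dt = T / n_steps
    W = np.zeros_like(A, dtype=float)
    for k in range(n_steps):
        E = expm(A * (k + 0.5) * dt)
        W += E @ B @ B.T @ E.T * dt
    d = xT - expm(A * T) @ x0
    return float(d @ np.linalg.solve(W, d))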


How Do Eye Movements Structure Hippocampal-Dependent Memories?

Sunday, April 24, 2022, 1:30 PM - 3:30 PM (PT), Grand Ballroom A

Chair: Joel Voss, University of Chicago

Speakers: Jordana Wynn, James Kragel, Ian Fiebelkorn, Kari Hoffman

It seems like a trivial fact that visual memories are shaped by eye movements, as one must see something in order to remember it. However, recent findings suggest surprisingly deep entanglement of eye movements with memory, leading many to propose that eye movements are a fundamental, yet poorly understood, part of the mechanism by which structures such as the hippocampus support memory. For instance, it appears that eye movements both influence and are influenced by spatiotemporal information coding in episodic memory. Further, the rhythmicity of eye movements seems to both influence and be influenced by rhythmic activity of the hippocampus in the theta frequency range, which is consequential as hippocampal theta is widely believed to provide a framework for information coding in memory. Different research groups using different methods across different model organisms have generated divergent mechanistic explanations for these phenomena, with major disagreements over the directionality of interactions between eye movements and memory, the means of communication between brain networks supporting memory versus eye movements, and the nature of the relationship between eye movements and hippocampal theta. This symposium brings together investigators working across methods (fMRI and intracranial electrophysiology), experimental frameworks (long-term and working memory), and model organisms (humans and non-human primates) to discuss recent developments in this area and to attempt to find common ground, which will be facilitated by an audience-involved discussion panel at the end of the talks. We hope that this will bring the important relationships between eye movements and hippocampal-dependent memory into better focus.

TALK 1: Eye Movements Support Active Memory Retrieval

Jordana Wynn, Harvard University

Research indicates that overt visual attention (i.e., eye movements) and memory are intimately linked. Yet, most research to date has only considered eye movements as a by-product of attention and memory processes. In this talk, I will provide evidence that beyond reflecting these processes, eye movements play an active and functional role in memory retrieval. First, I will demonstrate, using eye movement monitoring, that gaze patterns during retrieval reinstate gaze patterns executed during encoding and that this pattern of gaze reinstatement predicts mnemonic performance. Specifically, I will show that reinstated gaze patterns during retrieval in the absence of visual stimulation are linked to both correct recognition of old images and false alarms to lure images. Next, I will show that the relationship between gaze reinstatement and mnemonic performance is modulated by both age and task demands. That is, when retrieval is made more difficult due to increased age or task demands, eye movements may play a compensatory role. Finally, I will discuss recent neuroimaging work investigating the neural correlates of functional gaze reinstatement. This work indicates that trial-wise gaze reinstatement is correlated with activity in the hippocampus, suggesting that it may be supported by the same neural mechanisms that support memory retrieval. Together, these studies advance a critical role for eye movements in memory retrieval and reinstatement.

TALK 2: Hippocampal Theta Oscillations Coordinate Effective Visual Exploration

James Kragel, University of Chicago

People typically remember events in the sequence in which they occurred. Neuroimaging and electrophysiological studies highlight the hippocampus as central to spatiotemporal information in long-term memory. However, the timescale of hippocampal involvement remains poorly understood. We will discuss our recent work to test the theory that the hippocampus contributes to long-term memory formation through remarkably short-term processing, as reflected in eye movements made during encoding and retrieval. We will present findings that individuals reinstate fixation sequences from one episode to the next and that this eye-movement replay involves the same cognitive systems as the recall of episodes. We will then discuss findings from experiments that use intracranial electroencephalography in human neurosurgical cases to measure neural activity on the same timescale as eye-movement sequences. Our results indicate that hippocampal theta oscillations coordinate memory-guided eye movements during encoding and retrieval. Further, decreases in theta oscillations consistently align to eye movements guided by short-term memory expressed over hundreds of milliseconds within an encoding episode and by long-term memory expressed over minutes across episodes. We will also discuss our findings of interactions between hippocampus and networks for spatial attention and visual perception, which show disruption of feedforward theta activity when short-term memory guides eye movements. We conclude that theta rhythmic activity in the hippocampus supports memory across multiple timescales by coordinating visuospatial exploration and self-directed learning.

TALK 3: Theta-Rhythmic Coordination to Prevent Conflicts During Attention and Working Memory

Ian Fiebelkorn, University of Rochester

We have previously demonstrated that neural and behavioral effects associated with spatial attention wax and wane on a sub-second timescale (i.e., about four times per second). We propose that theta-rhythmic neural activity (at ~4-6 Hz) helps coordinate competing sensory and motor functions in the large-scale network that directs environmental sampling (i.e., spatial attention and saccadic eye movements). For example, rhythmically occurring periods of worse visual-target detection at a spatially cued target location are associated with a greater likelihood of eye movements (i.e., a release from motor inhibition). In my talk, I will present neural (electroencephalographic recordings) and behavioral evidence from humans that this theta-rhythmic coordination of neural activity is a more general mechanism for mediating functional conflicts in the brain. Specifically, I will present results from a study demonstrating that the strength of competing item representations (i.e., neural representations of to-be-remembered items) during a working memory task waxes and wanes over time. Behavioral performance during the working memory task was linked to the phase of theta-rhythmic neural activity. Our findings further demonstrate that different theta phases are associated with better behavioral performance for different to-be-remembered items. These findings suggest that the relative strengths of the neural representations associated with different to-be-remembered items alternate over time. We therefore propose that theta-rhythmic coordination of neural activity not only helps to coordinate functional conflicts during environmental sampling, but also helps to mediate representational conflicts during working memory.

TALK 4: The Control of Gaze for and by Hippocampal and Retrosplenial Synchrony During Memory-Guided Search

Kari Hoffman, Vanderbilt University

Memory retrieval during visual search is predicted by hippocampal activity. Such neural activity could reflect a mechanism for retrieving mnemonic content to inform search to locations of interest. This would be an example of the brain guiding behavior. But behaviors also guide the brain: saccadic eye movements act as potent timekeepers throughout a wide range of cortical regions, and may similarly inform neural activity within the hippocampus. We will describe our work uncovering both sides of this loop. Using direct electrophysiological recordings in macaques and humans performing the same memory-guided search task, we've found signals that may facilitate retrieval, including (1) hippocampal sharp-wave ripples that predict successful retrieval in visual search, with concomitant drops in hippocampal CA1 theta oscillations, and (2) changes in hippocampal and retrosplenial cortical synchrony when retrieving year-old memories. These retrieval correlates do not preclude a role for saccades in influencing hippocampal activity. I will describe hippocampal single-unit responses and field potentials that show sensitivity to saccade timing, suggestive of corollary discharge and, more generally, active sensing. We suggest that the hippocampus and connected structures are both driven by and drivers of search behaviors, as part of an extended circuit supporting adaptive search.


Contributions of Lower Structures to Higher Cognition

Sunday, April 24, 2022, 1:30 PM - 3:30 PM (PT), Grand Ballroom B/C

Chair: William Saban, University of California, Berkeley

Speakers: Michael Ullman, Peter Strick, Josef Parvizi, William Saban

Reflecting our anthropocentric belief that humans sit at the top of the cognitive pyramid, we point to brain anatomy as a correlate of 'intelligence.' Higher cognition is often attributed to the exceptional enlargement of neocortical regions over the course of evolution. Implicit in this approach is the assumption that the subcortex is of secondary importance for higher cognition. While the past decade has provided evidence that subcortical regions can be involved in cognition, it remains an open question how the subcortex contributes the computations essential for the emergence of higher cognition. In addition, the dynamic functional architecture of the whole brain (the cortical-subcortical network) is still poorly understood and needs well-defined models. In this symposium, we focus on delineating the specific contributions of different subcortical areas to higher cognition. Drawing on varied methodologies (neuropsychology, intracranial electrophysiological recordings, connectivity mapping), we show how different subcortical regions (cerebellum, hypothalamus, basal ganglia) contribute to arithmetic, language, and social cognition. We also present novel theories and findings on subcortical-cortical relations in the manifestation of cognitive processes. We propose that cortical regions have access to subcortical computations, resulting in a network that has allowed for the evolution of complex cognitive representations.

TALK 1: Subcortical Cognition: The Fruit Below the Rind

Michael Ullman, Georgetown University

Cognitive neuroscience has highlighted the cerebral cortex while often overlooking subcortical structures. This cortical proclivity is found in basic and translational research on many aspects of cognition, especially higher domains such as language, reading, music, and math. We hypothesize that, for both anatomical and evolutionary reasons (especially connectivity and cooptation), subcortical contributions to higher as well as lower cognition are extensive—much more so than has generally been acknowledged—and that multiple subcortical structures play significant roles across cognitive domains. First, we present a comprehensive review of existing evidence. The review suggests that numerous subcortical structures throughout the brain, from the brainstem through the telencephalon, indeed contribute to higher as well as lower cognitive domains, and that these contributions are both reliable and important. Based on this and other evidence, we propose a many-to-many neurocognitive model of cognition, in which each subcortical and cortical structure supports multiple lower and higher functions via basic computations (which operate analogously across domains), and each function depends on multiple complementary and redundant structures that work together in a dynamic network. Finally, we lay out how the structure-function map revealed by our review can be expanded: we suggest that new subcortical cognitive roles can be identified by leveraging anatomical and evolutionary principles, and we describe specific methods that can be employed to reveal subcortical cognition. Overall, this work (see Janacsek et al., 2022, Annual Review of Neuroscience) aims to advance basic and translational neurocognitive research by highlighting subcortical cognition and facilitating its future investigation.

TALK 2: The Basal Ganglia, Cerebellum and Cerebral Cortex are Nodes in an Interconnected Network

Peter Strick, University of Pittsburgh

The basal ganglia and the cerebellum are generally considered to be distinct subcortical systems that perform unique functional operations. The outputs of the basal ganglia and the cerebellum influence many of the same cortical areas, but do so by projecting to distinct thalamic nuclei. As a consequence, the two subcortical systems were thought to be independent and communicate only at the level of the cerebral cortex. We will review recent data showing that the basal ganglia and the cerebellum are interconnected at the subcortical level. The subthalamic nucleus in the basal ganglia is the source of a dense disynaptic projection to the cerebellar cortex. Similarly, the dentate nucleus in the cerebellum is the source of a dense disynaptic projection to the striatum. Thus, an output stage of each subcortical system projects to an input stage of the other system. These observations lead to a new functional perspective that the basal ganglia, cerebellum, and cerebral cortex form an integrated network. This network is topographically organized so that the motor, cognitive, and affective territories of each node in the network are interconnected. This perspective explains how synaptic modifications or abnormal activity at one node can have network-wide effects. A future challenge is to define how the unique learning mechanisms at each network node interact to improve motor, cognitive and affective performance.

TALK 3: A Common Subcortical Ground for Unifying Two Contradictory Theories of Emotion

Josef Parvizi, Stanford University

The functional relationship between cortical and subcortical structures is traditionally viewed in hierarchical terms, with the cerebral cortex in the tower of control and the subcortical structures playing subservient roles. This presentation will offer an alternative view to the traditional corticocentric theories in human cognitive neuroscience by showing evidence from a recent study of the human hypothalamus using direct electrical stimulation, intracranial electrophysiological recordings, and connectivity mapping methods. Our study confirms that the electrical stimulation of the human hypothalamus leads to the experience of shame while the stimulation of its cortical counterparts does not. In addition, our findings raise questions about the corticocentric interpretation of observations from similar experiments in animals that links the hypothalamus with the behavior of “sham” rage. Implications of our findings to current theories of emotion and the importance of subcortical structures, such as the hypothalamus, in human emotional and social cognitive functions will also be discussed.

TALK 4: Distinct Contributions of the Cerebellum and Basal Ganglia to Arithmetic Procedures

William Saban, University of California, Berkeley

Humans exhibit complex mathematical skills, often attributed to the exceptionally large neocortex. Using a neuropsychological approach, we report that degeneration within two subcortical structures, the basal ganglia and cerebellum, impairs performance in symbolic arithmetic. Moreover, we identify distinct computational impairments in individuals with Parkinson's disease (PD) or cerebellar degeneration (CD). The CD group exhibited a disproportionate cost as the arithmetic sum increased, suggesting that the cerebellum is critical for the iterative procedures required for calculation. The PD group exhibited a disproportionate cost for equations with an increasing number of addends, suggesting that the basal ganglia are critical for coordinating multiple cognitive operations. In Experiment 2, the two patient groups exhibited intact practice gains for repeated equations, at odds with the alternative hypothesis that these impairments reflect deficits in memory retrieval. Overall, the results provide a novel demonstration of the contribution of subcortical structures to the computations required for complex cognition.


Cognitive and Brain Aging: New Insights from Biomarkers, Lifestyle Factors, and Genetics

Sunday, April 24, 2022, 1:30 PM - 3:30 PM (PT), Bayview Room

Chair: Anja Soldan, Johns Hopkins University

Speakers: Corinne Pettigrew, Kaitlin Casaletto, William Kremen, Kristen Kennedy

Cognitive and brain aging trajectories are characterized by great inter-individual variability. Using data from four different studies of middle-aged and older adults, this symposium will provide a comprehensive look at biomarker, lifestyle, and genetic factors that impact cognitive and brain aging. The symposium will examine a broad array of relevant measures and techniques, including measures of vascular risk, MRI structural and functional connectivity and volumetrics, genetic risk factors for Alzheimer's disease (AD) and frontotemporal dementia (FTD), AD biomarkers measured in PET and CSF, as well as cognitive reserve and physical activity. Longitudinal cognitive and brain outcomes to be discussed include global and domain-specific cognitive and clinical measures, plasma markers of axonal injury and inflammation, brain structural connectivity, and fMRI BOLD variability. The first presentation will focus on the role of midlife biomarker and lifestyle factors, measured among cognitively normal individuals, on long-term cognitive outcomes. The second presentation will examine the protective effect of physical activity on changes in cognition, clinical status, and plasma axonal injury and inflammation biomarkers in middle-aged adults and FTD mutation carriers. The third talk will discuss the relationship of structural brain network changes (i.e., modal controllability) and AD polygenic risk scores to executive function decline among cognitively unimpaired older adults. Lastly, the symposium will examine the effect of amyloid deposition on longitudinal changes in task-based BOLD signal variability in aging. Together, these presentations highlight the complex interplay of genetic, lifestyle, and brain measures that underlie inter-individual differences in cognitive and brain aging trajectories.

TALK 1: The Impact of Midlife Biomarker and Lifestyle Variables on Long-Term Cognitive Trajectories

Corinne Pettigrew, Johns Hopkins University

Variables measured in midlife may be important predictors of cognitive outcomes among older adults. This report discusses recent findings from the longitudinal BIOCARD Study, examining measures obtained in midlife in relation to long-term cognitive and clinical trajectories (M follow-up=12y, maximum=20y), including brain volume and cortical thickness, vascular risk factors, level of cognitive reserve (CR), and Alzheimer's disease (AD) biomarkers. All analyses included >220 individuals who were cognitively normal and primarily middle-aged at baseline (M age=57y). Participants have undergone comprehensive assessments including MRI scans, cerebrospinal fluid (CSF) collection for AD biomarkers, and annual neuropsychological testing. In separate analyses using longitudinal cognitive performance or time to onset of Mild Cognitive Impairment (MCI) as the dependent variable, higher vascular risk scores (VRS), greater white matter hyperintensity (WMH) burden, greater atrophy in AD-vulnerable brain regions, and the presence of amyloid and tau CSF biomarkers in midlife were associated with poorer cognitive outcomes. Higher CR, measured with a proxy index, was associated with a later time to MCI symptom onset and better baseline cognitive performance, but CR did not reduce the associations of VRS, WMHs, or AD biomarkers with cognitive decline. Among those who progressed to MCI, higher CR scores were associated with an older age of symptom onset and faster cognitive decline after symptom onset. These results emphasize the strength of the associations between midlife factors obtained in cognitively normal individuals and long-term cognitive outcomes, and demonstrate that individual differences in late-life cognition are significantly associated with multiple midlife risk and protective factors.

TALK 2: Physical Activity Relates to Attenuated Clinical and Axonal Injury Trajectories in FTD

Kaitlin Casaletto, UCSF

Greater physical activity (PA) is associated with reduced cognitive decline and dementia risk, yet the biological mechanisms are unclear. We examined the relationship between PA and rate of cognitive decline in autosomal dominant frontotemporal dementia (FTD) and controls, as well as the relationship between PA and longitudinal biomarkers reflective of axonal injury (neurofilament-light chain, NfL) and inflammation (IL-6, TNFa). 108 FTD mutation carriers who were asymptomatic-to-mildly symptomatic at baseline (mean age=51) and 120 noncarrier family members (mean age=49) from the ALLFTD consortium reported baseline PA and completed a longitudinal neurobehavioral battery and annual blood draws (average=2 timepoints). Plasma was analyzed for NfL, IL-6 and TNFa in a subset (n=161) via Simoa. Higher baseline PA related to slower clinical trajectories across mutation carriers and controls. Effects were driven by mutation carriers with an estimated 62% slower clinical decline per year. In mutation carriers, there was an interaction between PA and frontotemporal atrophy on cognitive performance. High activity carriers with frontotemporal atrophy demonstrated 2.5-fold better cognitive stability/year compared to their low activity peers. In biomarker models, higher baseline PA was related to slower NfL trajectories, but not to IL-6 or TNFa progression. Again, effects were driven by mutation carriers and remained when restricting to carriers who were asymptomatic at baseline. Greater PA was related to slower clinical progression and a marker of axonal degeneration in autosomal dominant FTD, but not in older controls. This suggests that PA may protect against cognitive and axonal breakdown even in dominant genetic forms of FTD.

TALK 3: Executive Function in Older Adults: Implications for Normative Aging and Alzheimer's Disease

William Kremen, UCSD

Derived from white matter structural connectivity, modal controllability refers to the ease with which brain regions can move the brain into difficult-to-reach states to support diverse cognitive processes. Modal controllability is associated with executive function (EF) in younger populations. Here, we examined that association longitudinally in older adults. We also examined whether risk for Alzheimer's disease (AD) is related to EF. Although the focus in AD is typically on episodic memory, there is growing evidence for the importance of EF deficits. Even among cognitively unimpaired adults, poorer EF can predict progression to mild cognitive impairment several years later. We examined normative and AD-related EF in men in the Vietnam Era Twin Study of Aging (VETSA). Mean ages at the 3 study waves were 56 (SD=2.4), 62 (SD=2.4), and 68 (SD=2.5). With diffusion MRI, we examined modal controllability in cognitive control networks (available at waves 2 and 3). An AD polygenic risk score (AD-PRS) was calculated based on consortia genetic meta-analyses. Modal controllability of cognitive control networks was associated with EF at wave 3 (n=264), and reduction in controllability from wave 2 to wave 3 was associated with EF decline (n=104). Across 3 waves, in 1,168 individuals who were cognitively unimpaired at their baseline assessment, higher AD-PRSs were associated with steeper declines in EF (r=-.27) as well as memory (r=-.19), driven by a combination of APOE and non-APOE genetic influences. Results suggest both structural network changes in controllability and genetic risk for AD as key mechanisms underlying age-related declines in EF.
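
Modal controllability has a standard closed-form recipe in the network-control literature, which the sketch below illustrates on a made-up 3-region matrix (this is a generic illustration, not the VETSA pipeline): scale the connectivity matrix for stability, then score each region by how strongly it couples to fast-decaying, hard-to-reach modes.

```python
import numpy as np

def modal_controllability(A):
    """Per-region modal controllability from a symmetric structural
    connectivity matrix, following the common network-control recipe:
    scale A so all eigenvalues lie inside the unit circle, then score
    region i as sum_j (1 - lam_j**2) * V[i, j]**2 -- large values mean
    the region aligns with fast-decaying (difficult-to-reach) modes."""
    A = np.asarray(A, dtype=float)
    A = A / (1 + np.max(np.abs(np.linalg.eigvalsh(A))))  # stabilize
    lam, V = np.linalg.eigh(A)   # eigenvalues lam, eigenvectors in columns of V
    return ((1 - lam ** 2) * V ** 2).sum(axis=1)

# Toy 3-region network (weights are invented for illustration)
A = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
phi = modal_controllability(A)   # one score per region, each in (0, 1)
```

In studies like the one above, the per-region scores would then be averaged within cognitive control networks and related to EF measures.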

TALK 4: Age-Related Change in BOLD Variability During Working Memory Load is Dependent on Amyloid Deposition

Kristen Kennedy, UT Dallas

Whether BOLD variability (BOLDvar) increases or decreases during aging is of recent interest, and findings are mixed. The goal of the current study was to examine longitudinal change in BOLDvar over 4 years and test its association with beta-amyloid deposition, a known Alzheimer's disease biomarker, to help disentangle the nature of BOLDvar in aging. BOLDvar was computed using mean squared successive differences (MSSD) from an fMRI n-back task; beta-amyloid was estimated from PET-Amyvid scans (N=41, 59-94y at baseline). MSSD was calculated by block, then averaged for each task condition at each measurement occasion. We specified a linear mixed effects model with MSSD predicted by baseline age, baseline amyloid, task load, and time (years between waves), with a random intercept and slope of task load for each participant's wave and a random slope of time for each participant. Results indicated significant 3-way and 4-way interactions. BOLD MSSD (averaged across task load) increased with age for individuals with average-to-high amyloid, and this effect became stronger over time in frontoparietal, occipital, and inferior temporal regions. Conversely, lower-amyloid individuals showed the opposite pattern: BOLDvar decreased with age and over time. The 4-way interaction revealed that modulation of BOLDvar (increasing MSSD with increasing task load) increased with age for higher-amyloid participants and strengthened with time in middle-cingulate and pars-opercular regions. These findings suggest that BOLDvar decreases with age for those with minimal amyloid, whereas aging with amyloid is associated with increased variability of the BOLD response, and that both patterns hold within-person over 4 years of aging, providing further support that increased BOLD variability is an adverse outcome.
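
The MSSD index itself is simple to state. The sketch below (hypothetical random data, not the study's pipeline) computes it per block and then averages within a task condition, mirroring the procedure described above.

```python
import numpy as np

def mssd(ts):
    """Mean squared successive difference: mean of (x[t+1] - x[t])**2.
    A variability index that, unlike variance, is insensitive to slow
    drift in the mean of the time series."""
    d = np.diff(np.asarray(ts, dtype=float))
    return float(np.mean(d ** 2))

rng = np.random.default_rng(0)
# Hypothetical BOLD time courses for three blocks of one n-back load level
blocks = [rng.standard_normal(30) for _ in range(3)]
# As in the abstract: MSSD per block, then averaged within the condition
condition_mssd = float(np.mean([mssd(b) for b in blocks]))
```

The per-condition values would then enter a mixed effects model as the dependent variable.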


Neural Recycling of Reasoning Networks by STEM Domains: Evidence from Studies of Math, Engineering and Programming

Sunday, April 24, 2022, 1:30 PM - 3:30 PM (PT), Seacliff Room

Chair: Yun-Fei Liu, Johns Hopkins University

Speakers: Martin Monti, David Kraemer, Marie Amalric, Yun-Fei Liu

Formal logical reasoning, mathematics, programming, mechanical engineering, and other aspects of STEM education are examples of culturally-derived cognitive domains. Proficiency in these domains requires many years of education and varies widely across individuals within the same culture. How does the human brain enable the development of STEM expertise? The current symposium brings together recent neuroimaging studies across different STEM domains: formal logical reasoning, mechanical engineering, mathematics, and programming. Together these studies enable identifying common neural mechanisms as well as differences across STEM fields. Expertise in each STEM domain can be highly specific: a mathematician may not be a proficient programmer, and vice versa. Yet all these cultural domains make use of language-like abstract symbols (e.g., X, beta), their relations, and manipulations. Neurally, a number of common principles emerge across domains. First, despite relying on language-like symbols, reasoning in STEM domains does not depend on fronto-temporal language systems. Rather, STEM domains recruit a fronto-parietal reasoning network. An analogous fronto-parietal system is engaged by formal logic, mathematics, and programming. Critically, patterns of activity within fronto-parietal systems differentiate among algorithms in programming and among categories of mechanical problems within engineering. Activity patterns in fronto-parietal networks are also sensitive to individual differences in engineering expertise. Together these studies provide insight into the neural basis of language-independent reasoning. This research also informs theories of neural cultural recycling by illuminating how the human brain accommodates cultural specialization.

TALK 1: The Dissociation Between the Neural Bases of Natural Language and Deductive Inference

Martin Monti, University of California, Los Angeles

A central aspect of human abstract thought is the ability to combine finite symbols, in hierarchically structured sequences, to obtain potentially infinite meanings. While this feature is most saliently present in the context of human natural language, it also characterizes several other aspects of human cognition including logic reasoning, algebraic cognition, and problem solving, among others. Whether the latter domains of human cognition derive their syntax-like characteristic from the neuro-cognitive operations of language remains a controversial topic. This presentation will focus on the issue by comparing the processing and manipulation of symbols according to the rules of natural language with the processing and manipulation of symbols according to the rules of deductive reasoning (and algebra). In particular, leveraging the methods of neuroimaging and neuromodulation, I will present data documenting a dissociation – in the mature human brain – between the neural basis supporting structure-dependent operations of natural language and those of deductive inference.

TALK 2: A Frontoparietal Network Underlies Deductive Inference and Conceptual Understanding in Physics

David Kraemer, Dartmouth College

How is learned knowledge represented in the brain, and how do representations differ based on different information content? In two lines of research, we used multivariate fMRI analysis (e.g., RSA) to decode the representations of learned knowledge structures. In the first line of research, we examined the neural basis of knowledge constructed via deductive reasoning. Through transitive inference across several syllogisms, participants constructed a hierarchical representation of stimuli (e.g., Robert is taller than James, who is taller than Joseph, ...). Items were only presented two at a time, and the relationships were only described - never directly observed. Across three content types (relative heights of people, relative values of artwork, and a totally abstract comparison), multivariate activity in the right intraparietal sulcus and left inferior frontal gyrus was found to represent inferred knowledge structures. In the second line of research, we investigated conceptual knowledge of physics learned by mechanical engineering students over the course of their education. Comparing advanced engineering students to novice controls, a bilateral frontoparietal network showed increased multivariate intersubject correlation during a task that involved viewing real-world structures and considering the relevant physical forces. Expert-level conceptual categories were successfully decoded from these regions in engineering students (but not controls) in both group-level and individual-level analyses. Moreover, the individual-subject analyses demonstrated that this neural information correlated strongly with both task performance during scanning and traditional tests of relevant physics concepts outside of the scanner, indicating a neural signature of the strength of conceptual understanding for these concepts.
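
The RSA logic mentioned above can be caricatured in a few lines (toy simulated patterns, not the study's data or analysis code): compute a neural representational dissimilarity matrix (RDM) from condition-wise voxel patterns, then correlate its upper triangle with a model RDM derived from the hypothesized structure, such as distance along an inferred hierarchy.

```python
import numpy as np

def rdm(patterns):
    """Neural RDM: 1 - Pearson correlation between condition patterns
    (rows = conditions, columns = voxels)."""
    return 1 - np.corrcoef(patterns)

def rsa_score(patterns, model_rdm):
    """Correlate the upper triangles of the neural and model RDMs."""
    iu = np.triu_indices_from(model_rdm, k=1)
    return float(np.corrcoef(rdm(patterns)[iu], model_rdm[iu])[0, 1])

# Model RDM: distance between 4 items along an inferred hierarchy
pos = np.arange(4)
model = np.abs(pos[:, None] - pos[None, :]).astype(float)

# Simulated voxel patterns drifting as a random walk across conditions,
# so neighboring hierarchy positions have more similar patterns (toy assumption)
rng = np.random.default_rng(0)
patterns = np.cumsum(rng.standard_normal((4, 200)), axis=0)

score = rsa_score(patterns, model)  # positive when neural similarity tracks the hierarchy
```

In practice the model RDM would encode the inferred relational structure (heights, values, or abstract order), and the score would be computed per region or searchlight.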

TALK 3: Neural Correlates of Advanced Mathematical Activity

Marie Amalric, Harvard University

How does the human brain conceive, encode, and manipulate abstract mathematical concepts? Because they are designated by words and represented by symbols, it was first hypothesized that advanced math knowledge emerges through language. Language and mathematics have long been thought to rely on similar rule-abstraction processes and were hypothesized to recruit similar brain structures. Yet during the past decade, neuropsychology and neuroimaging studies of basic arithmetic have started to show that the neural substrates of math differ from those of language processing. In this talk, I will present four fMRI studies that extend these results to the case of advanced math processing. In three fMRI studies, we asked whether mathematicians use language areas when they do mathematics, and whether math representations rely on visual experience. Professional mathematicians - including three blind mathematicians - were asked to evaluate the truth-value of spoken statements, with or without math content. We showed that even when formulated as sentences, all math statements, regardless of their difficulty or domain, and regardless of the participants’ visual experience, activated a reproducible set of brain regions that completely dissociated from areas related to language and general-knowledge semantics, but rather coincided with sites activated by simple arithmetic. In a fourth fMRI study, we reproduced these findings in a population of freshman math majors, but only when they reasoned about concepts they fully understood. These findings suggest that once fully mastered, math concepts are encoded in the brain in an abstract manner, independent of both language and sensory modalities.

TALK 4: Neural Recycling of Logical Reasoning Network for Programming Code Comprehension

Yun-Fei Liu, Johns Hopkins University

Programming is one of the nascent “culturally-invented cognitive domains” which human brains have not evolved to support. While the neural bases of some other evolutionarily recent human behaviors such as reading and symbolic math are extensively studied, how the brain understands programming code is still largely unknown. Our research suggests that code comprehension is another intriguing case of neural recycling, where culturally-invented cognitive domains “recycle” neural mechanisms for evolutionarily ancient cognitive abilities. In our fMRI study where expert programmers performed code comprehension and memory control tasks, we identified a left-lateralized fronto-parietal network engaged during code comprehension. MVPA decoding showed that the algorithms involved in the code ("for" loops or “if” conditionals) were distinguishable in this network. In a separate localizer scan, the same participants performed language comprehension and symbolic logical reasoning tasks. Comparison between the two scans showed that the code-responsive network extensively overlapped with the network for formal logical reasoning, suggesting that code comprehension may have recycled the logic network. Interestingly, the laterality of language and code comprehension co-varied across individuals. This observation points to a potential role of language in the recycling of the network for logical reasoning during the acquisition of programming expertise. Preliminary data from our new fMRI study with programming novices suggest that this fronto-parietal network is involved in the comprehension of algorithmic descriptions (pseudocode) independent of programming languages. In this talk, the implication of these results and future plans to connect the studies of experts and novices will be discussed.


Cognitive Neuroscience of Volition

Monday, April 25, 2022, 10:00 AM - 12:00 PM (PT), Grand Ballroom A

Chair: Gabriel Kreiman, Harvard

Speakers: Uri Maoz, Rosa Cao, Adina Roskies, Aaron Schurger

At the very heart of who we are as humans is a strong sense of the freedom to make decisions and actions and a sense of ownership of those decisions and actions. The critical role of volition in daily life  is exemplified by the devastating effects of conditions that impair volition, including drug addiction, severe depression, and Parkinson's disease, among others. The nature of volition has been debated for millennia, but it has only recently become the subject of scientific investigation. This symposium will present an overview of current discussions and state-of-the-art findings pertaining to the neural mechanisms that orchestrate volitional decisions and to the implications of these findings for society. In particular, Dr. Schurger will review the different formal models of spontaneous voluntary movement that have been developed to account for the attendant behavioral and neural phenomena. Dr. Cao will examine the slippery relationship between models of volitional action described in psychological language, and the neural data taken to support or undermine them. Dr. Maoz will discuss empirical and modeling work that tries to shift the long-standing focus in the field from arbitrary to deliberate decisions. Dr. Roskies will explore the implications of recent work in neuroscience for theories of free will.

TALK 1: Models of Uncued Action Initiation

Aaron Schurger, Chapman University

The modern era of volition research arguably began with the self-paced movement-initiation task of Kornhuber and Deecke (1965) and their discovery of the readiness potential (RP). Empirical research on spontaneous voluntary movement has progressed considerably since that time, but largely without any formal theoretical or computational models. More recently, researchers have begun to explore formal models that can account for the behavioral and neural phenomena associated with spontaneous voluntary movement, and these have been met with some success, but have also been challenged. In this talk I will compare and contrast the most prominent models of spontaneous voluntary movement, and the attendant pre-movement buildup of neural activity, including the stochastic accumulator model, the slow-cortical-potential sampling hypothesis, the motor-imagery model, and the linear ballistic accumulator. These more formal models also help bring to the surface the assumptions implicit in the classic account of the RP, which now can be more precisely defined alongside these other models so that they can be pitted against each other experimentally. The field is currently at a crossroads given that the interpretation of a significant body of prior research rests on the classic interpretation of the RP, which went unchallenged for decades. I conclude with a discussion of strategies for testing these different models in an effort to understand what the RP really means.
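
The stochastic accumulator account mentioned above can be sketched in a few lines (parameter values are illustrative, not fitted to data): autocorrelated noise, nudged by a weak constant urgency and pulled back by leak, eventually happens to cross a threshold, and the crossing time is the "spontaneous" movement onset.

```python
import numpy as np

def accumulator_onset(drift=0.3, leak=0.5, noise=0.3, threshold=0.8,
                      dt=0.001, max_t=60.0, seed=0):
    """Leaky stochastic accumulator (illustrative parameters): Euler
    simulation of dx = (drift - leak*x) dt + noise dW. Returns the
    threshold-crossing time, i.e., the simulated waiting time."""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while t < max_t:
        x += (drift - leak * x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t          # spontaneous movement onset for this trial
    return None               # no crossing within the trial window

# Distribution of waiting times across simulated trials
onsets = [accumulator_onset(seed=s) for s in range(20)]
```

On this account, the readiness potential is the average of such pre-crossing trajectories, time-locked to the crossing; competing models differ in what the pre-movement buildup is taken to reflect.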

TALK 2: What Constrains Mappings Between Models of Volition and Neural Data?

Rosa Cao, Stanford University

For neuroscience to illuminate our understanding of how we choose to act, there are (at least) two kinds of questions we might want to ask. The most obvious is: How well is a particular psychological model of volitional decision making and action validated by empirical data, given a particular mapping of its psychological posits onto neural processes and constituents? But the question I'll focus on is: How should we map psychological terms from a model onto neural processes and constituents (and behavior) in the first place? This second question is crucial to how we operationalize any particular questions in the cognitive neuroscience of volition. How do we decide between two potential mappings? When, if ever, should we accept that no good mapping exists (and thus that we should revise or discard our psychological model)? In general, many interpretations are available, and the question of which interpretation is to be preferred depends not just on local empirical facts, but on its overall fit with the rest of our commitments, both empirical and theoretical.

TALK 3: Arbitrary and Deliberate Decisions in the Cognitive Neuroscience of Volition

Uri Maoz, Chapman University / UCLA / Caltech

Healthy adult humans typically experience their intentions as causal for their actions. However, results from the seminal Libet experiments and follow-up studies have convinced many that actions may be brought about as a result of unconscious processes, leaving the role of consciousness in decision making and action formation in doubt. However, those experiments focused on arbitrary decisions, bereft of reasons or purpose, that were assumed to generalize to deliberate decisions. Focusing on such deliberate decisions, we show that they can be predicted online and in real time from intracortical recordings in humans. We also show evidence that deliberate decisions might combine the values associated with the decision alternatives with arbitrary bias activity. Finally, we show that the results of the Libet experiment do not hold for at least some deliberate decisions, casting doubt on the generalizability of results on arbitrary decisions to deliberate ones.

TALK 4: Implications of the Neuroscience of Volition for the Free Will Debate

Adina Roskies, Dartmouth College

Some of the interest in neuroscientific studies of volition stems from the relevance of volition for the philosophical debate on free will. To date, most neuroscientific work has been leveraged to allegedly show that we lack free will, but most of that literature is both ignorant of important philosophical considerations and problematic in its interpretation of the empirical data. To begin with, I lay out various ways in which volition can be operationalized in neuroscientific experiments. I then briefly sketch the main philosophical positions on free will, and finally I assess the way in which current work on volition bears upon the free will debate. My contention is that, to date, no neuroscientific work has provided compelling evidence against plausible accounts of free will. I close with some discussion of how future neuroscientific studies may bear upon the philosophical problem.


New Perspectives on the Interplay Between Memory and Cognitive Control

Monday, April 25, 2022, 10:00 AM - 12:00 PM (PT), Grand Ballroom B/C

Chair: Chunyue Teng, University of Wisconsin-Madison

Speakers: Roshan Cools, Jiefeng Jiang, Aspen Yoo, Chunyue Teng

Cognitive control refers to the ability to adaptively guide information processing and behavior based on current context and task demands. Traditionally, the relationship between cognitive control and various memory functions (including working memory and long-term memory) has been viewed as one-directional, with cognitive control recruited as a task becomes more difficult (e.g., due to increasing memory load, or increasing levels of conflict). Here we will highlight growing evidence for a more interactive and integrated perspective that emphasizes how cognitive control is shaped by learning and memory, and the interplay between memory and cognitive control. To ground our symposium in the neurobiology of cognitive control, Roshan Cools will address the roles of striatal dopamine in modulating cognitive control by learning and motivation. Jiefeng Jiang will describe new behavioral evidence demonstrating how shared task representations modulate integration and interference of memory representations of tasks. Aspen Yoo will describe a behavioral and computational account of how executive functions such as working memory contribute to and interact with reinforcement learning. Chunyue Teng will present evidence from computational modeling of behavior and EEG for how the history of recent events is used to update the level of control that is recruited to mediate the interaction between visual working memory and concurrent visual perception. Collectively, these studies provide new computational and neurocognitive evidence to advance our understanding of how categorically different cognitive functions (e.g., cognitive control, working memory, long-term memory, and reinforcement learning) interact to give rise to adaptive behavior in humans.

TALK 1: Cognitive Control is Shaped by Striatal Dopamine-dependent Changes in Learning and Motivation

Roshan Cools, Radboud University Nijmegen

Dopaminergic drugs such as methylphenidate are widely used for their cognitive enhancing effects, but their mechanisms of action are unclear. We combined pharmacological fMRI and [18F]-DOPA PET to test the hypothesis that cognitive control is shaped by striatal dopamine-dependent changes in prediction error signals during learning, and in value during choice. Specifically, I will review results from a large pharmaco-imaging study with 100 healthy volunteers that allowed us to test two specific hypotheses about striatal dopamine's role in cognitive control. A first experiment demonstrated that striatal dopamine boosts cognitive control by selectively gating attention to currently task-relevant representations in posterior cortex depending on striatal prediction error signals during learning. A second experiment demonstrated that striatal dopamine boosts cognitive control by changing the weight on the benefits versus costs of effort during choice. Together these data firmly establish a key role of striatal dopamine in the shaping of cognitive control by learning and motivation.

TALK 2: Interference and Integration in Hierarchical Task Learning

Jiefeng Jiang, University of Iowa

A key feature of human task learning is shared task representation: simple, subordinate tasks can be learned and then shared by multiple complex, superordinate tasks as building blocks to facilitate task learning. An important yet unanswered question is how superordinate tasks sharing the same subordinate task affect the learning and memory of each other. Leveraging theories of associative memory, we hypothesize that shared subordinate tasks can cause both interference and facilitation between superordinate tasks. These hypotheses are tested using a novel experimental task that trains participants to perform superordinate tasks consisting of shared, trained subordinate tasks. Across three experiments, we demonstrate that sharing a subordinate task can (1) impair the memory of previously learned superordinate tasks and (2) integrate learned superordinate tasks to facilitate new superordinate task learning without direct experience. These findings shed light on the organizational principles of task knowledge and their consequences for task learning.

TALK 3: Contributions of Human Executive Functions to Reinforcement Learning

Aspen Yoo, University of California, Berkeley

The study of the neural processes that support reinforcement learning has been greatly successful. It has characterized a simple brain network (including cortico-basal ganglia loops and dopaminergic signaling) that enables animals to learn to make valuable choices in different states, using reward or punishment outcomes. However, increasing evidence shows that the story is more complex in humans, where additional processes also contribute importantly to learning. In particular, human executive functions, such as working memory and attention, contribute to several aspects of what is typically considered the single process of reinforcement learning. In this talk, I will show a couple of examples of such contributions. Specifically, I will show that reinforcement learning and working memory processes contribute differently to learning behavior, depending on the type of information available in a stimulus. I will also discuss a study developing a more realistic and dynamic model of how working memory representations may be updated during a learning task. Our results have important consequences for our understanding of human learning.
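
One common way to formalize the RL-plus-working-memory idea (a generic sketch with made-up parameters, not the specific models from this talk) is a mixture policy: a slow incremental Q-learner combined with a fast, one-shot, decaying working-memory store.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stim, n_act = 3, 3
alpha, beta = 0.1, 8.0          # RL learning rate, softmax inverse temperature
wm_weight, wm_decay = 0.8, 0.1  # mixture weight and WM forgetting (illustrative)

Q = np.full((n_stim, n_act), 1.0 / n_act)  # slow incremental RL values
W = np.full((n_stim, n_act), 1.0 / n_act)  # fast one-shot WM store

def softmax(v):
    e = np.exp(beta * (v - v.max()))
    return e / e.sum()

correct = rng.permutation(n_act)           # hypothetical stimulus-action mapping
for _ in range(200):
    s = rng.integers(n_stim)
    # Choice policy mixes the WM-based and RL-based action probabilities
    p = wm_weight * softmax(W[s]) + (1 - wm_weight) * softmax(Q[s])
    a = rng.choice(n_act, p=p)
    r = 1.0 if a == correct[s] else 0.0
    Q[s, a] += alpha * (r - Q[s, a])       # incremental RL update
    W[s] = 1.0 / n_act                     # WM: overwrite with last outcome...
    W[s, a] = r
    W += wm_decay * (1.0 / n_act - W)      # ...and decay toward uniform
```

Because the WM store learns in one shot but forgets, while Q-values accumulate slowly but persist, the two components make dissociable behavioral predictions (e.g., set-size and delay effects), which is what lets such models tease the processes apart.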

TALK 4: Flexible Control of the Interaction Between Working Memory and Perception

Chunyue Teng, University of Wisconsin-Madison

Everyday behavior requires retention of goal representations in working memory (WM) while concurrently processing new visual input, and previous work has established that the contents of WM and real-time perception can influence each other. Less well understood, however, is how these interactions are controlled. Here we present results from a series of behavioral and EEG experiments employing a dual-task paradigm that manipulated the congruity between the WM sample item and an interpolated discriminandum from an independent discrimination task. Descriptively, behavioral results replicated the classic Gratton effect and produced a novel congruity-dependent pattern of serial dependence between discrimination on trial n and WM on trial n+1. Analytically, trial-by-trial analysis with a reinforcement learning-based Flexible Control Model (FCM; Jiang et al., 2014, 2015) provided quantitative evidence for the recruitment of two distinct forms of control: phasic reactive control in the form of congruity-sensitive prediction errors, and tonic proactive control that adjusted with the recent history of WM-perception congruity. FCM estimates of proactive control accounted for variation in the Gratton effect and in two physiological correlates of cognitive control: pupil diameter and EEG frontal midline theta power. FCM estimates of reactive control predicted whether the serial dependence of WM recall on the previous trial’s discriminandum was attractive (following congruence) or repulsive (following incongruence), thereby providing a new source of evidence for a “hijacked-adaptation” account of the cognitive control of information held in WM. The brain may recruit common strategies for visual cognition, with no regard for such labels as “perception,” “attention,” and “working memory.”


Sculpting Active Vision Through Oscillations-Guided Actions

Monday, April 25, 2022, 10:00 AM - 12:00 PM (PT), Bayview Room

Chair: Yali Pan, University of Birmingham

Speakers: Yali Pan, Tzvetan Popov, Alice Tomassini, Marcin Leszczynski

During active visual sensing, humans typically make 3-5 saccades per second to shift attention and structure the in-flow of information into the visual system. Accumulating evidence suggests that this active visual processing is fundamentally different from traditional passive viewing, in which the gaze is held constant and no eye movements are allowed. The neural mechanisms underlying active visual processing are not yet fully understood. How are the visual and (oculo)motor systems coordinated during natural vision? Studies of both human and non-human primates indicate that eye movements, especially saccades, increase neural excitability along the visual hierarchy by synchronizing ongoing low-frequency oscillations at the theta/alpha frequency. In turn, synchronized neural activity sculpts the visual system towards an optimal phase for upcoming input. Motor action control also sculpts the visual system in a rhythmic way. Thus, mediated by low-frequency oscillations, the (oculo)motor system is equipped with a temporal code that fosters the proactive sampling of visual information. In addition, eye movements can also modulate neural excitability in an extended sensory network beyond the visual cortex. In this symposium, the speakers will discuss their perspectives on active information processing through eye movements. Our target audience is researchers interested in the relationship between motor action and ongoing brain activity as prerequisites for visual information processing under naturalistic conditions such as reading and spatial exploration.

TALK 1: Saccades are Locked to the Phase of Alpha Activity During Natural Reading

Yali Pan, University of Birmingham

We saccade three to five times per second when reading natural text. However, little is known about how we coordinate the ocular and visual systems during such rapid processing. Here, an eye tracker and MEG were used to simultaneously record eye movements and brain activity during a free reading task. Every sentence was embedded with a target word of either high or low lexical frequency. During natural reading, alpha activity dominated the power spectra. Our key finding was that the alpha phase was more concentrated across trials prior to saccades towards low- compared with high-lexical-frequency targets. This phase consistency indicates that saccade onsets are locked to the phase of alpha oscillations, particularly for low-frequency words. Finally, source modelling localized the saccade-related alpha phase alignment to the right visual-motor area (BA7). Our findings suggest that the alpha phase acts to coordinate the ocular and visual systems during natural reading; this coordination becomes more pronounced for demanding words.
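Phase concentration across trials of the kind reported here is commonly quantified with inter-trial phase coherence (the mean resultant length of the per-trial phase angles). The following is a minimal illustrative sketch of that measure, not the study's analysis pipeline; the variable names are invented for the example:

```python
import numpy as np

def itc(phases):
    """Inter-trial phase coherence: mean resultant length of phase angles
    (radians), one angle per trial. 1 = perfect phase locking, 0 = uniform."""
    return np.abs(np.mean(np.exp(1j * np.array(phases))))

rng = np.random.default_rng(0)
locked = rng.normal(loc=0.0, scale=0.3, size=200)   # phases clustered near 0 rad
scattered = rng.uniform(-np.pi, np.pi, size=200)    # uniformly scattered phases

print(itc(locked))     # near 1: saccades locked to a common alpha phase
print(itc(scattered))  # near 0: no phase locking
```

Comparing this statistic between conditions (e.g., before saccades to low- versus high-frequency targets) is one standard way to test for condition-dependent phase locking.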

TALK 2: How do Alpha Oscillations Move Your Eyes About?

Tzvetan Popov, University of Zurich

Solutions to core survival requirements such as attention are scaled, preserved, and present in nearly all creatures equipped with the ability to move. In this talk, a fundamental principle of neuronal rhythms (alpha oscillations) in monitoring sensory action across phyla is discussed (honey bees, non-human primates, and humans). The brain's active sensing of the surrounding environment entails clustering of ocular/antennal movements towards an object or location. This active sensing behavior is monitored by the phase and amplitude of the spontaneous (alpha) rhythm. Independent of testing conditions (e.g., light or full darkness) and cognitive load (e.g., rest, spatial attention, working memory), space is inferred from the movement direction of the brain's sensors, manifesting in location-specific place topographies. These observations are discussed in light of the conjecture that the dominant rhythm of the brain facilitates and monitors sensorimotor output as a prerequisite for a proactive processing chain in cognition that begins with action.

TALK 3: Visual Processing is Coupled to Cortico-Motor Control

Alice Tomassini, Italian Institute of Technology

Effective behavior relies on a dynamic interplay between motor and sensory functions. Whereas this interplay has been variously modelled at the computational level, little is known about how it is actually realized at the neural level. A growing stream of research indicates that the sampling of sensory data may routinely operate in a discontinuous, rhythmic way, and that such rhythmicity can be locked to motor behavior (e.g., eye movements). I will present behavioral and neurophysiological data suggesting that visual information sampling is not just temporally locked to overt movement dynamics, but is synchronized to internal, covert motor dynamics. We asked human participants to perform a continuous isometric contraction while detecting unrelated and unpredictable near-threshold visual stimuli. The motor output (force/EMG) shows zero-lag coherence with brain activity (recorded via EEG) in the beta band, as reported in previous work. In contrast, cortical rhythms in the alpha band systematically precede the motor output by almost 200 milliseconds. Importantly, visual performance is facilitated when cortico-motor alpha (not beta) synchronization is enhanced immediately before stimulus onset. Further, in a follow-up experiment, we show that alpha-band cortico-motor synchronization has functional consequences for online visuomotor control. These findings demonstrate an ongoing coupling between visual processing and motor control, suggesting the operation of an internal, alpha-cycling visuomotor loop.

TALK 4: Active Vision Modulates Neural Excitability in Human Auditory and Speech Processing Systems

Marcin Leszczynski, Columbia University, Nathan Kline Institute

Sensing is an active process. Humans and other primates use saccadic eye movements to relocate the fovea and sample different bits of information multiple times per second. Non-retinal signals linked to saccades reset neuronal oscillations in visual areas, shifting neuron ensembles to a common momentary high-excitability state just after fixation. Whether these effects extend outside the visual system is unknown. Here, we combined intracranial electroencephalography (iEEG) with eye tracking during natural free viewing to evaluate the possibility that saccades modulate neural activity in classical auditory and speech processing areas. Studying fixation-locked field potentials, we discovered two separable mechanisms operating in synchrony with the eye movements: one operating in low-level auditory areas and another in higher-order speech processing areas. We also found evidence of dynamic multiplexed connectivity between the auditory system and regions involved in the generation of saccades (i.e., the Frontal Eye Fields). Our results support an active sensing model in which saccades are tied to dynamic modulation of neural excitability across a network that extends into the auditory system.


Insights into Human Cognition from Precision fMRI of Individuals

Monday, April 25, 2022, 10:00 AM - 12:00 PM (PT), Seacliff Room

Co-Chairs: Caterina Gratton, Northwestern University and Rodrigo Braga, Northwestern University

Speakers: Ev Fedorenko, Ben Deen, Emily Jacobs, Timothy Laumann

Individual-focused approaches to human brain mapping using functional MRI have recently risen in popularity. By collecting large amounts of data from each participant, it has been possible to produce reliable maps of brain organization at the level of the individual. This has effectively represented a leap forward in the resolution of human cognitive neuroimaging data, by removing the need for group-averaging, and has afforded new insights into the organization of the human brain. This symposium highlights recent discoveries arising from this individual-focused 'Precision fMRI' approach, featuring four innovative researchers who will showcase how Precision fMRI can (i) reveal fine-scale differences in regional specialization that robustly replicate across individuals, (ii) reveal the breathtaking extent of network plasticity that can arise in response to brain damage, (iii) allow tracking of physiological influences on brain organization occurring across days and weeks, and (iv) reveal new principles of functional specialization within the association cortex regions comprising the canonical default network. These presentations will demonstrate how Precision fMRI has led to significant advances in our understanding of the neural bases of cognition.

TALK 1: The Power of Individual-Level Analyses in fMRI

Ev Fedorenko, Massachusetts Institute of Technology

In the last decade, reliance on individual-subject analyses in fMRI research has been on the rise. I will discuss three advantages of this approach over the traditional, group-averaging approach. First, I will talk about the use of robust and validated functional ‘localizers’ as a way to establish a cumulative research enterprise within and across domains and species. Second, I will discuss the critical importance of individual-level analyses for discovering functional dissociations, and characterizing both domain-specific and domain-general neural circuits. And third, I will challenge recent claims about poor reliability of individual-level neural markers and present evidence of good reliability of measures extracted from two high-level cognitive networks. I will conclude with a discussion of ‘probabilistic functional atlases’—created by overlaying large numbers of individual activation maps—as a way to bridge two fundamentally different and currently disjoint analytic traditions in functional brain imaging (traditional group-level analyses and individual-level analyses) by providing common reference frames.

TALK 2: Parallel Systems for Social and Spatial Reasoning within the Cortical Apex

Ben Deen, The Rockefeller University

The default mode or apex network, positioned atop a rough hierarchy of anatomical and functional connectivity in primates, has been implicated in a range of cognitive and mnemonic functions: episodic and semantic memory, social cognition, spatial navigation, and reward-based decision making. These results, from separate neuroimaging studies, have been used to argue that the apex network acts as a domain-general hub, integrating information from across cortex. Here, we use deep imaging of individual human brains to refute this claim, arguing instead for parallel, domain-specific subnetworks within the cortical apex. We scanned N=10 participants using fMRI on a range of perceptual and cognitive tasks, across three sessions (7.5 hours per participant), with data acquisition and analysis tools optimized to robustly characterize functional organization in individuals. Tasks involved visual perception, semantic judgment, and episodic simulation of close familiar people and places, and everyday objects. We found evidence for parallel subnetworks within the apex network that respond specifically during conditions involving familiar people and places, across visual, semantic, and episodic tasks. Person- and place-preferring areas were systematically yoked across multiple cortical zones, including medial prefrontal cortex, medial parietal cortex, the superior frontal gyrus, and the temporo-parietal junction. Resting-state functional connectivity between zones was only observed among regions with similar category preferences. These results identify a novel principle of functional organization in human association cortex, and demonstrate that domain-specificity is pervasive within the cortical apex. They suggest that reasoning about people and places relies on separate but parallel cognitive and neural mechanisms.

TALK 3: Applying Dense-Sampling Methods to Reveal Dynamic Endocrine Modulation of the Nervous System

Emily Jacobs, University of California, Santa Barbara

Sex hormones are powerful neuromodulators of learning and memory. Foundational studies in rodents and nonhuman primates have established estrogen and progesterone’s influence in the central nervous system across a range of spatiotemporal scales. Yet, sex hormones’ influence on the structural and functional architecture of the human brain is largely unknown. A central feature of the mammalian endocrine system is that hormone secretion varies over time. Circadian, infradian, and circannual rhythms are essential for driving many physiological processes. Cognitive neuroscience studies rely heavily on cross-sectional designs that, by nature, cannot capture how the brain responds to these rhythmic endocrine changes. Here, I highlight findings from a series of dense-sampling neuroimaging studies from my lab designed to probe the dynamic interplay between the nervous and endocrine systems. Individuals underwent brain imaging and venipuncture every 12-24 hours for 30 consecutive days. Procedures were carried out under freely cycling conditions and again under a pharmacological regimen that chronically suppresses sex hormone production. First, from resting state fMRI we found evidence that transient increases in estrogen drive robust increases in functional connectivity across the brain. Time-lagged methods from dynamical systems analysis further revealed that transient changes in estrogen enhance within-network integration (i.e. global efficiency) in several large-scale brain networks, particularly Default Mode and Dorsal Attention Networks. Next, using high-resolution hippocampal subfield imaging we found that intrinsic hormone fluctuations and exogenous hormone manipulations can rapidly and dynamically shape medial temporal lobe morphology. Together, these findings suggest that neuroendocrine factors influence the brain over unprecedented timescales.

TALK 4: Brain Network Reorganization Following Bilateral Perinatal Strokes

Timothy Laumann, Washington University in St. Louis

It is well established that cortical injuries sustained early in life can be compensated for more quickly and more completely than those sustained later in life. However, the mechanisms underlying this plasticity are incompletely understood because of the difficulty of studying brain physiology in individuals. We will discuss an extreme example of an individual with largely typical development despite extensive bilateral perinatal infarcts. To understand this remarkable patient, we used a recently introduced approach for precision functional mapping (PFM) in individual subjects that leverages extensive and repeated MR acquisitions to define functional network organization. These data revealed the details of the functional network remapping that appears to facilitate this individual's motor and cognitive functioning in the setting of massive tissue loss. This case demonstrates the incredible capacity for plasticity in the developing brain. Further, it is a prime example of how measurement of precise functional localization provides essential insight into brain function beyond typical anatomical determinants.


Neurocomputational Mechanisms of Motivational Influences on Decision-Making

Tuesday, April 26, 2022, 1:30 PM - 3:30 PM (PT), Grand Ballroom A

Chair: Debbie Yee, Brown University

Speakers: Taraz Lee, Ian Ballard, Yuan Chang Leong, Debbie Yee

Motivation is often thought to shape adaptive decision-making by biasing actions towards rewards and away from punishment. Emerging evidence, however, points to a more nuanced role whereby motivation can enhance and impair different aspects of decision-making. Computational models provide a useful framework for dissociating decision processes into constituent cognitive components. In this symposium, we present novel research that combines computational models with other neuroscience tools to identify the specific impact of incentive motivation on latent cognitive processes underlying decision-making. Taraz Lee shows that reward improves decision-making by facilitating goal-directed responses, and not by inhibiting conflicting pre-potent responses. Using TMS, he provides causal evidence of DLPFC's role in mediating interference resolution when motivated by prospective reward. Ian Ballard will present work demonstrating that reward facilitation, over time, leads to cognitive inflexibility and high-level 'habits' that impair performance. Specifically, subjects are impaired at shifting behavior away from previously rewarded rules, even when reward contingencies are eliminated. Yuan Chang Leong will show how reward associations impair perceptual decisions by biasing sensory evidence accumulation in favor of desirable perceptual outcomes. This bias was associated with trial-by-trial fluctuations in amygdala activity. Debbie Yee will build upon this work by highlighting how the inclusion of both appetitive and aversive incentives helps identify distinct neural and computational mechanisms by which each type of motivation exerts its effect on cognitive control allocation. Together, these talks demonstrate the utility of combining behavioral, neural, and modeling tools in elucidating the cognitive processes and neural circuits underlying how motivation influences decision-making.

TALK 1: Reward Improves Performance Under Conflict by Enhancing the Preparation of Goal-Directed Actions

Taraz Lee, University of Michigan

Sometimes our goals conflict with our prepotent dispositions, leading to impaired performance. Prior research shows that people are better at overcoming automatic responses and producing goal-directed responses when motivated by the prospect of reward. However, it is not known whether reward leads to improved performance via the inhibition of automatic responses, the facilitation of goal-directed responses, or a mixture of both. This is due in part to the fact that standard experimental paradigms used to study cognitive control rely on free-response times. This allows participants to delay the initiation of their responses and avoid committing automatic errors, making it difficult to infer how reward affects underlying cognitive processes. We addressed this limitation by using conflict tasks in which participants were forced to respond at a predetermined time. We manipulated the time available for response preparation by varying the onset of the target stimulus and measured the participants' preparatory state at the time of the forced response. Finally, we used a probabilistic response preparation model that dissociates the preparation of habitual and goal-directed responses to infer the time required to prepare these different responses. Across two experiments, we found evidence that reward accelerated the preparation of goal-directed actions, while there was little evidence that reward further inhibited the preparation of automatic responses. Preliminary evidence using transcranial magnetic stimulation suggests that transient disruption of the dorsolateral prefrontal cortex abolishes this effect of reward on enhancing goal-directed response preparation. This approach allows novel insights into the specific cognitive control mechanism modulated by reward.

TALK 2: High-Level Habits in Goal-Directed Decision-Making

Ian Ballard, UC Berkeley

In habitual behavior, actions are gated by a corticostriatal circuit that encodes the reward history of stimulus-response contingencies. Goals are represented more anteriorly in cortex than actions, but share a similar corticostriatal circuit. Although goal-directed and habitual behavior are traditionally conceptualized as competing systems for behavioral control, we hypothesized that this shared architecture implies a shared reliance on reward learning. We designed an experiment to test whether reward reinforcement could give rise to habitual selection of abstract rules, just as it can give rise to habitual action selection. Subjects performed a rule-based perceptual discrimination task in which one rule was rewarded more often than the others. Following the reward phase, subjects performed the same task in extinction, without the possibility of reward. Analyses using drift-diffusion models showed that reward facilitates rule execution by increasing the drift rate, while also increasing the threshold for committing to a decision. Just as motor habits reduce behavioral flexibility, we found that this reward facilitation of rule selection comes at the cost of reduced cognitive flexibility: subjects were impaired at switching away from the rewarded rule. Strikingly, we found that both of these effects persisted into extinction. These results are consistent with a model in which reward reinforcement in the corticostriatal circuit biases rule selection. They suggest that the reward learning system sculpts how the goal-directed system chooses goals. This idea may help to explain the emergence of compulsive activation of maladaptive goals in OCD and addiction.
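The two drift-diffusion effects described here, reward raising the drift rate (faster evidence accumulation toward the rewarded rule) and raising the decision threshold (more evidence required before committing), can be illustrated with a toy diffusion-to-bound simulation. All parameter values are arbitrary assumptions for the illustration, not fitted estimates from this study:

```python
import numpy as np

def simulate_ddm(drift, threshold, noise=1.0, dt=0.005, max_t=5.0, rng=None):
    """Simulate one diffusion-to-bound trial; return (choice, reaction_time)."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return 1, t      # upper bound: e.g., the previously rewarded rule
        if x <= -threshold:
            return 0, t      # lower bound: the alternative rule
    return None, max_t       # no decision within the trial window

rng = np.random.default_rng(1)
baseline = [simulate_ddm(drift=0.5, threshold=1.0, rng=rng) for _ in range(500)]
rewarded = [simulate_ddm(drift=1.5, threshold=1.0, rng=rng) for _ in range(500)]

p_base = np.mean([c == 1 for c, _ in baseline])
p_rew = np.mean([c == 1 for c, _ in rewarded])
print(p_base, p_rew)  # higher drift -> more choices of the rewarded rule
```

Rerunning the simulation with a larger `threshold` lengthens reaction times without changing which bound is favored, which is how the model dissociates the two effects.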

TALK 3: Amygdala Encodes Motivational Enhancement of Sensory Evidence During Perceptual Decision-Making

Yuan Chang Leong, University of Chicago

For most real-world perceptual decisions, people are not neutral observers indifferent to different outcomes. Some outcomes are better than others, and people are motivated to see those outcomes over alternatives. Evidence from a number of recent studies suggests that perceptual decisions are biased towards motivationally desirable percepts. In this study, we combine behavioral experiments, neuroimaging, and computational modeling to investigate the neural mechanisms underlying motivational biases in perceptual decisions. Human participants were rewarded for correctly categorizing an ambiguous image into one of two categories while undergoing fMRI. On each trial, we used a financial bonus to motivate participants to see one category over another. The reward-maximizing strategy was to perform the categorization task accurately, but participants were biased towards categorizing the images as the category we motivated them to see. Heightened amygdala activity preceded motivation-consistent categorizations, and participants with higher amygdala activation exhibited stronger motivational biases in their perceptual reports. Trial-by-trial amygdala activity was associated with stronger enhancement of neural activity encoding desirable percepts in sensory cortices, suggesting that amygdala-dependent effects on perceptual decisions arose from biased sensory processing. Analyses using a drift-diffusion model provide converging evidence that trial-by-trial amygdala activity was associated with stronger motivational biases in the accumulation of sensory evidence. These findings highlight the role of the amygdala in biasing perceptual decision-making, and shed light on the neurocomputational mechanisms underlying the influence of motivation and reward on how people decide what they see.

TALK 4: Reward and Aversive Motivation Influence Distinct Effort Strategies for Cognitive Control Allocation

Debbie Yee, Brown University

Humans demonstrate an impressive ability to seamlessly adjust cognitive control allocation based on the potential positive outcomes they would obtain from task completion (e.g., a bonus earned), as well as the potential negative outcomes they would avoid if the task is not completed (e.g., job termination). Whereas prior research has almost exclusively focused on the impact of reward motivation on cognitive control, much less is known about the mechanisms through which aversive motivation interacts with cognitive control. Here, we combine behavioral experiments, computational modeling, and fMRI to investigate the underlying mechanisms of these interactions. Human participants performed a novel self-paced incentivized Stroop task through which they could earn a monetary bonus for accurate responses and were penalized with a monetary loss for error responses. Using the Expected Value of Control model, which predicts that humans adjust their control allocation to maximize expected reward rate while simultaneously minimizing the effort costs associated with exerting cognitive control, we observe that rewards and penalties facilitate reconfiguration between attention-related and inhibition-related strategies. Using fMRI, we observe that the ventral striatum encodes the salience of cue-related motivational incentives, whereas the anterior insula and inferior frontal gyrus encode penalty-related information. Motivated task performance was associated with activation in dorsal anterior cingulate cortex and lateral prefrontal cortex, suggesting that these regions play a key role in translating motivational value into adjustments of cognitive control. Together, these data reveal how the inclusion of both reward and penalty may elucidate the neural and computational mechanisms that underlie dissociable effort strategies for adaptive cognitive control.
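The core idea of the Expected Value of Control framework, choosing the control intensity that maximizes expected payoff minus the cost of exerting control, can be sketched in a few lines. The functional forms and parameters below are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def p_correct(control):
    """Toy assumption: accuracy improves with control intensity, saturating below 1."""
    return 1.0 - 0.5 * np.exp(-control)

def evc(control, reward, penalty, cost_scale=1.0):
    """Expected value of control: expected payoff minus a convex effort cost."""
    p = p_correct(control)
    expected_payoff = p * reward - (1.0 - p) * penalty
    effort_cost = cost_scale * control ** 2
    return expected_payoff - effort_cost

def optimal_control(reward, penalty):
    """Grid search for the control intensity that maximizes EVC."""
    grid = np.linspace(0.0, 3.0, 301)
    values = [evc(c, reward, penalty) for c in grid]
    return grid[int(np.argmax(values))]

low_stakes = optimal_control(reward=1.0, penalty=1.0)
high_stakes = optimal_control(reward=5.0, penalty=5.0)
print(low_stakes, high_stakes)  # higher stakes -> more control allocated
```

Varying `reward` and `penalty` independently in such a model is what lets incentive effects on control be dissociated, as the talk describes.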


The Flexible and Adaptive Nature of Emotional Memory

Tuesday, April 26, 2022, 1:30 PM - 3:30 PM (PT), Grand Ballroom B/C

Chair: Joseph Dunsmoor, University of Texas at Austin

Speakers: R. Alison Adcock, Lila Davachi, Mara Mather, Joseph Dunsmoor

Prior research has characterized how emotion can influence memory, showing enhanced memory for emotionally salient versus more mundane events. The majority of prior work, however, has focused on how emotion 'stamps' emotional experiences into long-term memory as they were experienced. Yet, emerging evidence has revealed that the mechanisms that underlie emotional memory are well positioned to support the flexible transformation of memories. Specifically, emotion targets neuromodulatory systems that dynamically support memory generalization, updating, and selectivity. In this symposium, we will synthesize recent work using a combination of novel behavioral paradigms, computational modelling, psychophysiology, and neuroimaging to understand how emotions during encoding, consolidation, and retrieval transform memory. The symposium brings together speakers with diverse perspectives on human memory, grounded in animal models, cognitive psychology, and systems consolidation, to provide novel insights into emotional memory. R. Alison Adcock will present a combination of behavioral work and computational modelling detailing how motivational goal states influence the organizational structure of memory. Lila Davachi will present behavioral and imaging results suggesting that memory reactivation mechanisms support the attribution of emotional tone to temporally adjacent neutral events. Mara Mather will present results from a randomized clinical trial examining how modulating heart rate variability affects brain networks involved in emotion regulation and affective memory biases. Joseph Dunsmoor will present a collection of recent findings using hybrid Pavlovian threat conditioning with an episodic component that reveals how emotional learning affects long-term memory for neutral information. Finally, Vishnu Murty will act as a discussant to foster synthesis across these discrete frameworks.

TALK 1: Episodic Memories as Valuation Summaries: Dissociable Mechanisms of Reward Modulation Reveal Temporal Precision

Alison Adcock, Duke University

Episodic memories are shaped by motivation via mechanisms that precede, overlap with, and follow the encoding of experience. In previous work, we have used reward cues before encoding to engage interrogative motivation and dopaminergic midbrain activation, priming the hippocampus and amplifying memories. We contrast these mechanisms of memory modulation with those that follow cues for imperatives to avoid punishment, which engage anticipatory amygdala activation and selectively prime medial temporal lobe cortex. Brain responses to reward cues themselves, however, are not monolithic, and go well beyond mesolimbic dopamine. In this talk, I will share our recent work showing that reward modulation of memory is temporally precise, with dissociable mechanisms and valuation depending on the timing of rewards during events. In Study 1, we show that on a timescale of seconds during reward anticipation, memory enhancements are precisely consistent with dynamics reported in animal work on midbrain dopamine physiology. Yet even within these few seconds, memories for objects overlapping early prediction errors versus late anticipation of uncertain events show distinct relationships with consolidation, implying separate mechanisms. In Study 2, we conceptualize episodic memories as valuation summaries. We show that the distribution of rewards influences their attribution, the summary valuation of the event in long-term memory, and stated preferences. Notably, reward distributions differentially impacted the consolidation of valuation and later preference, with potential implications for many decision-making paradigms. Dissecting these interwoven effects highlights new possible mechanisms and implications for the modulation of memory systems, and ultimately new insights into how we construct mental value topographies.


TALK 2: Memory Reactivation is a Core Mechanism Supporting the Learning of Emotional Attributions

Lila Davachi, Columbia University

Decades of work have shown that emotional events are more likely to be consolidated and later remembered with greater vividness and confidence. Scientific inquiry into these effects has typically adopted paradigms in which the mnemonic stimulus is itself emotional or information is encoded under conditions of anxiety, as in conditioning studies. However, it is also known that post-encoding manipulations can enhance memory for preceding experiences. How does our memory system know how to attribute emotional arousal to temporally adjacent experiences, either those coming before or after the arousal itself? In this talk, I will share our recent work highlighting how emotional experiences that occur either before or after ‘neutral’ experiences impact memory for those neutral events. This does not happen wholesale in humans, as had been suggested by foundational animal studies. Instead, our work suggests that emotional experiences modulate memory for temporally adjacent neutral events guided by the similarity between the emotional and the neutral events; in this way, our memory systems can adaptively support near but not far generalizations. Using fMRI, we have identified that both online and offline reactivation mechanisms contribute to these memory benefits, and we suggest that examination of memory reactivation effects will lend insight into how the brain forms these attributions and, in a broader sense, how it works to construct an accurate model of the world from related experiences that are separated in time.

TALK 3: Vagal Influences over Emotional Memory Biases

Mara Mather, University of Southern California

What makes one person remember more positive aspects of an event and another remember more negative aspects? The strong relationship between emotional well-being and vagus nerve function, as indexed by resting heart rate variability (HRV), suggests that the degree of positive/negative bias in memories may be shaped in part by vagal influences over emotion regulation networks in the brain. To test this possibility, we randomly assigned 106 participants to a multi-week intervention involving daily biofeedback that either increased HRV or had little effect on HRV during the sessions. We examined how the interventions affected brain activity during rest and during emotion regulation. In addition, after a couple of weeks of daily HRV biofeedback sessions, participants completed an emotional picture memory task involving encoding, recall, and recognition phases. In this healthy cohort, the two conditions did not differentially affect anxiety, depression, or mood. The Increase-HRV intervention increased brain oscillatory dynamics and functional connectivity in emotion-related resting-state networks, and enhanced down-regulation of activity in somatosensory brain regions during an emotion regulation task. The active control condition did not have these effects. Furthermore, participants in the Increase-HRV condition demonstrated more positive memory biases than participants assigned to the active control, as indicated by a higher proportion of positive images remembered during free recall and a higher rate of false alarms for positive images. These findings indicate that daily sessions inducing high heart rate variability can shape the functioning of brain networks involved in emotion regulation and emotional memory biases beyond the sessions themselves.

TALK 4: How Does Aversive Learning Affect the Strength and Organization of Neutral Memories Encoded Close in Time?

Joseph Dunsmoor, University of Texas at Austin

The ability to remember motivationally significant moments is adaptive, as it can help ensure we behave appropriately in similar situations in the future. Emerging work shows that humans also have stronger episodic memory for information encoded before and after a motivationally significant event. Our laboratory has developed a hybrid episodic-associative memory design to investigate long-term item and source memory for information encoded before, during, and after an emotional learning event. Behaviorally, we have identified a consistent pattern of enhanced item recognition memory for neutral category exemplars conceptually related to the exemplars used as conditioned stimuli during Pavlovian threat conditioning. We also found that selective retroactive enhancements in item memory were accompanied by a bias to misattribute items encoded before or after conditioning to the temporal context of fear conditioning. In addition, we find a consistent pattern of reduced item memory for information encoded following a short break after conditioning, while subjects are learning safety (i.e., fear extinction). This suggests that a short break separating fear learning from extinction learning serves as an event boundary, segmenting competing memories of threat and safety. Overlapping multivariate patterns of fMRI activity associated with the formation and retrieval of fear versus extinction memories reveal that temporal context helps organize these conceptually related but emotionally competing memories into discrete subregions of the medial prefrontal cortex and hippocampus. Finally, we have new evidence that rewarding extinction rescues this memory deficit, maintaining long-term memories associated with safety that counteract the retrieval of fear.


From Acoustics to Music or Speech: Their (Dis)Similar Perceptual Mechanisms

Tuesday, April 26, 2022, 1:30 PM - 3:30 PM (PT), Bayview Room

Chair: Andrew Chang, New York University

Speakers: Andrew Chang, Christina Vanden Bosch der Nederlanden, Guilhem Marion, Pauline Larrouy-Maestri

Language and music are two highly specialized forms of auditory communication that are deeply tied to the development of the human mind. Within each domain, we have a rich understanding of how low-level, time-varying acoustic signals are perceived and transformed into high-level percepts such as speech or song. However, the degree to which neural and auditory mechanisms overlap between music and language, if at all, is hotly debated. This symposium investigates (dis)similarities between music and language across different levels and types of material. First, Andrew Chang will discuss the acoustic interface between speech and music and how temporal parameters determine whether a sound is perceived as speech or music. Second, Christina Vanden Bosch der Nederlanden will show, from infancy to adulthood, which acoustic features are used to distinguish speech from music, and how these acoustic features modulate the neural tracking of speech and song. Third, Guilhem Marion will demonstrate how the Temporal Response Function (TRF) approach, which models neural predictions for speech and music in natural listening settings, reveals similar hierarchical representations of speech and music. Fourth, Pauline Larrouy-Maestri will present how the brain extracts and tracks high-level musical structures from low-level acoustics, and the parallels to speech perception. Finally, the speakers and the audience will jointly discuss these recent findings, ranging from acoustics (Talks 1-2) to more general structures (Talks 3-4), in both domains. In summary, this symposium features the latest approaches and evidence, offering important theoretical insights into speech, music, and cognitive neuroscience.

TALK 1: The Amplitude Modulation of Sounds Affects the Perceptual Judgement of Speech or Music

Andrew Chang, New York University

Despite our increasingly rich understanding of how humans process speech and music signals, surprisingly little is known about how the two are treated as different auditory categories in the first place. Here, we hypothesized that the amplitude modulations (AM) in sounds are critical, because speech and music signals feature distinct AM rates and temporal regularities: speech tends to be fast and irregular, whereas music tends to be slow and regular. Neuroimaging studies have also shown that speech and music divergently activate the non-primary auditory cortex, implying that the brain differentiates them according to low-level acoustic features. In our online experiments, we developed a signal processing pipeline to parametrically manipulate AM peak frequency and temporal regularity, generating a variety of AM envelopes that were then used to modulate broadband noise carriers; participants judged whether each excerpt sounded more like a “speech” or a “music” recording. Consistent with our hypotheses, fast AM peak frequencies and temporal irregularity biased judgments toward speech, whereas slow, regular AM biased them toward music. Interestingly, the effects on speech judgments were robust and prevalent across most participants, whereas music judgments were related to participants’ musical sophistication. To the best of our knowledge, this is the first study to investigate the initial stage at which a sound is perceived as either speech or music, and it identifies AM temporal features as critical low-level acoustic factors, with implications for the evolutionary origins of speech and music perception.
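The stimulus manipulation described above can be illustrated with a minimal sketch: broadband noise whose amplitude envelope has a controllable modulation rate and temporal regularity. The function name, parameter ranges, and jitter scheme below are illustrative assumptions, not the pipeline used in the study.

```python
import numpy as np

def am_noise(peak_hz, regularity, dur=4.0, fs=16000, seed=0):
    """Generate broadband noise with a parametric amplitude envelope.

    peak_hz    -- nominal AM rate (e.g. ~2 Hz 'music-like', ~5 Hz 'speech-like')
    regularity -- 1.0 = perfectly periodic envelope, 0.0 = strongly jittered
    All names and ranges are hypothetical, for illustration only.
    """
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    # Jitter the instantaneous modulation rate: less jitter = more regular.
    jitter = (1.0 - regularity) * rng.normal(0.0, 0.5, n)
    inst_hz = np.clip(peak_hz * (1.0 + jitter), 0.5, None)
    # Integrate instantaneous frequency to get the envelope phase.
    phase = 2.0 * np.pi * np.cumsum(inst_hz) / fs
    env = 0.5 * (1.0 + np.sin(phase))        # envelope in [0, 1]
    carrier = rng.normal(0.0, 1.0, n)        # broadband noise carrier
    return env * carrier
```

With a sketch like this, `peak_hz` and `regularity` can be swept parametrically to produce excerpts ranging from slow/regular ("music-like") to fast/irregular ("speech-like") AM.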

TALK 2: How do we Differentiate Speech from Song in Early Childhood?

Christina Vanden Bosch der Nederlanden, University of Toronto Mississauga

Music and language are unique forms of human communication that require distinct knowledge to extract the intended message. To apply domain-specific knowledge, listeners must be able to categorize speech and song. This seems especially challenging in early childhood, when infant-directed (ID) speech has more exaggerated musical features. Here we examined which acoustic features are important for differentiating speech and song during early childhood. In Experiment 1, we used an online survey to ask children (4-17 years, N=51) and adults (18-64 years, N=74) to describe the difference between speech and song. Both groups described differences in terms of acoustic features. Children endorsed tempo, pitch, rhythm, and register, while adults endorsed pitch, register, volume, melody, and rhythm as the top features discriminating speech from song. Following these studies, adult ratings of spoken and sung utterances confirmed that greater pitch stability (N=31) and rhythmic regularity (N=74) are associated with more song-like ratings. Experiment 2 examined how rhythm and pitch contour affected neural tracking of speech and song envelopes. Four-month-olds (N=32) heard speech and song, each in ID and monotone versions, while EEG was recorded. Infants neurally tracked ID speech better than song in delta/theta bands, but tracked monotone song better than speech. Exaggerated pitch characteristics increased tracking for ID speech, whereas hierarchical rhythmic structure increased tracking of monotone songs. Together these studies show that rhythm and pitch are important for differentiating speech and song and highlight their unique contributions to neural processing in infancy.

TALK 3: Investigating Cortical Correlates of High-Level features of Natural Speech and Music

Guilhem Marion, École Normale Supérieure, France

Recent advances in cognitive neuroscience demonstrate that the human brain encodes hierarchical structures of speech and music. Markers of these hierarchical structures can be extracted directly from the acoustics of the stimuli or computed from statistical models trained on a large corpus of stimuli. The neural correlates of low-level features of speech and music, such as the acoustic envelope, are known to be encoded in primary auditory cortex. In contrast, the encoding of higher-level features, such as phoneme expectations and semantic dissimilarity in speech, or note, rhythm, and chord expectations in music, has only recently received scrutiny. In this talk, after describing the techniques enabling such discoveries, we will present two recent studies using the same EEG paradigm, in which 21 participants passively listened to and actively imagined four Bach chorales while synchronized to a tactile metronome. The first study shows, using TRF regressions on acoustic and expectation signals computed from a statistical model of music, that music perception and imagery share topographies and temporal activations with inverted polarity. The second presents neural activation during unexpected silences embedded in the musical stream, demonstrating that this paradigm can be used to support predictive coding theory. By comparing our results to the literature on speech decoding, we argue that music and speech cognition share common mechanisms that could be explained by predictive coding theory.
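At its core, the TRF approach mentioned above is a time-lagged linear regression from a stimulus feature (an acoustic envelope or a model-derived expectation signal) to the EEG. The toy sketch below shows the idea for one channel; real analyses use dedicated toolboxes (e.g. the mTRF-Toolbox) with cross-validated regularization, and the function name and defaults here are illustrative assumptions.

```python
import numpy as np

def fit_trf(stim, eeg, fs, tmin=-0.1, tmax=0.4, ridge=1.0):
    """Estimate a Temporal Response Function via ridge regression.

    stim -- 1-D stimulus feature (e.g. envelope or note-expectation signal)
    eeg  -- 1-D EEG channel sampled at the same rate fs (Hz)
    Returns lag times (s) and the TRF weight at each lag.
    Toy sketch: circular lags, single channel, fixed ridge parameter.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Design matrix: one column per time-lagged copy of the stimulus.
    X = np.column_stack([np.roll(stim, lag) for lag in lags])
    # Closed-form ridge solution: (X'X + lambda*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w
```

Comparing TRFs fit to low-level acoustic features versus model-derived expectation signals is what allows the hierarchical comparison between speech and music described in the talk.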

TALK 4: Segmenting and Predicting "Phrases" in Continuous Auditory Streams: The Case of Music

Pauline Larrouy-Maestri, Max Planck Institute for Empirical Aesthetics

Parallels between music and speech remain a central topic in cognitive neuroscience. While previous investigations have typically focused on the acoustic characteristics of distinct materials, we concentrate on the complex processes involved in extracting high-level structures when listening to natural auditory sequences. It has been shown that the human brain extracts syllabic, phrasal, sentential, and formal (in poetry) structures online during speech listening. To address a parallel question in music, we recorded electroencephalograms (EEG) from twenty-nine participants passively listening to ten Bach chorales. Salient acoustic cues to the temporal boundaries of phrases were removed, so that listeners had to rely on harmonic information to parse the music streams into phrases. Importantly, the harmonic content itself was also manipulated, by locally or globally reversing chord progressions to disrupt phrasal structure, so that the “strength” of the phrases could be controlled and compared. We discovered a neural signature in the ultra-low frequency range around 0.1 Hz (temporal modulations of EEG power) that reliably tracks the musical phrasal structure. In a manner comparable to speech processing, the brain establishes complex musical structures online over long timescales (>5 seconds) and actively segments continuous music streams into units of ‘musical’ meaning. Our findings indicate that, while speech and music materials are usually easy to differentiate, the processes involved in high-level structure building in the two domains share key components, suggesting that music provides a relevant naturalistic window onto how humans process their auditory environment.


I Want to Break Free: Cognitive Neuroscience Unleashed from the Lab

Tuesday, April 26, 2022, 1:30 PM - 3:30 PM (PT), Seacliff Room

Chair: Alex Clarke, University of Cambridge

Speakers: David Donaldson, Victoria Nicholls, Michelle Greene, Morgan Barense

Research in cognitive neuroscience progresses towards ever more detailed and comprehensive ways to monitor brain responses. However, the dominant methods in cognitive neuroscience remain limited in one important respect: participants are necessarily removed from the natural environment, instead performing tasks with selected stimuli in a controlled setting. This means we have a poor understanding of how cognitive processes unfold in real-life situations, and little data to determine whether the tasks we use in controlled laboratory settings actually translate to natural cognition and behaviour in real-world settings. The twin scientific approach of controlled and naturalistic investigation is fundamental to disciplines such as primatology, but has yet to be fully embraced in cognitive neuroscience. This symposium will present research that embraces the complexities of the real world, studying cognition and behaviour outside the confines of the lab. Across the four talks, we will illustrate how a range of approaches, including eye-tracking, EEG, smartphone video, and augmented reality, can be used to study cognition beyond the lab. This kind of research requires innovations in equipment and experimental paradigms, along with new solutions for data analysis. Together, this symposium will showcase complementary ways in which cognitive neuroscientists can begin to address new questions we simply cannot answer inside a scanner.

TALK 1: Making the Case for Mobile Cognition - Taking EEG out of the Lab and into the Real World

David Donaldson, University of St Andrews

Recent technological innovations mean that neuroimaging techniques have emerged from the confines of the laboratory and are increasingly being deployed in real-world contexts. But what happens when you adopt a mobile cognition approach and take a method like EEG out of the lab and into the wild? Rather than measuring neural activity whilst participants perform tightly controlled experimental paradigms, mobile cognition studies aim to examine neural activity during more naturalistic behaviour. One clear benefit is a dramatic increase in the range of environments in which brain activity can be measured – allowing us to ask age-old questions in new ways. Here we illustrate the nature of the challenges and opportunities that arise, ranging from simple methodological issues (such as how to time-lock events that aren’t experimentally controlled) to more complex conceptual concerns (such as whether ecological validity can ever trump experimental control). Using data from recent mobile EEG studies of memory and attention we demonstrate how the mobile cognition approach can challenge traditional views. More importantly, perhaps, we use recent mobile EEG studies of sporting behaviour to illustrate how the mobile cognition approach can open new avenues of investigation – raising novel questions and encouraging a more applied, impact-focused orientation. We argue that moving out of the lab is not just feasible and informative, it represents an important step in the development of neuroimaging methods.

TALK 2: Object Recognition in the Real World: Adventures in Mobile EEG and Augmented Reality

Victoria Nicholls, University of Cambridge

Seeing and understanding objects during our everyday lives requires a visual and semantic analysis of those items, yet natural recognition occurs in the continuous spatiotemporal context within which we are immersed. While some neuroimaging studies have attempted to study object recognition in naturalistic ways, it remains a challenge to study cognitive processes in ecologically valid real-world settings. Here, we overcome this challenge by using 64-channel mobile EEG (mEEG) to record neural signals in real-world contexts, while utilising the emerging technology of head-mounted augmented reality (AR) to place virtual objects in real environments. We performed two studies which demonstrate that mEEG and AR can be used to investigate cognitive and neural processes in the real world. First, we validated the use of mEEG and AR by investigating a robust neural signature, the face inversion effect. In the second study, mEEG and AR were employed to ask whether visual context impacts object recognition in both indoor and outdoor environments, using a task closely resembling how recognition occurs in real-world settings. In both studies, we show robust neural signatures for face inversion and object-scene congruency. Beyond these results, we also discuss methodological challenges for this new paradigm, and highlight new methods for relating dynamic neural signals to dynamic perception and behaviour. Overall, our findings demonstrate that neural and cognitive processes can be examined in mEEG and AR setups, and that our work helps to bridge the gap between research in the laboratory and in real-life situations.

TALK 3: Methodological Considerations on Sampling Visual Experience with Mobile Eye Tracking

Michelle Greene, Bates College

Visual perception is shaped by visual experience. The statistical regularities of our visual input are reflected in patterns of brain activity. Understanding the statistical regularities in visual experience, and increasing the ecological validity of research, requires sampling real experiences. Humans explore the world with their eyes, so an ideal sampling of human visual experience requires accurate gaze estimates while participants engage in a wide range of activities and locations. In principle, mobile eye tracking can provide this information, but in practice, many technical barriers and human factors constrain the activities, locations, and participants that can be sampled accurately. In this talk, we present our progress in addressing these barriers to build the Visual Experience Database. First, we describe how the hardware design of our mobile eye tracking system balances participant comfort and data quality. Ergonomics matter, because uncomfortable equipment affects behavior and reduces the reasonable duration of recordings. Second, we describe the challenges of recording outdoors. Bright sunlight causes squinting, casts shadows, and reduces eye video contrast, all of which reduce the accuracy and precision of estimated gaze. We will show how appropriate image processing at acquisition improves eye video contrast, and how DNN-based pupil detection can improve estimated pupil position. Finally, we will show and quantify how physical shift of the equipment on the head (slippage) affects estimated gaze quality. Addressing these limitations takes us some way towards a representative sample of visual experience, but recording over long durations, during highly dynamic activities, and in extreme lighting conditions remains challenging.

TALK 4: Enhancing Real-World Memory with a Smartphone Intervention that Promotes Differentiation of Hippocampal Activity in Older Adults

Morgan Barense, University of Toronto

The act of remembering an everyday experience influences how we interpret the world, how we think about the future, and how we perceive ourselves. It also enhances long-term retention of the recalled content, increasing the likelihood that it will be recalled again. Unfortunately, the ability to recollect event-specific details tends to decline with age, resulting in an impoverished ability to mentally re-experience the past. This shift has been linked to a corresponding decline in the distinctiveness of hippocampal memory representations. Despite these well-established changes, there are few effective cognitive behavioral interventions that target real-world episodic memory. We addressed this gap by developing a smartphone-based application called HippoCamera that allows participants to record labelled videos of everyday events and subsequently replay standardized, high-fidelity autobiographical memory cues. In two experiments, we found that older adults were able to easily integrate this non-invasive intervention into their daily lives. Using HippoCamera to repeatedly reactivate memories for real-world events improved episodic recollection and it evoked more positive autobiographical sentiment at the time of retrieval. In both experiments, these benefits were observed shortly after the intervention and again after a 3-month delay. Moreover, more detailed recollection was associated with more differentiated memory signals in the hippocampus. We will discuss how systematically reactivating memories for recent real-world experiences can maintain a bridge between the present and past self in older adults.




