
APRIL 23–26 • 2022

CNS 2022 | Invited Symposium Sessions

 

# | TITLE | DATE & TIME | LOCATION
1 | TAKING THINGS TO THE NEXT LEVEL: DEEP DIVES INTO NEURAL POPULATION ACTIVITY | Sunday, April 24, 10:00 AM - 12:00 PM (PT) | Grand Ballroom A
2 | CHALLENGES FOR THE METACOGNITIVE FOUNDATIONS OF CONSCIOUSNESS | Sunday, April 24, 10:00 AM - 12:00 PM (PT) | Grand Ballroom B/C
3 | MARR'S LEVELS OF ANALYSIS 40 YEARS ON | Tuesday, April 26, 10:00 AM - 12:00 PM (PT) | Grand Ballroom A
4 | EMPATHY, SOCIAL CONTACT, AND THEIR NEUROMODULATION | Tuesday, April 26, 10:00 AM - 12:00 PM (PT) | Grand Ballroom B/C

Invited Symposium Session 1

TAKING THINGS TO THE NEXT LEVEL: DEEP DIVES INTO NEURAL POPULATION ACTIVITY

Sunday, April 24, 2022, 10:00 AM - 12:00 PM (PT), Grand Ballroom A

Chair: John Duncan, University of Cambridge

Speakers: Thomas Akam, Sabine Kastner, Yang Dan, Mriganka Sur

Though it is broadly accepted that cognitive functions arise from the activity of widely distributed neural populations, until recently our knowledge has been fragmentary.  It has been hard to establish how population codes are constructed within a cortical area, including contributions from different cell types and neurotransmitters; how brain regions communicate, including interacting cortical and subcortical regions; how representations in different parts of a network evolve over the timescale of a simple cognitive operation; and much more.  This is beginning to change with powerful new techniques for wide-scale neurophysiological recordings, sometimes simultaneous across much of the cortical surface, for optogenetic and chemogenetic intervention in specific aspects of network function, and for multivariate analysis of population-level data.  To illustrate these rapid developments, the symposium will present rodent and monkey studies addressing perception, attention, and decision-making from the perspective of neural population activity.

TALK 1: CORTICAL DYNAMICS AND THEIR INFLUENCE ON SUBCORTICAL TARGETS DURING GOAL-DIRECTED BEHAVIOR

Mriganka Sur, Massachusetts Institute of Technology

Transforming sensory inputs to decisions and action is a fundamental attribute of goal-directed behavior. The inter-areal, region-specific and projection-based mechanisms for these transformations are being revealed in unprecedented detail using mice trained in specific behavioral tasks, using tools such as large-scale and focal multi-photon calcium imaging of neuronal responses, encoding and decoding models of activity, and optogenetic manipulations of brain regions and projections. In mice performing a visual discrimination task, we show that visual cortex (V1) neurons respond robustly to stimuli in engaged and passive conditions whereas most posterior parietal cortex (PPC) neurons respond exclusively during task engagement. PPC neurons primarily encode the animal’s impending choice, as revealed by imaging the same neurons before and after re-training mice with a reversed sensorimotor contingency. With pre-execution delays added, population analyses demonstrate unique encoding of stimulus and choice information across V1, PPC and motor cortex (MC). Optogenetic inhibition during behavior reveals that stimulus identity is rapidly transformed into behavioral choice, requiring V1, PPC, and MC during the transformation period, but only MC for maintaining the choice in working memory prior to execution. Using a visually-guided two-choice behavioral paradigm with multiple cue-response mappings, we show that a key node for action selection is the anterior cingulate cortex (ACC), a subdivision of the prefrontal cortex. The ACC exerts important control over the superior colliculus via projections that reduce the innate response bias of the colliculus, demonstrating a core circuit principle by which cortical structures interact with sub-cortical targets to transform sensory inputs into choice and action.

Supported by R01MH126351 and ARO award W911NF-21-1-0328.

TALK 2: TOP-DOWN VISUAL ATTENTION: CIRCUIT DISSECTION IN MICE

Yang Dan, University of California, Berkeley

Visual spatial attention allows an animal to select inputs at specific locations to guide behavior while ignoring stimuli in other regions of the visual field. Neurons in visual cortical areas show enhanced responses to attended stimuli, and previous studies in primates have identified several key brain areas important for spatial attention. However, many questions remain regarding the origin of the attention signals and how they lead to enhanced visual cortical responses, partly due to the difficulty of dissecting neural circuits in primates. Mice provide a powerful animal model for understanding the circuit basis of behavior, given the plethora of genetic and viral tools available for circuit dissection. We have developed a visual attention task for head-fixed mice. Combining multielectrode recording, optogenetic activation and inactivation, and virus-mediated circuit tracing, we have begun to dissect the network involving the prefrontal cortex, superior colliculus, and pulvinar thalamus for controlling visual spatial attention.

TALK 3: TOWARDS A CIRCUIT LEVEL DESCRIPTION OF MODEL-BASED DECISION MAKING IN MICE

Thomas Akam, University of Oxford

Behavioural flexibility is a hallmark of human and animal cognition. A key computation supporting such flexibility is planning, i.e. using predictive models of the consequences of actions to predict their long-run utility. Human cognitive neuroscientists have made substantial recent progress interrogating brain mechanisms of planning, thanks to behavioural tasks that dissociate its contribution to behaviour on a trial-by-trial basis from ‘model-free’ reinforcement learning. Developing equivalent tasks in rodents offers the possibility of using the powerful tools available for measuring and manipulating brain activity to build a mechanistic circuit-level description of model-based decision making.

I will present some first steps towards this goal: the development of tasks for mice which aim to isolate the contribution of model-based decision making in large behavioural datasets, and evidence that predictive models of task structure shape activity in the dopamine system and anterior cingulate cortex. One question raised by this line of work is whether apparently model-based ‘expert’ behaviour by extensively trained subjects recruits the same mechanisms of behavioural flexibility as novel situations. I will outline a behavioural approach based on navigation in complex mazes, in which mice demonstrate flexible adaptation to environmental changes they have never seen before, and which we think has potential for studying model-based problem solving in situations of genuine environmental novelty.

TALK 4: NEURAL DYNAMICS OF THE PRIMATE ATTENTION NETWORK

Sabine Kastner, Princeton University

The selection of information from our cluttered sensory environments is one of the most fundamental cognitive operations performed by the primate brain. In the visual domain, the selection process is thought to be mediated by a spatial mechanism – a ‘spotlight’ that can be flexibly shifted around the visual scene. This spatial search mechanism has been associated with a large-scale network that consists of multiple nodes distributed across all major cortical lobes and also includes subcortical regions. Identifying the specific functions of each network node and their functional interactions is a major goal for the field of cognitive neuroscience. In my lecture, I will give an overview of the neural basis of this fundamental cognitive function and discuss recently discovered rhythmic properties that set up alternating attention states.

Invited Symposium Session 2

CHALLENGES FOR THE METACOGNITIVE FOUNDATIONS OF CONSCIOUSNESS

Sunday, April 24, 2022, 10:00 AM - 12:00 PM (PT), Grand Ballroom B/C

Organizer: Hakwan Lau, RIKEN Center for Brain Science, Japan

Chair: Ned Block, NYU

Speakers: Ned Block, Emily Ward, Michelle Craske, Theofanis Panagiotaropoulos, Hakwan Lau

In this symposium, we will discuss a variety of work that speaks to the question of whether subjective experiences critically depend on specific metacognitive functions in the prefrontal cortex. For example, in the unattended periphery, we seem to enjoy visual experiences subjectively richer than what our representational capacity can actually afford. It has been suggested that this ‘inflated’ sense of richness is driven by biases at the metacognitive level. However, some recent studies suggest that our visual metacognition is actually robust, therefore casting doubt on the ‘inflation’ account. In a rather different context, the metacognitive view has been applied to account for the dissociation between subjective experience of fear and the physiological responses to threatening stimuli. Again, the idea is that subjective experience may depend on late-stage processing that sometimes does not track first-order (physiological or behavioral) processes accurately. However, recent studies show that this dissociation may be more subtle and nuanced than the metacognitive view suggests. Finally, there have been disputes as to whether subjective experiences depend constitutively on prefrontal processing, and if they do, about whether prefrontal mechanisms support global broadcast of information, or metacognition per se. We will address these issues and compare the metacognitive view with other theories of consciousness. In doing so, we will review studies that employed different methods ranging from psychophysics, neural network modeling, neuroimaging, brain stimulation, invasive electrophysiology, to clinical studies of patients.

TALK 1: FAILURES OF AWARENESS IN A RICH VISUAL WORLD

Emily Ward, University of Wisconsin-Madison

How detailed is the content of visual awareness? Surprisingly, the answer seems to depend on how we probe it. Experiments on iconic memory suggest that visual awareness is rich and detailed, as people can report very specific details of an exceedingly brief stimulus display if the number of details they are asked to report is limited. In contrast, experiments on inattentional blindness and change blindness suggest that visual awareness is sparse and prone to impressive failures, as people show a complete lack of awareness of even highly salient events happening right in front of their eyes. Yet, despite the magnitude of failures of awareness, people seem to be generally unaware that they experience such failures and are overconfident in their ability to notice changes and new events in visual scenes. Here, I will discuss how the visual system may compute — and people may perceive — statistical properties of visual input (such as the average size or variance in color of different items) without being aware of the individual items, which would produce the experience of a rich visual world most of the time, but also leave open the possibility for failures of awareness. I will also propose that rather than being indiscriminately overconfident about perceiving the world in rich detail, people are aware of the relative likelihood of missing unexpected changes to visual scenes, such that their change detection metacognition accurately predicts their own change blindness.

TALK 2: THREAT IMMINENCE DRIVES A COHERENT FEAR RESPONSE

Michelle Craske, UCLA

While subjective fear is strongly associated with cortical processes and other elements that are separable from behavioral and physiological defensive responses to threat, the two-system framework (LeDoux & Pine, 2016) fails to account for the coordinated nature of conscious/cortical and nonconscious/subcortical features of the fear system, especially at the highest levels of threat imminence. Two key challenges to the two-system framework will be discussed, along with relevant empirical data. First, while conscious expectancies of threat appear more strongly related to cortical than subcortical regions (Peng et al., under review), concordance across subjective and physiological responses is observed over time as conditions of threat change (Zbozinek, Tanner, & Craske, under review; Constantinou et al., 2021), and is strongest at the highest levels of threat imminence (e.g., panic attacks, or breath occlusion). Notably, there is no evidence that behavioral and physiological responses correlate more strongly than subjective and physiological responses. Second, whereas LeDoux and Pine (2016) suggest a unidirectional pathway from subcortical to cortical processes, there is evidence to suggest that the conscious experience of fear (whether verbal report of fear, subjective sense of danger, or conscious appraisals of threat) has bidirectional relationships with behavioral and physiological defensive responses. Evidence for the influence of conscious experience upon lower-level defensive processes includes the capacity for fear-relevant words and recall of fear memories to induce physiological and behavioral threat responses, and for conscious appraisals or reappraisals to influence preconscious attentional biases and physiological responses (e.g., during sleep; Craske et al., 2002).

TALK 3: DECODING CONSCIOUS CONTENT REPRESENTATIONS IN THE PREFRONTAL CORTEX IN THE ABSENCE OF SUBJECTIVE REPORTS AND POST-PERCEPTUAL PROCESSING

Theofanis Panagiotaropoulos, NeuroSpin, Paris

The role of the prefrontal cortex (PFC) in visual consciousness has been disputed, since the PFC has been suggested to be relevant only for post-perceptual processes related to monitoring, decision making, and reporting of conscious contents, which follow the neural events underlying conscious perception. In a series of experiments, we controlled for the contribution of these factors to the activity of prefrontal neurons in the non-human primate brain during conscious perception. First, we combined no-report paradigms of multistable perception (binocular flash suppression and binocular rivalry) with multielectrode recordings of neuronal activity in the non-human primate prefrontal cortex. The contents of conscious perception could be decoded from PFC population activity even in the absence of subjective reports. However, post-perceptual processing may still occur even when no reports are required, in the form of introspection about the consciously perceived stimulus. During rapid serial presentation of visual stimuli, in a protocol that challenged post-perceptual processing, every image presented could be decoded in the PFC as early as 50 milliseconds after presentation onset. These results challenge the view that detecting representations of conscious contents in the PFC is the result of post-perceptual processing, thereby supporting the predictions of the Global Neuronal Workspace theory of consciousness.

TALK 4: DEFENDING THE IMPLICIT METACOGNITIVE VIEW OF SUBJECTIVE EXPERIENCE

Hakwan Lau, RIKEN Center for Brain Science, Japan

According to one version of the higher-order view of consciousness, subjective experience arises when a certain implicit (i.e. automatic) self-monitoring mechanism ‘decides’ that some qualitative sensory information reflects the state of the world right now. When studying this subpersonal, implicit process, we often make use of explicit metacognitive tasks, in which we encourage subjects to engage in effortful introspection or to make overt ratings of confidence. This may be why these explicit measures sometimes fail to track subjective experience: they do not perfectly track the relevant implicit processes they are intended to capture. The relationship between this implicit metacognitive process and subjective experience may in fact be complicated. But the motivation for studying the mechanism further is not that it alone can account for consciousness. Rather, it is evidently an important part of a larger set of interacting mechanisms. Although its function is to monitor sensory information, this monitoring happens without cognitive effort and is an integral part of conscious perception. Therefore, it is not a post-perceptual process. To the extent that the prefrontal cortex is constitutively involved in subjective perceptual experiences, the role it plays is more likely metacognitive than one of broadcasting information. Global broadcast does often happen when we are conscious of the relevant information, but it is a potential downstream consequence rather than the mechanism for subjective experience itself. I will defend this view based on evidence from both lesion and neurophysiological studies.

Invited Symposium Session 3

MARR'S LEVELS OF ANALYSIS 40 YEARS ON

Tuesday, April 26, 2022, 10:00 AM - 12:00 PM (PT), Grand Ballroom A

Chair: Tomaso Poggio, Massachusetts Institute of Technology

Speakers: Christof Koch, Anya Hurlbert, Bradley Love, Tomaso Poggio

We still do not understand the brain. The science of intelligence -- how the brain computes intelligence -- and the engineering of intelligence -- how to make intelligent machines -- are among the greatest problems of our time, possibly the most fundamental of all. There are of course different ways to “understand” an information processing system such as a computer or perhaps the human brain. In addition to the original Marr-Poggio levels of hardware, algorithms, and computations, one may consider alternative viewpoints, such as those based on learning and evolution. Each of these ways of understanding can be independently relevant: for instance, understanding the problem of learning, and thus being able to develop and use machine learning tools, may lead to powerful systems without the need to understand the algorithms that are actually learned. Taxonomies are somewhat arbitrary and sometimes useful. An important question is whether the taxonomy of the levels of understanding still provides -- forty years later -- a relevant research framework for solving the problem of artificial intelligence and, especially, the problem of human intelligence. The speakers in the symposium will address these and related issues in the quest to create a new science and engineering of intelligence.

TALK 1: RECEPTIVE FIELD MODELS DO NOT EXPLAIN RESPONSES OF MOUSE VISUAL CORTICAL NEURONS

Christof Koch, Chief Scientist, MindScope Program, Allen Institute

The Allen Institute has recorded the responses of more than 100,000 cells in all layers of six visual cortical regions in the mouse, using both Neuropixels electrodes and two-photon calcium imaging with a battery of standard visual stimuli. This is the largest, relatively unbiased survey of its kind in any mammalian sensory system. Quantitative regression against linear and quadratic receptive field models demonstrates that the responses of neurons are highly variable and that only a small minority of neurons can be predicted by such models (across the population, ~2% of neurons have r > 0.5). We conclude, in line with Olshausen and Field (2004), that we lack quantitative functional models for one of the best characterized cortical regions anywhere – primary visual cortex in the mouse – a striking example of the independence of the level of physical substrate from that of the algorithmic instantiation of an abstract computation, following Marr and Poggio's tripartite levels of analysis.

TALK 2: MARR'S VISION OF COLOR, 40 YEARS ON

Anya Hurlbert, Centre for Transformative Neuroscience, Newcastle University, UK

“We can tell just by looking whether a fruit is ripe, … whether a leaf is green and supple, whether an insect is likely to be poisonous...” So said David Marr in Vision (1982), and he attributed this acumen to the visual brain solving a computational problem: how to extract invariant surface reflectance from the varying light signal at the eye. Marr proposed neural hardware to do the task: double-opponent receptive fields that would filter out the illumination. These have been found in visual cortex, but we now know that color constancy neither begins nor ends there. Not only are the retinal mechanisms that feed cortex more intricate than Marr suspected, especially with the discovery of intrinsically photosensitive retinal ganglion cells and novel S-cone pathways, but also the perceptual phenomenon seems to grow more individually variable and dependent on cognitive processes with each new study. Color image processing algorithms – ubiquitous in today’s smartphones – are still based on Marr-like ideas but have a hard time matching human perception of both illumination and object color. They need to learn, and so do vision scientists, how it is that human vision grasps so much from color.

TALK 3: MULTILEVEL THEORIES FOR COMPLETENESS

Brad Love, University College London, The Alan Turing Institute

A complete neuroscience requires multilevel theories that address phenomena and measures ranging from higher-level behaviours to activities within a cell. David Marr provided one useful way to organise scientific efforts in terms of three complementary levels of analysis. One unique aspect of Marr's proposal was that the top level, the computational level, is a description of the problem to be solved devoid of any mechanism. Although often neglected, clearly stating the intended level is core to a scientific contribution because it determines how work should be evaluated. I'll discuss past cases where fruitless debates arose because it was unclear whether the contribution was at the computational (what) or algorithmic (how) level. These same ambiguities arise in Bayesian brain proposals, but at the algorithmic and implementational levels. Ideally, theories would span multiple levels for a more complete understanding from multiple perspectives.

One way forward is to adopt a levels of mechanism approach (cf. Craver) in which the components of a higher-level model can be unpacked into their own mechanisms. In this approach, a relatively simple and understandable model can reside at the top-level, capturing behaviour, and components of this mechanism can be decomposed to capture the intricacies of the brain. I'll provide an example of this "zooming out and in" approach by considering how cells in the hippocampus can collectively implement symbols and other high-level cognitive constructs. Multilevel explanations yield a richer understanding of the system than could be provided by any level alone.

TALK 4: FROM MARR'S VISION BOOK TO HUMAN INTELLIGENCE

Tomaso Poggio, McGovern Institute for Brain Research, Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA

In 1976, when I was working with David for a three-month period at MIT, we fully realized that a satisfactory understanding of human intelligence was far away because the problem was so deep and so difficult. We hoped, however, that emphasizing computational ideas could help decrypt puzzles in neuroscience, in particular the neuroscience of the visual system. The situation has changed drastically over the last decade, with machine intelligence making significant progress. We have Alexa, AlphaZero, AlphaFold, and Mobileye. We have transformers and great progress in NLP. The engineering community feels that machine learning and its recent network-based architectures are a powerful paradigm, potentially leading to the creation of intelligent machines. So, what about human intelligence, David's real interest and mine? I believe that there are many forms of “intelligence”. Are computers that beat humans at chess and Go intelligent? Are residual networks more intelligent than us because they may achieve better classification of certain image databases? Intelligence is a pretty vague word. There are an infinite number of different forms of intelligence. From this point of view, the computational problem of intelligence is ill-defined and in fact does not even have a unique solution. There are many solutions, one of which is what we call human intelligence, as defined by the Turing test. Human intelligence is of course well defined: it is a natural phenomenon produced by biological brains. Science can study both the brains and the behavior they produce, with experimental and theoretical techniques. This also means that the various neural network architectures cannot by themselves be the solution to the problem of how visual cortex works, though they may help. In fact, Marr wrote, perhaps exaggerating somewhat: “... a neural net theory, unless it is closely tied to the known anatomy and physiology of some part of the brain and makes some unexpected and testable predictions, is of no value.” (Marr 1975) The reason why this argument seems to contradict the levels of understanding framework described in Marr’s Vision is because it does. The simple observation David and I wrote about was that a complex system -- like a computer and like the brain -- can be understood at several different levels. Ironically, the section in the Vision book about levels of understanding was based on a paper (on the visual system of the fly) by Werner Reichardt and myself, where we stressed that one ought to study biological brains at different levels of organization, from the behavior of a whole animal to the signal flow, i.e. the algorithms, to circuits and single cells. In particular, we claimed that it is necessary to study nervous systems at all levels simultaneously. From this perspective, the importance of coupling experimental and theoretical work in the neurosciences follows directly: without close interaction with experiments, theory is very likely to be sterile.

Invited Symposium Session 4

EMPATHY, SOCIAL CONTACT, AND THEIR NEUROMODULATION

Tuesday, April 26, 2022, 10:00 AM - 12:00 PM (PT), Grand Ballroom B/C

Chair: Stephanie Preston, University of Michigan

Speakers: James Burkett, Benjamin A. Tabak, Tristen Inagaki, India Morrison

Neural processes and neurohormones associated with offspring care are thought to be conserved over evolutionary time and across extant species and to be extended at times to support human prosocial behavior. For example, homologous versions of the neuropeptides oxytocin (OT) and vasopressin (VP) support offspring care in fish, birds, and mammals, and some evidence links these processes to human altruism. The human evidence, however, is actually mixed. A series of four talks explore this issue and discuss when such neurohormonal processes do and do not impact human prosociality. After a brief overview by Stephanie Preston, the first talk by James Burkett describes the proximate mechanisms of empathy-based consolation and empathic distress in pair-bonding voles--a powerful animal model for mammalian social bonding, memory, and prosocial support. The second talk by Benjamin Tabak describes null effects of intranasally administered OT or VP on self-reported empathic concern and mentalizing processes, in behavior and the brain. The third talk by Tristen Inagaki describes a link between functional connectivity within the DMPFC “default mode” network and individual differences in supporting and deciding to give to close others. The final talk by India Morrison describes the modulation of neural processes by OT based on the context of social touch, the identity of the social partner, and the co-occurring presence of psychotropic drugs. It is important to link theories of human behavior to evolutionary and proximate mechanisms, but with a careful eye on how laboratory conditions mimic the ecological conditions under which the behavior emerged and occurs in the real world.

OVERVIEW: HOMOLOGOUS NEURAL AND HORMONAL SYSTEMS FOR OFFSPRING AND HUMAN ALTRUISM

Stephanie Preston, University of Michigan

A brief overview before the four central talks will provide the audience with background information on the homology of neural systems for offspring care across species and how these systems are proposed to play a role in human altruism, assuming the context of giving is similar to that of the original care-providing behavior.

TALK 1: NEURAL MECHANISMS OF EMPATHY IN THE PRAIRIE VOLE AND MODULATION BY ENVIRONMENTAL EXPOSURES

James Burkett, PhD, University of Toledo

Empathy is a set of capacities which, in humans, are fundamental to normal social interaction and social relationships. A growing body of research in animals is describing the extent to which these capacities derive from conserved evolutionary mechanisms, both biological and psychological. We previously demonstrated empathy-based consoling behavior in the prairie vole (Microtus ochrogaster), which depends on oxytocin signaling in the anterior cingulate cortex (ACC). Here we show evidence of a population of neurons in the ACC that are active in response to both personal distress and vicarious distress, and which have distinct molecular and electrophysiological properties. In parallel, we are studying how developmental environmental exposures related to human autism risk affect brain and behavior in prairie voles, including empathy-related behavior. Exposing prairie vole dams to low levels of pyrethroid pesticides (5 times lower than the EPA-set benchmark dose) during pregnancy and lactation leads to a persistent neurobehavioral phenotype in offspring, including increased repetitive behaviors, decreased vocalizations, decreased consoling behavior, and learning deficits. These results point to an evolutionarily conserved set of neural mechanisms involved in empathy that can be disrupted by common environmental exposures.

TALK 2: EFFECTS OF OXYTOCIN OR VASOPRESSIN ON EMPATHY: MODERATED OR NULL?

Benjamin A. Tabak, PhD, Southern Methodist University

The neuropeptides oxytocin and vasopressin have been viewed as potential therapeutics for clinical disorders characterized by social impairments. Given the importance of empathy in social functioning, several studies have attempted to identify whether modulating the oxytocin or vasopressin system can increase empathy. However, to date, results have been mixed. One potential explanation for the mixed findings in this area is the variety of methods that researchers use to assess empathic processes. In our first study, we examined the effects of intranasal oxytocin or vasopressin compared to placebo on self-reported empathic concern following an empathy induction. Results showed no main effects of either drug. In our second study, we examined intranasal oxytocin or vasopressin compared to placebo on behavioral and neural responses when participants were engaged in a mentalizing task. Results showed no effects of either drug on reaction time, accuracy, or neural activation within brain networks associated with mentalizing. Thus, findings from our studies showed a consistent pattern of null results. However, work from other research groups has found an effect of oxytocin on affective empathic responses when viewing emotional stimuli. In total, findings suggest that oxytocin (and possibly vasopressin) may affect only specific aspects of empathy, or more basic socioemotional processes that are related to empathy and indexed as empathy in certain assessments.

TALK 3: DEFAULTING TO GIVE

Tristen K. Inagaki, PhD, Department of Psychology, San Diego State University

SDSU/UCSD Joint Doctoral Program in Clinical Psychology

Support-giving toward close others, ranging from providing emotional and physical care to offering financial assistance, is a pervasive social behavior. Indeed, caring for a family member, listening to a spouse’s frustrations with a co-worker, or helping a friend through a tough time occurs on a daily basis and remains prevalent throughout the lifespan. Such behavior serves critical functions: ensuring infant survival, bolstering social connections, and potentially leading to better health for the support giver. The ubiquity and benefits of support-giving raise the possibility that there are brain mechanisms in place that facilitate our natural inclinations to give. Based on the proposition that a primary function of the brain at rest is social thinking and behavior, processes that facilitate support-giving may occur at rest, when the brain is in its tonic, default state. Specifically, neural activity in the DMPFC subsystem of the default mode network may play a key role in supportive behavior. Consistent with hypotheses, greater functional connectivity among regions of the DMPFC subsystem during extended rest positively relates to individual differences in support-giving, both at the time of the scan and one month later. In a follow-up study, DMPFC subsystem activity during a brief rest period preceding a decision phase predicted faster decisions to give to a close other, but not to receive for the self or to make an arbitrary decision. Together, results suggest that “defaulting back” to this subsystem as soon as we rest may keep our brain ready to respond supportively to the people we care about.

TALK 4: OXYTOCIN NEUROMODULATION DURING SOCIAL TOUCH INTERACTIONS

India Morrison, PhD, Linköping University

Touch is a major channel for social contact in most primate species. Similarly, the hormone oxytocin (OT) is a key mediator of social attachment in humans, as suggested by experimental manipulations involving the exogenous administration of intranasal OT. However, very little is known about whether social interactions involving touch modulate endogenous OT—and if so, what insight can be gained on any relationship between social OT neuromodulation and specific brain pathways. We addressed these questions in two separate studies which manipulated the social-contextual or physiological circumstances surrounding touch, combining functional magnetic resonance imaging (fMRI) with within-subject sampling of plasma OT levels. The first study investigated the influence of factors such as person familiarity and history of past interaction on OT-related neuromodulation. In it, participants engaged in two successive touch encounters, with their partner and with an unfamiliar stranger. In the second study, participants experienced caressing touch after ingestion of 3,4-methylenedioxymethamphetamine (MDMA or “ecstasy”) or placebo. The findings from these studies indicated that social modulation of endogenous OT critically depends on contextual (person familiarity and recent interaction history) and physiological/experiential (drug) factors. Both studies also revealed a selective role for parietotemporal pathways in OT neuromodulation, including superior temporal cortex and temporal pole. Taken together, these findings suggest that OT neuromodulation during human social interactions is context-contingent and adaptive, operating more like a dimmer switch than an on-off button.
