MAY 2–5, 2020
CNS 2020 Virtual Meeting | Invited-Symposium Sessions
1. MAKING SENSE OUT OF BIG DATA IN COGNITIVE NEUROSCIENCE
2. THE ROLE OF CAUSAL INFERENCE FOR PERCEPTUAL DECISIONS AND ADAPTIVE BEHAVIOR
3. CONTEMPORARY APPROACHES TO EMOTION REPRESENTATIONS
4. NOVEL APPROACHES TO NON-INVASIVE BRAIN STIMULATION
INVITED SYMPOSIUM 1: MAKING SENSE OUT OF BIG DATA IN COGNITIVE NEUROSCIENCE
Chair: Randy L. Buckner, Harvard University and Massachusetts General Hospital
Speakers: Carsen Stringer, Konrad Kording, Randy L. Buckner
This symposium will illustrate both the promise and the potential pitfalls of the increasing availability of “big data,” at many scales, for understanding brain and behavioral function across species. The first two talks will illustrate a range of use cases in which very high-dimensional data are being used to generate novel insights into the function of neural systems and how these systems generate behavior. The final talk will illustrate some of the opportunities of big data in the era of physical distancing, and how open data science and online data resources offer a unique means to continue and advance science.
TALK 1: HIGH-DIMENSIONAL STRUCTURE OF SIGNAL AND NOISE IN 20,000-NEURON RECORDINGS
Carsen Stringer, Howard Hughes Medical Institute, Janelia Research Campus
Even in the absence of sensory inputs, the brain produces structured activity, which can be as large as or larger than sensory-driven activity. Using large-scale neural recordings of thousands of neurons in mouse visual cortex, we found that this seconds-long neural variability was driven by brainwide behavioral signals. This behaviorally-driven neural activity continued during visual stimulus presentations, creating variable neural responses to identical visual stimuli. Although large, the ongoing noise did not impair the encoding of stimuli at the population level. We found that oriented stimuli with an orientation difference of less than 1° could be accurately discriminated at >90% correct on a single trial basis. In addition to being accurate, the stimulus-evoked population activity was high-dimensional. The correlation structure across neurons obeyed a power law: the n-th dimension of the correlation matrix contained variance in proportion to 1/n. We developed a theory to explain this structure based on the assumption that neural responses to stimuli are smooth. A smooth neural code may be robust to small changes in visual stimuli, such as changes in viewpoint or lighting. Using large-scale neural recordings and new analytical techniques, we were able to characterize some of the fundamental features of visual cortical circuits in mice.
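The power-law structure described above can be illustrated with a short simulation. The sketch below uses synthetic responses standing in for the actual recordings, with all sizes and parameter values hypothetical: it builds a population whose covariance eigenspectrum decays as 1/n and then recovers the exponent from the sampled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a population recording: n_neurons x n_stimuli
# responses whose covariance eigenspectrum decays as ~1/n.
n_neurons, n_stim = 500, 5000
target_eigs = 1.0 / np.arange(1, n_neurons + 1)        # 1/n power law
basis, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
latent = basis * np.sqrt(target_eigs)                  # mixing matrix
responses = latent @ rng.standard_normal((n_neurons, n_stim))

# Eigenspectrum of the sampled neural covariance matrix
cov = np.cov(responses)
eigs = np.sort(np.linalg.eigvalsh(cov))[::-1]

# Fit the exponent alpha in eigs[n] ~ n^(-alpha) on log-log axes
n = np.arange(1, 101)                                  # fit the top 100 dims
alpha = -np.polyfit(np.log(n), np.log(eigs[:100]), 1)[0]
print(f"estimated power-law exponent: {alpha:.2f}")
# close to 1 for a 1/n spectrum (slightly biased by sampling noise)
```

The same log-log fit applied to real data would sit downstream of a cross-validated variance estimate, since raw sample eigenvalues inflate the tail of the spectrum.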
TALK 2: CAUSAL INFERENCE WITH BIG DATA SETS
Konrad Kording, University of Pennsylvania
Our datasets are big. But we usually want to ask causal questions. We want to ask how the brain works. Or figure out if one way of treating patients is superior to other ways of treating them. However, most of the approaches that are established in the field focus on correlational data analyses. In my talk I will review causal inference techniques that are useful in the field and that can be adapted to ask a broad range of questions.
TALK 3: LESSONS AND OPPORTUNITIES OF BIG DATA SUPERSTRUCTING IN A VIRTUAL WORLD
Randy L. Buckner, Harvard University and Massachusetts General Hospital
The accessibility and opportunities of big data have a special importance in this moment of physical distancing. In the present talk I will tell the story of the Brain Genomics Superstruct Project (GSP), a big data effort that emerged in the wake of the 2008 fiscal crisis. “Superstructing” is the act of building upon an existing structure or foundation. What began as a well-resourced local effort to build infrastructure to discover links between genetics, the brain, and behavior morphed by necessity into a lean and distributed big data community effort. Building upon existing research programs, a rapid neuroimaging acquisition protocol, online testing, and neuroinformatics tools were deployed to aggregate uniform data from thousands of subjects across twenty laboratories. The result was a large open data resource that has been used to generate models of cortical network organization, a complete functional map of the human cerebellum, and links between individual differences and behavioral traits. When combined with other efforts and further data integration, genetic associations with brain organization have begun to emerge. The GSP is now just one of numerous data resources openly available to the community. Open data resources provide opportunities to test existing hypotheses, make novel discoveries, and continue educational laboratory activities in a virtual world. Most critically in this moment of dispersion, open data efforts can serve to inspire the next era of discovery and keep the students who will make those discoveries engaged in the scientific process.
INVITED SYMPOSIUM 2: THE ROLE OF CAUSAL INFERENCE FOR PERCEPTUAL DECISIONS AND ADAPTIVE BEHAVIOR
Chair: Christoph Kayser, Bielefeld University
Speakers: Rachel Denison, Sam Gershman, Uta Noppeney, Christoph Kayser
Adaptive behavior in complex environments requires an understanding of the causal relations between the sensory features arising from the multiple objects surrounding us. This symposium investigates the computational and neural mechanisms underlying sensory causal inference from different angles, focusing on the flexible integration of multisensory evidence, the constraints imposed by the available cognitive resources, and the implications for adaptive behavior such as learning.
TALK 1: INFERRING INTERNAL CAUSES OF UNCERTAINTY TO IMPROVE DECISION MAKING
Rachel Denison, NYU
Uncertainty arises not only from the properties of sensory input but also from internal causes, such as varying levels of attention. However, it was unknown whether humans appropriately infer and adjust for such cognitive sources of uncertainty during perceptual decision making. We found that, when uncertainty was relevant for performance, human categorization and confidence decisions took into account uncertainty related to attention. Category and confidence decision boundaries shifted as a function of attention in an approximately Bayesian fashion. The observer’s attentional state on each trial therefore contributed probabilistically to the decision computation. This ability to infer and use attention-dependent uncertainty is adaptive: it should improve perceptual decisions in natural vision, in which attention is unevenly distributed across a scene.
TALK 2: CAUSAL INFERENCE IN REINFORCEMENT LEARNING
Sam Gershman, Harvard University
Feedback can have different effects on learning depending on one's beliefs about the causal structure of the environment. In particular, belief updating in response to good and bad outcomes can be asymmetric, and this asymmetry is predicted by a Bayesian reinforcement learning model that takes into account hidden causes mediating between choice and feedback. Consistent with this model, neural learning signals in the striatum appear to be "gated" by causal beliefs. Finally, I will discuss evidence that the ability to use causal knowledge to guide learning emerges over the course of development and can be dissociated from explicit causal beliefs.
TALK 3: CAUSAL INFERENCE IN MULTISENSORY PERCEPTION
Uta Noppeney, Donders Institute for Brain, Cognition and Behaviour, Radboud University
Our senses are constantly bombarded with a myriad of diverse signals. Transforming this sensory cacophony into a coherent percept of our environment relies on solving the causal inference problem: deciding whether signals come from a common cause and should be integrated, or should instead be treated independently. Combining psychophysics, fMRI/EEG, and computational modelling, we find that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference, dynamically encoding multiple perceptual estimates at distinct levels of the cortical hierarchy. Only at the top of the hierarchy, in anterior parietal cortices, were signals integrated, weighted by their bottom-up sensory reliabilities and top-down task relevance, into spatial priority maps that take into account the world's causal structure.
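The integration-versus-segregation decision described above is commonly formalized with the standard Bayesian Causal Inference model of Koerding et al. (2007). A minimal sketch, with illustrative (not fitted) noise and prior parameters, computes the posterior probability that two spatial cues share a common cause:

```python
import numpy as np

def common_cause_posterior(x_a, x_v, sigma_a=2.0, sigma_v=1.0,
                           sigma_p=10.0, p_common=0.5):
    """Posterior probability that auditory cue x_a and visual cue x_v
    share a common cause. Cues are noisy readouts of source position
    (Gaussian noise sigma_a, sigma_v; Gaussian spatial prior sigma_p)."""
    # Likelihood of the cue pair under a single shared source position
    var_c = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
             + sigma_v**2 * sigma_p**2)
    like_c = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                            + x_a**2 * sigma_v**2
                            + x_v**2 * sigma_a**2) / var_c) \
             / (2 * np.pi * np.sqrt(var_c))
    # Likelihood under two independent sources
    var_a = sigma_a**2 + sigma_p**2
    var_v = sigma_v**2 + sigma_p**2
    like_i = np.exp(-0.5 * (x_a**2 / var_a + x_v**2 / var_v)) \
             / (2 * np.pi * np.sqrt(var_a * var_v))
    # Bayes' rule over the two causal structures
    return like_c * p_common / (like_c * p_common + like_i * (1 - p_common))

# Nearby cues favour integration; discrepant cues favour segregation
print(common_cause_posterior(1.0, 0.5))    # high probability of common cause
print(common_cause_posterior(10.0, -8.0))  # near-zero probability
```

In the full model this posterior then weights the fused and segregated position estimates, which is the arbitration the abstract locates at distinct levels of the cortical hierarchy.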
TALK 4: THE PERSISTENT INFLUENCE OF CAUSAL INFERENCE IN MULTISENSORY PERCEPTION
Christoph Kayser, Bielefeld University
When combining multisensory information, we need to flexibly select and combine cues that arise from a common origin whilst avoiding distraction from irrelevant inputs. We asked how the brain implements this inference process by studying the combination of audio-visual information in ventriloquist-like tasks, and how such sensory integration shapes the perception of subsequent unisensory stimuli. Our results unveil a systematic spatio-temporal cascade of the relevant computations, starting with early segregated unisensory representations, continuing with sensory fusion in parietal-temporal regions, and culminating with causal inference in the frontal lobe. These findings suggest that inferior frontal regions guide flexible integrative behaviour based on causal inference within a trial, but also point to parietal regions as central for combining sensory evidence over time, such as from trial to trial.
INVITED SYMPOSIUM 3: CONTEMPORARY APPROACHES TO EMOTION REPRESENTATIONS
Chair: Kevin S. LaBar, Duke University
Speakers: Kevin S. LaBar, Tor D. Wager, Dacher Keltner, and Rachael E. Jack
Emotions are complex constructs that exert powerful influences over cognition and comportment. Despite progress in understanding select facets of emotional processing, it remains unclear how specific emotions like anger, sadness, or contentment are differentiated in their subjective experience, neurophysiological representation, and social communication. This symposium brings together experts who are addressing this key, unresolved issue in affective science using contemporary, data-driven computational methods that are overturning old debates about the structure of emotions. Kevin LaBar will open the symposium to discuss how machine learning and stochastic modeling tools facilitate the decoding of emotion categories from fMRI data, including spontaneous emotions and their temporal dynamics. Tor Wager will present findings from a convolutional neural network approach to show how schemas of multiple emotion categories arise from distributed codes in the visual hierarchy. Dacher Keltner will combine computational and social functional approaches to map the complex relationships among a variety of emotions elicited by naturalistic stimuli. Finally, Rachael Jack will close the symposium by demonstrating how data-driven modeling provides novel insights into cultural similarity and variation in dynamic facial expressions of emotion, with implications for improving affective communication in social robots.
TALK 1: DECODING SPONTANEOUS EMOTIONS AND MODELING THEIR TEMPORAL DYNAMICS FROM RESTING-STATE FMRI
Kevin S. LaBar, Duke University
Affective states dynamically unfold in the background of ongoing mental activity and are triggered by spontaneous thoughts during mind wandering. The emotion specificity and duration of these states are hypothesized to promote susceptibility to mental health disorders. However, it is challenging to identify emotion-specific signals embedded in resting-state neural data. Furthermore, it is unknown whether the human brain reliably transitions among multiple emotional states at rest and how psychopathology alters these intrinsic affect dynamics. We combined machine learning and stochastic modeling to investigate the chronometry of spontaneous brain activity indicative of six emotions and a neutral state. We derived fMRI information maps of these emotions from our previous decoding study of emotion inductions, and used them to pattern classify the resting-state time series. We showed that the frequency distribution of resting-state classifications across emotion categories predicted individual differences in on-line subjective feelings and off-line mood ratings and personality traits. We investigated the temporal dynamics of spontaneous transitions across these emotions using stochastic modeling and validated results across two population cohorts. Our findings indicate that intrinsic emotional brain dynamics are effectively characterized as a discrete time Markov process, with affective states organized around a neutral hub. The centrality of this network hub is disrupted in individuals with psychopathology, whose brain state transitions exhibit greater inertia and less frequent resetting from emotional to neutral states. These results indicate how the brain signals spontaneous emotions and how alterations in their temporal dynamics contribute to compromised mental health.
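The discrete-time Markov characterization can be made concrete with a toy transition matrix. In the sketch below the states and transition probabilities are invented for illustration (they are not the fitted values from the study); it shows a neutral hub carrying the largest stationary mass, and how raising emotional self-transitions ("greater inertia") necessarily lowers the rate of resetting to neutral:

```python
import numpy as np

# Toy discrete-time Markov model of resting-state emotion transitions.
states = ["neutral", "happy", "sad", "anxious"]
P = np.array([
    [0.70, 0.10, 0.10, 0.10],   # neutral mostly persists, radiates outward
    [0.40, 0.50, 0.05, 0.05],   # emotional states reset via the neutral hub
    [0.40, 0.05, 0.50, 0.05],
    [0.40, 0.05, 0.05, 0.50],
])
assert np.allclose(P.sum(axis=1), 1.0)   # rows are probability distributions

# Stationary distribution: left eigenvector of P with eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print(dict(zip(states, pi.round(3))))    # neutral carries the most mass

# "Greater inertia": raising emotion self-transitions by 0.2 forces the
# probability of resetting to neutral down by the same amount.
P_patient = P.copy()
P_patient[1:, 1:] += np.diag([0.2, 0.2, 0.2])
P_patient[1:, 0] -= 0.2
print("reset prob, control:", P[1, 0], "patient:", P_patient[1, 0])
```

Fitting such a chain to classified resting-state time series amounts to counting observed state-to-state transitions and normalizing each row; hub centrality can then be compared across cohorts.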
TALK 2: EMOTION SCHEMAS ARE REPRESENTED IN THE HUMAN VISUAL SYSTEM: EVIDENCE FROM FMRI AND CONVOLUTIONAL NEURAL NETWORKS
Tor D. Wager, Dartmouth College
Emotions are thought to be canonical responses to situations ancestrally linked to survival or the well-being of an organism. Although sensory elements do not fully determine the nature of emotional responses, they should be sufficient to convey the schema or situation that an organism must respond to. However, few computationally explicit models describe how combinations of stimulus features come to evoke different types of emotional responses, and, further, it is not clear that activity in sensory (e.g., visual) cortex contains distinct codes for multiple classes of emotional responding in a rich way. Here we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using over 25,000 images and movies and show that image content is sufficient to predict the category, valence, and arousal of human emotion ratings. In two fMRI studies, we demonstrate that patterns of human visual cortex activity encode emotion category-related model output and can decode multiple categories of emotional experience. Comparing decoding performance across multiple brain regions, we find that emotion schemas are best characterized by distributed codes in the occipital lobe and that redundant information about schemas is contained in other brain systems. These results indicate that rich, category-specific emotion representations are embedded within the human visual system. Further, they suggest that psychological and computational accounts of emotion should explain the sensory qualities that are naturally associated with emotional outcomes, as well as those that are reliably learned through experience and influenced by culture.
TALK 3: MAPPING THE PASSIONS: INSIGHTS FROM COMPUTATIONAL AND SOCIAL FUNCTIONAL APPROACHES
Dacher Keltner, University of California, Berkeley
In this talk I detail convergent insights from computational and social functional approaches to emotion. In doing so, I will introduce a new methodological approach predicated on the study of a vast array of naturalistic stimuli, the sampling of a wide range of emotional states, observer ratings from discrete and dimensional perspectives, and open-ended statistical and data visualization techniques that map complex emotion spaces. Empirical work guided by these methods converges on four ideas. First, more open-ended techniques in studies of facial expression, vocal bursts, prosody, lexical terms, music, and spontaneous experience reveal upwards of 20 distinct states. I will illustrate this with recent studies of awe, compassion, and embarrassment. Second, emotion categories are heterogeneous, but in systematic ways. Each emotion category (awe, sympathy, fear, amusement, embarrassment) includes variations in experience and expression. Third, the boundaries between emotion categories are not discrete. Instead, emotion categories, such as love and desire or awe and interest, are bridged by gradients of meaning, which likely account for transitions between emotional states. Finally, discrete emotion categories organize the representation of emotion more so than appraisals of valence and arousal. I will conclude by considering what the methods and findings summarized in the talk mean for the study of emotion-related physiology as well as individual and cultural variations.
TALK 4: MODELLING DYNAMIC FACIAL EXPRESSIONS OF EMOTION ACROSS CULTURES USING DATA-DRIVEN METHODS
Rachael E. Jack, University of Glasgow
Understanding how facial movements communicate emotions has been a source of intense investigation for over a century. However, addressing this question is empirically challenging due to the sheer number and complexity of facial expressions the human face can make. Traditional approaches, relying primarily on theory-driven methods and hypothesis testing, have advanced knowledge but have also restricted understanding, including through Western-centric biases. Now, new technologies and data-driven methods developed in interdisciplinary teams alleviate these constraints, giving real traction to this complex task and delivering novel insights. Here, we showcase one such approach that combines social and cultural psychology, vision science, mathematical psychology, and 3D dynamic computer graphics to objectively model dynamic facial expressions of emotions in different cultures. Using this approach, we have provided precise characterizations of which face movements are cross-cultural and which are culture-specific, and of the emotion information they convey, including broad dimensional information (e.g., positive, high arousal) and specific emotions (e.g., delighted). Specifically, we show that four, not six, core expressive patterns are cross-cultural, and that facial expressions transmit signals in an evolving, broad-to-specific structure over time. Our work challenges longstanding dominant views of universality and forms the basis of a new theoretical framework that has the potential to unite different views (i.e., nature vs. nurture; dimensional vs. categorical). Finally, we show direct transference of this knowledge of facial expressions to social robots by providing a generative syntactical model for social face signaling, thus providing new opportunities for Psychology to play a central role in designing digital agents of the future.
INVITED SYMPOSIUM 4: NOVEL APPROACHES TO NON-INVASIVE BRAIN STIMULATION
Chair: Jérôme Sallet, INSERM, Lyon, France and University of Oxford, UK
Speakers: Nir Grossman, Jérôme Sallet, Chris Butler, Elisa Konofagou
To understand brain circuits it is necessary both to record and to manipulate their activity. The gold-standard approach in cognitive neuroscience for attributing a cognitive function to a brain region relies on causal methods. Those methods, being often invasive, are principally used in animal models. Alternative, so-called non-invasive approaches, despite allowing questions to be addressed directly in the human brain, are often limited by their spatial resolution or by the brain areas that can be targeted. This symposium will bring together researchers developing new electrical or ultrasound stimulation tools. These innovations could enable the targeting of deep brain structures, improved spatial resolution, or new approaches for neuropharmacological studies. We will aim to show the translational potential of these novel approaches, from animal research to human applications.
TALK 1: NONINVASIVE DEEP BRAIN STIMULATION VIA TEMPORALLY INTERFERING ELECTRIC FIELDS
Nir Grossman, Imperial College London, UK
Electrical brain stimulation is a key technique in research and clinical neuroscience studies, and also is in increasingly widespread use from a therapeutic standpoint. However, to date all methods of electrical stimulation of the brain either require surgery to implant an electrode at a defined site, or involve the application of non-focal electric fields to large fractions of the brain. We report a noninvasive strategy for electrically stimulating neurons at depth. By delivering to the brain multiple electric fields at frequencies too high to recruit neural firing, but which differ by a frequency within the dynamic range of neural firing, we can electrically stimulate neurons throughout a region where interference between the multiple fields results in a prominent electric field envelope modulated at the difference frequency. We validated this temporal interference (TI) concept via modeling and physics experiments, and verified that neurons in the living mouse brain could follow the electric field envelope. We demonstrate the utility of TI stimulation by stimulating neurons in the hippocampus of living mice without recruiting neurons of the overlying cortex. Finally, we show that by altering the currents delivered to a set of immobile electrodes, we can steerably evoke different motor patterns in living mice.
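The temporal-interference principle lends itself to a quick numerical check. In the sketch below (carrier and difference frequencies are chosen for illustration, not the experimental values), two kilohertz-range fields sum to a waveform whose amplitude envelope oscillates at the difference frequency:

```python
import numpy as np

# Two equal-amplitude fields at 2.00 kHz and 2.01 kHz: each is far above
# the frequencies neurons can follow, but their sum is amplitude-modulated
# at the 10 Hz difference frequency.
fs = 100_000                      # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # 1 s of signal
f1, f2 = 2000.0, 2010.0
field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# For equal amplitudes, sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2),
# so the envelope is |2 cos(pi * (f2 - f1) * t)|.
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))

# The envelope's dominant frequency is the 10 Hz beat, not ~2 kHz:
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
print("envelope peak at", freqs[np.argmax(spectrum)], "Hz")  # 10.0 Hz
```

Neurons that low-pass filter the kilohertz carriers but follow the envelope would thus be driven at the difference frequency, and only where the two fields overlap strongly.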
TALK 2: PROBING DECISION-MAKING CIRCUITS IN PRIMATES USING TRANSCRANIAL ULTRASOUND NEUROMODULATION
Jérôme Sallet, INSERM, Lyon, France and University of Oxford, UK. Co-authors: Jean-Francois Aubry (1), Davide Folloni (2), Lennart Verhagen (2), Nima Khalighinejad (2), Matthew Rushworth (2); 1. Institut Langevin, Paris, France; 2. University of Oxford
Transcranial ultrasonic stimulation (TUS) is an emerging method whereby low-intensity ultrasound is delivered through the skull to brain tissue, resulting in reversible disruption of neuronal activity at the targeted site. Although the exact mechanisms by which ultrasound effects neuromodulation are not fully characterized, the goal of this presentation is to show that the technique is safe and can be used to modulate brain activity and behaviour with good anatomical precision. TUS neuromodulatory effects were measured by examining relationships between activity in each targeted area and the rest of the brain, using resting-state functional magnetic resonance imaging (fMRI) collected under anaesthesia. Importantly, those targeted regions could be either superficial cortical areas (preSMA, frontopolar cortex) or deep subcortical structures (amygdala, basal forebrain). With the specific protocol used, we observed dissociable and focal effects on neural activity that could not be explained by auditory confounds. Furthermore, offline effects were shown to last for more than two hours post-stimulation. With such long-lasting effects, we were able to test in separate experiments the specific contribution of the perigenual ACC to counterfactual reasoning and of the lateral orbitofrontal cortex to credit assignment.
TALK 3: ULTRASONIC MODULATION OF HIGHER ORDER VISUAL PATHWAYS IN HUMANS
Chris Butler, University of Oxford and Imperial College London, UK. Co-authors: Braun V, Blackmore J, Cleveland R, University of Oxford
Transcranial ultrasonic stimulation (TUS) has been used to target primary sensory regions of the human brain. Its effect on higher-order cortical areas has not been studied. Moreover, concerns have recently arisen that TUS effects may be driven indirectly through stimulation of early auditory pathways. We investigated whether TUS can modulate higher-order visual processing both in superficial (middle temporal area (MT)) and deep (fusiform face area (FFA)) regions. We further examined the efficacy of auditory stimulus masking. Magnetic resonance imaging was used to map skull anatomy and functional regions of interest (MT and FFA) for each participant (n=16). Segmented imaging datasets formed the basis of 3D ultrasound simulations to determine transducer placements and source amplitudes. Thermal simulations ensured that temperature rises were <0.5 °C at the target and <3 °C in the skull. TUS (500 kHz, 300 ms 50% duty cycle bursts) was applied to MT or FFA whilst participants performed a visual motion or a face identity detection task. To control for non-specific effects, auditory masking was applied during the tasks. EEG data were collected throughout. Auditory masking reduced subjective stimulation detection to chance level and abolished auditory evoked potentials. Ultrasonic stimulation of MT led to facilitation of visual motion detection in the contralateral hemifield, with no effect upon face identity detection. Stimulation of FFA did not affect visual motion detection performance. We show that TUS can be used in humans to modify behaviour and electrophysiological activity in higher-order visual pathways in a task-specific and anatomically precise manner.
TALK 4: NONINVASIVE CNS MODULATION USING ULTRASOUND WITH OR WITHOUT BLOOD-BRAIN BARRIER OPENING
Elisa Konofagou, Columbia University, NYC
The brain is a formidable frontier for modulation, both of itself and of other organs in the body. Over the past several decades, ultrasound has been consistently shown to successfully probe brain activity transcranially. Our group has been studying the noninvasive stimulation and inhibition of the central nervous system both with and without blood-brain barrier (BBB) opening. When focused ultrasound is applied with intravenously administered microbubbles, the blood-brain barrier opens, and this has been shown to improve cognitive performance, such as spatial memory in mice and touch accuracy and reaction time in non-human primates, with effects lasting hours to months after opening. Without BBB opening or microbubbles, our group has shown that focused ultrasound is capable of noninvasively stimulating lateralized paw movement as well as sensory responses such as pupil dilation and eye movement when specific cortical and subcortical regions are targeted, demonstrating that ultrasound can trigger both motor and sensory brain responses. An overview of the aforementioned findings in rodents and non-human primates, as well as clinical translation, will be presented.