CNS 2021 |  Symposium Sessions

 

TITLES

1. THE SOCIAL CEREBELLUM AND ITS ROLE IN INTERACTION SEQUENCES
2. HOW DOES PRESTIMULUS BRAIN STATE INFLUENCE PERCEPTION?
3. THE NETWORK STRUCTURE OF EPISODIC THOUGHT
4. THE BEHAVIOURAL RELEVANCE OF NEURAL VARIABILITY
5. BEFORE THE PREDICTIONS: THE ROLES OF MEMORY ON PREDICTIVE PROCESSING
6. NEW FRONTIERS AND TECHNOLOGIES IN VISION REHABILITATION
7. NEW INSIGHTS ON MULTISENSORY BRAIN ORGANIZATION: FROM MULTIVOXEL PATTERN ANALYSIS (MVPA) TO LAMINAR FMRI
8. COMPUTATIONAL M/EEG MODELLING: BRIDGING SCALES TOWARDS BRAIN HEALTH AND DISEASE
9. STUDYING THE NEONATAL AND INFANT BRAIN: FROM NEUROSCIENCE TO INTERVENTIONS

THE SOCIAL CEREBELLUM AND ITS ROLE IN INTERACTION SEQUENCES

Chair: Frank Van Overwalle, Vrije Universiteit Brussel

Speakers: Frank Van Overwalle, Elien Heleven, Chiara Ferrari, Libera Siciliano

This symposium explores the role of the cerebellum in the social understanding of actions and emotions, and in particular in understanding others' mental states. Its aim is to alert the neuroscientific community that a large part of the social brain -- the cerebellum -- has been ignored in past research on social cognition. Nonetheless, understanding what the cerebellum does might dramatically increase our insight into what determines proficient social functioning. To illustrate, recent investigations show that dysfunction of the cerebellum at birth is the largest non-genetic risk factor for autism. The hypothesis is put forward that the cerebellum supports the detection and prediction of action sequences in the social domain. It is argued that understanding actions in their correct order is a prerequisite for inferring the mental states of other persons and for anticipating their next actions. Novel evidence is presented that largely supports this hypothesis. Research with cerebellar patients demonstrates that they are strongly impaired in identifying the sequences of actions, especially when taking into account the beliefs held by the observed agents. Other research shows that learning and generation of such action sequences is supported by the posterior cerebellum. Moreover, this symposium presents novel research on social sequence learning regarding inconsistencies in actions, in action trajectories, in emotions, and in trait inferences and predictions, as well as regarding explicit or implicit sequence learning and the effect of neurostimulation. Collectively, this research may broaden our understanding of the role of the posterior cerebellum in social cognition.

TALK 1: THE SOCIAL CEREBELLUM: NEW INSIGHTS AND EVIDENCE ON ACTION SEQUENCING

Frank Van Overwalle, Vrije Universiteit Brussel

Recent research has revealed that the posterior cerebellum (Crus II) plays a critical role in social reasoning. One hypothesis is that the cerebellum is responsible for the understanding and automatization of sequences of movements and perceived actions. Understanding actions in their correct order is a prerequisite for inferring the mental states of other persons, including their beliefs, and for anticipating their next actions. I will first give a brief overview of published research that investigated, for the first time, the social capacities of cerebellar patients with respect to social belief sequencing, as well as the underlying cerebellar substrates of social belief sequencing using fMRI. Next, I will demonstrate the role of sequences in social understanding in novel tasks, and the involvement of the cerebellum using fMRI. A first set of studies focuses on the role of the cerebellum in sequences of actions that imply the same personality trait. I will then discuss the effect of inconsistencies by introducing some trait-inconsistent actions. A related study focuses on the role of action prediction when a trait of a person is already known, and how this might lead to predictions of the person’s future actions and their order. A second set of studies focuses on implicit learning of false- and true-belief sequences, and how performance differs from structurally similar, but non-social, sequences. The third set of studies discusses how the learning of action trajectories may be subserved by the cerebellum.

TALK 2: THE CEREBELLAR ENGAGEMENT IN RECONSTRUCTING ACTION SEQUENCES: A STUDY ON CEREBELLAR PATIENTS

Libera Siciliano, Sapienza University of Rome, Italy

Recent advances on cerebellar function in the detection and generation of action sequences point to a role in sequencing social events. The sequencing of actions is a prerequisite for social cognition, since it may contribute to understanding and predicting others’ behaviours. Such a process includes the capacity to infer intentions and beliefs, an ability known as mentalizing and reported to be impaired in patients with cerebellar disease. To test the hypothesis of a cerebellar role in understanding social events specifically when sequential processing is required, we compared the performance of patients affected by a neurodegenerative cerebellar disease with that of matched healthy controls on the Picture Sequencing Task and the Stories Sequencing Task. Participants were required to understand and generate the correct chronological order of (1) well-known social routines, (2) social actions involving reasoning about mental states, or (3) non-social events. The events were presented as cartoons or sentences. The patients also underwent a brain MRI. The results showed cerebellar patients’ impairments in reconstructing social actions compared to non-social events, particularly when correct action ordering required participants to reason about the mental states of the protagonist. The impairments in action sequencing were in line with the identified patterns of grey matter reduction, in accordance with cerebellar functional topography. These results confirm a cerebellar role in modulating mentalizing abilities when the reconstruction of social action sequences is required, giving more insight into the specificity of the cerebellar role in the social domain.

TALK 3: THE SOCIAL CEREBELLUM: NEW SEQUENCING TASKS AND CEREBELLO-CORTICAL CONNECTIONS

Elien Heleven, Vrije Universiteit Brussel

An increasing number of studies has highlighted the importance of the cerebellum in social functioning, most often of its posterior part (i.e., Crus 1 and 2). One hypothesis states that the cerebellum's basic function of detecting and constructing internal models of motor sequences for the planning and execution of movements extended during human evolution to purely mental sequences, thereby facilitating (social) understanding. We introduce new tasks to investigate cerebellar involvement in the processing of different types of sequences, in which participants generated the correct chronological order of new or well-known (non-)social stories. A functional magnetic resonance imaging study showed strong cerebellar activation during sequence generation for all event types compared to passively viewing or reading events, and more so in the posterior Crus 2 for new social events involving agents’ beliefs compared to routine (non-)social events. Using dynamic causal modelling, we revealed closed-loop connections between the active social regions in the cerebellum and mentalizing regions in the cortex. These results confirm that the posterior cerebellum plays a critical role in understanding and constructing the correct order of new action sequences relevant for social cognition, and that it is strongly connected with social mentalizing regions in the cortex.

TALK 4: NEUROSTIMULATION OF THE AFFECTIVE CEREBELLUM: PROCESSING OF OTHERS’ EMOTIONAL EXPRESSIONS

Chiara Ferrari, University of Pavia, Italy

Consistent evidence suggests that the posterior cerebellum is an essential node of the neural network dedicated to emotional processing. We carried out a series of experiments in which we used transcranial magnetic stimulation (TMS) to interfere with cerebellar activity (in both the right and the left cerebellar hemisphere) while healthy participants discriminated the emotions expressed by faces and body postures. Preliminary results indicate that TMS affected participants’ discrimination performance when delivered over both the left and the right cerebellum. Moreover, in a further study, we demonstrated that inhibitory stimulation of the (left) cerebellum modulates corticospinal excitability (measured by motor-evoked potentials over the contralateral motor cortex) in response to the observation of emotional facial expressions. Overall, our findings point to an important contribution of the cerebellum to emotional processing and show that the posterior cerebellum can drive physiological changes associated with the perception of others’ facial expressions.

HOW DOES PRESTIMULUS BRAIN STATE INFLUENCE PERCEPTION?

Chair: Biyu Jade He, New York University School of Medicine

Speakers: Sepideh Sadaghiani, Avniel Ghuman, Biyu Jade He, Nathan Weisz

Sensory input does not enter a silent brain, but one that is constantly active. Despite enormous progress in understanding the spatiotemporal architecture and generative mechanisms of spontaneous brain activity over the past few decades, how spontaneous brain activity shapes online functional computation remains poorly understood. A long line of research has shown that prestimulus brain activity, measured with fMRI BOLD or EEG signals, predicts the success or failure of perception in threshold-level perception paradigms in the visual, auditory, and somatosensory modalities. Recent studies have further begun to tease apart the functional mechanisms embedded in this phenomenon. They have asked: 1) How exactly does prestimulus spontaneous activity influence perceptual processing? Does it increase noise or signal, or both? Does it selectively influence perception of specific contents, or indiscriminately influence perception of different contents? 2) Are there different types of spontaneous activity that influence perception in different ways? How do activity level and connectivity pattern respectively shape stimulus and perceptual processing? 3) How do spontaneous fluctuations of cortical activity relate to variations in pupil-linked arousal within the waking state? And how does spontaneous activity influence naturalistic vision and interact with saccades? 4) How does prestimulus spontaneous activity shape the evolution of post-stimulus brain responses? In this symposium, four speakers will cover recent progress on these topics made using a variety of human brain recording techniques (fMRI, M/EEG, intracranial recordings).

TALK 1: COGNITIVE CONTROL NETWORK STATES IMPACTING PERCEPTION

Sepideh Sadaghiani, University of Illinois at Urbana-Champaign

Perceptual decisions are modulated by prestimulus activity, network-specific connectivity, and distributed connectome topology. One approach to understanding this modulation is to study spontaneous trial-by-trial fluctuations of BOLD amplitude and connectivity during long baseline periods that are free of stimulus- and cue-evoked responses. This approach has revealed that perception, from simple signal detection to higher-order decisions, is biased by prestimulus activity in the respective stimulus-relevant sensory areas. Beyond sensory regions, the prestimulus state of cognitive control-related networks further impacts perception during particularly control-demanding tasks. For example, when the task relies on tonic alertness or vigilance, spontaneous baseline activity prior to successful vs. failed trials differs most strongly in the Cingulo-Opercular (alertness/salience) network. Another approach to understanding the perceptual impact of pre-stimulus states is to experimentally manipulate the baseline. This manipulation is often achieved by introducing a cue and studying how it engages cognitive control processes. Instead, we asked how endogenously maintained control processes emerge in the prestimulus baseline state when the cue is unavailable. Compared to trials where a cue informs about timing and content of an upcoming stimulus, the absence of such facilitatory information is met by increased prestimulus activity in the dorsal Fronto-Intraparietal (selective attention) network. This prestimulus effect differs from those unfolding during stimulus presentation, where the absence of the cue engaged the lateral Fronto-Parietal (executive control) network. Taken together, perceptual processes are modulated by both spontaneous and demand-driven changes in the prestimulus state of cognitive control networks.

TALK 2: DIFFERENT PRESTIMULUS EXCITABILITY/CONNECTIVITY STATES SUPPORT PERCEIVING ANYTHING VS. A THING

Nathan Weisz, University of Salzburg (Austria)

Driven by various conceptual frameworks and flanked by increasing evidence, progress has been made in understanding the stimulus-induced neural dynamics supporting conscious access. However, our understanding of relevant prestimulus patterns has not advanced equally. Several M/EEG studies utilizing near-threshold (NT) stimuli have shown reduced activity in the alpha frequency range in putatively task-relevant sensory processing regions prior to stimuli that are subsequently reported. In parallel, some studies also indicate that connectivity states influence perceptual reports. However, to what extent prestimulus alpha and connectivity patterns provide distinct information is not clear. In our working framework, we interpret alpha oscillations in task-relevant sensory regions as signalling fluctuating excitability, whereas the connectedness of these regions forms pre-established communication pathways. In a series of experiments we investigated prestimulus excitability and connectivity patterns using NT stimuli in different modalities and bistable paradigms in the visual system. These paradigms differ in the extent to which patterns predisposing specific perceptual contents are probed. For the NT experiments, consistent prestimulus alpha effects were identified, accompanied by connectivity effects that overall indicate enhanced network integration of task-relevant regions prior to conscious perception. In the case of bistable perception, however, only prestimulus connectivity effects were observed, which correlated with neural measures of poststimulus processing across participants, in line with our framework. Overall, our research supports a strong influence of prestimulus connectivity patterns in shaping the perceptual fate of sensory stimuli, an influence that can be independent of the excitability state of task-relevant regions.

TALK 3: ENDOGENOUS ACTIVITY MODULATES STIMULUS AND NEURAL TUNING AND PREDICTS PERCEPTUAL BEHAVIOR

Avniel Ghuman, University of Pittsburgh

Perception reflects not only sensory input, but also the endogenous neurocognitive state when sensory input enters the brain. To help understand how endogenous activity modulates perception, we developed a novel two-stage statistical machine learning analysis procedure. This procedure provides a statistical metric of whether endogenous activity modulates the stimulus response, a trial-by-trial metric of the degree of modulation to examine trial-by-trial brain-behavior correlations, and an avenue to probe what aspects of the endogenous activity modulate stimulus responses. We applied this procedure to intracranial recordings from 30 pre-surgical epilepsy patients to show that endogenous activity in category-selective neural regions modulates the stimulus response for that category and is unrelated to endogenous activity in regions selective for other visual categories. The results further show that trial-by-trial reaction times correlate with the degree to which the endogenous activity influences the stimulus response, again in a neural-circuit-specific manner (e.g. endogenous activity in face-selective regions correlated with reaction times for face stimuli, but not for other visual categories). Additionally, the results suggest that endogenous oscillatory dynamics modulate the excitability of the local neural population. Finally, we have begun applying these methods to human intracranial data collected during real-world natural vision, with similar results when examining the effects of pre-saccadic neural activity on post-saccadic visual responses. These results suggest that oscillatory endogenous activity can modulate neural tuning by altering local excitability and influence behavior in a circuit- and stimulus-specific manner, reflecting a potential mechanism by which endogenous neural states facilitate and bias perception.

TALK 4: A DUAL ROLE OF PRESTIMULUS SPONTANEOUS NEURAL ACTIVITY IN VISUAL OBJECT RECOGNITION 

Biyu Jade He, New York University

Preexisting brain states have enormous influences on conscious perception. Depending on the preexisting brain state at the time of stimulus arrival, a physically identical stimulus may be consciously perceived or not, a visual object may be consciously recognized or not, and we may perceive something that is not out there. In this talk, I will describe our recent work that uses magnetoencephalography (MEG) and eye-tracking to investigate the role of prestimulus spontaneous activity in shaping conscious visual perception. Using multivariate analysis applied to sensor-level activity patterns recorded before stimulus presentation, we identified two separate spontaneous neural processes that influence visual recognition in distinct manners: a non-content-specific (NCS) process, which disregards stimulus category and correlates with pupil-linked arousal, and a content-specific (CS) process, which facilitates recognition in a category-specific manner and does not correlate with pupil size fluctuations. The two processes have doubly-dissociable influences on perceptual processing: the NCS process shifts detection criterion whereas the CS process enhances perceptual sensitivity. In addition, prestimulus pupil size predicts subsequent perceptual decisions, and this effect could be explained by cortical activity that covaries with spontaneous pupil size fluctuations. Lastly, prestimulus brain activity nonlinearly interacts with stimulus-triggered brain responses to shape the trajectory of post-stimulus brain activity. Our findings reveal coexistence of different spontaneous processes that influence perception in distinct ways.

THE NETWORK STRUCTURE OF EPISODIC THOUGHT

Chair: Rose Cooper, Boston College

Speakers: Rodrigo Braga, Rose Cooper, Janice Chen, Alex Barnett

Episodic thought, the ability to construct and recall representations of specific events, is a complex phenomenon supported by the default network. Two emerging lines of research provide new insight into the multifaceted nature of both the default network and episodic representations. First, analyses of brain network architecture have pointed to a fractionation of the default network into functionally distinct subsystems. Second, naturalistic paradigms have demonstrated a diverse and hierarchical neural representation of events. This symposium seeks to highlight and integrate these parallel lines of research to better understand the structure of the default network and its relation to the contents of episodic thought. In the first talk, Rodrigo Braga will introduce evidence from individual-focused analyses for the presence of distinct distributed networks within canonical default regions, with the specialization of a medial temporal lobe-linked network for episodic projection. Alex Barnett will explore functional connectivity of default subnetworks to the hippocampus and the unique kinds of event information represented by different subnetworks. Rose Cooper will deconstruct the episodic functions supported by regions and connections within a posterior medial network during recollection and movie watching. Finally, Janice Chen will show how audiovisual movies can be used to understand how we structure temporally-evolving events and how default network regions represent and transform event-specific information. Together, we highlight that much can be learned about episodic thought by considering the multifaceted nature of real-world events in the context of brain network architecture.

TALK 1: DETAILED ANATOMY OF A DISTRIBUTED NETWORK ACTIVATED DURING EPISODIC PROJECTION

Rodrigo Braga, Northwestern University Feinberg School of Medicine

Increased activity within a distributed set of regions that sit at the transmodal apex of association cortex (i.e. the “default network”) is commonly observed during tasks requiring internally-oriented episodic processes, including recollection and imagining the future. Network estimation within individuals has revealed that the canonical default network comprises at least two distinct parallel distributed networks. One shows connectivity with circumscribed regions of the posterior medial temporal lobe, observed at or near the parahippocampal cortex using 3T MRI, and with a further region observed at or near the subiculum when resolved at high-resolution using 7T MRI. The observed links to multiple regions of the medial temporal lobe raised the hypothesis that this distributed network serves mnemonic functions. Data from 18 repeatedly sampled individuals were used to test this hypothesis using a combination of functional connectivity and task contrasts. Individual-focused analyses revealed that the distributed network shows increased activity during task contrasts targeting episodic projection (remembering and imagining the future). Evidence for increased activation was observed throughout the medial temporal lobe-linked network. These findings suggest that specialization for episodic projection occurs at the level of the entire distributed network, and raise questions regarding the precise episodic processes supported by the network broadly and by specific network regions.

TALK 2: ORGANIZATION OF CORTICO-HIPPOCAMPAL NETWORKS IN THE HUMAN BRAIN

Alex Barnett, University of California, Davis

Episodic memory is thought to depend on interactions between the hippocampus and a set of closely interconnected regions that comprise the default mode network (DMN), a large-scale network that has been identified with resting-state fMRI. Here, using data-driven analyses of resting-state fMRI data to characterize cortico-hippocampal network connectivity, we identified a discrete set of subnetworks that interact with the hippocampus. Specifically, we found that the hippocampus is closely affiliated with the DMN and with a “Medial Temporal Network” (MTN) that includes regions in the medial temporal lobe and retrosplenial cortex. The DMN could be further subdivided into three subnetworks: a “Posterior Medial” subnetwork comprising regions in the posterior cingulate, lateral parietal, and dorsal prefrontal cortex; an “Anterior Temporal” subnetwork comprising regions in the temporopolar, lateral orbitofrontal, and dorsal medial prefrontal cortex; and a “Medial Prefrontal” subnetwork comprising regions in the ventral medial prefrontal and entorhinal cortex. These cortico-hippocampal networks vary in their functional connectivity along the hippocampal long axis, and analyses of an independent task-fMRI dataset revealed that the three DMN subnetworks represent different kinds of information during memory-guided decision-making. Finally, a data-driven meta-analysis of functional imaging studies of cognition suggests new hypotheses regarding the functions of the MTN and DMN subnetworks, thus providing a framework to guide future research on the neural architecture of episodic memory.

TALK 3: FUNCTIONAL HETEROGENEITY OF THE POSTERIOR MEDIAL NETWORK DURING EPISODIC TASKS

Rose Cooper, Boston College

Brain regions within a posterior medial (PM) network are characterized by sensitivity to episodic tasks, and they also demonstrate strong functional connectivity as part of the default network. Despite its cohesive structure, delineating the intranetwork organization and functional diversity of the PM network is crucial for understanding its contributions to multidimensional episodic thought. By studying the PM network during associative memory tasks and movie watching, we have shown that distinct episodic functions are associated with both activity and connectivity of constituent regions, from tracking precise spatial context to representing the overall quality and theme of an event. In recent work, we identified distinct subsystems that may support a hierarchical network structure: a Ventral PM subsystem (retrosplenial cortex, parahippocampal cortex, posterior angular gyrus) and a Dorsal PM subsystem (medial prefrontal cortex, hippocampus, precuneus, posterior cingulate cortex, anterior angular gyrus). Connectivity dynamics during movie watching demonstrate that the distinction between PM subsystems is functionally relevant: whereas both Dorsal and Ventral PM subsystems track movie content, only Ventral PM connections increase in strength at event transitions and support memory for scene context. Overall, these findings provide a model of PM network pathways and reveal distinct functional roles of subsystems associated with event cognition.

TALK 4: BRAIN DYNAMICS UNDERLYING MEMORY FOR CONTINUOUS NATURAL EVENTS

Janice Chen, Johns Hopkins University

The world confronts our senses with a continuous stream of rapidly changing information. Yet, we experience life as a series of episodes or events, and in memory these pieces seem to become even further organized. How do we recall and give structure to this complex information? Recent studies have begun to examine these questions using naturalistic stimuli and behavior: subjects view audiovisual movies and then freely recount aloud their memories of the events. Within the default network, we find brain activity patterns that are unique to individual events, and which reappear during verbal recollection; robust generalization of these event-specific patterns across people; systematic transformation of the activity patterns between encoding and recall; and memory effects driven by the network structure of links between events in a narrative. Both the behavioral and neural phenomena replicate across multiple movies with a wide variety of semantic content. These observations construct a picture of how the default network contributes to our ability to comprehend and recall real-world events that unfold continuously across time.

THE BEHAVIOURAL RELEVANCE OF NEURAL VARIABILITY

Chair: Leonhard Waschke, MPI for Human Development

Speakers: Leonhard Waschke, Diego Vidaurre, Amy Ni, Marieke Scholvinck

Neural activity is highly variable across time at a variety of temporal and spatial scales, from single-cell spiking on the order of milliseconds to ensemble activity measured by BOLD fMRI with a resolution in the second range. Although this variability was previously regarded as noise, recent findings have highlighted its impact on sensory processing and behaviour. In this symposium, we will collect the most recent insights regarding the behavioural relevance of neural variability by gathering evidence from different species, methods, and aspects of behaviour. We will highlight the involvement of single-cell and population variability in selective attention and sensory processing in non-human animals. Furthermore, we will showcase how non-invasive approximations of neural variability in humans link to cognitive performance and reflect dynamic functional networks. Finally, we will discuss potential links between different spatial and temporal scales of variability, aiming for a comprehensive account that captures the behavioural relevance of neural variability. This symposium brings together international researchers at different career stages and from different fields to bridge gaps, highlighting the temporal variability of brain activity as a key, undervalued dimension for understanding brain-behaviour associations.

TALK 1: IMPROVED SELECTIVE ATTENTION DUE TO A SYSTEMICALLY ADMINISTERED STIMULANT CORRESPONDS TO DECREASED CORRELATED VARIABILITY

Amy Ni, University of Pittsburgh

Stimulants such as methylphenidate are commonly prescribed to treat Attention Deficit Hyperactivity Disorder (ADHD), but the neuronal mechanisms that underlie improved selective attention with these drugs are not well understood. We measured the behavioral and neuronal effects of systemically administering methylphenidate to rhesus monkeys. Specifically, we measured the monkeys’ performance on a visual change-detection task with cued spatial attention after administering either the drug or a placebo, while recording the activity of a population of neurons in visual area V4 using chronically implanted microelectrode arrays. We found that systemically administering methylphenidate selectively improved behavioral performance at only the attended visual location. Further, this spatial selectivity was reflected in the neuronal population responses in V4: Methylphenidate decreased the correlated variability (the shared trial-to-trial variability of pairs of neurons in response to repeated presentations of the same stimulus) in the V4 population only when the receptive fields of the neurons overlapped the attended visual location. Finally, we found a consistent, quantitative relationship between improvements in behavioral performance with the drug and decreases in correlated variability. In conclusion, spatially specific changes in the correlated variability of visual neuron populations may underlie increases in spatially selective visual attention due to the systemic administration of a stimulant drug.

TALK 2: THE CONTINUOUS LOOP BETWEEN BRAIN ACTIVITY AND BEHAVIOUR

Marieke Scholvinck, Ernst Strüngmann Institute for Neuroscience, Frankfurt

Humans may not be good at multitasking – but their brains are! At any given moment, a neuronal population may be involved in multiple cognitive processes, such as perceptual decision making, attention, and learning. Cognitive neuroscience studies have overwhelmingly focused on studying how each of these processes affects neuronal activity in isolation. Consequently, neuronal activity resulting from concurrently ongoing processes has been discarded as ‘noise’. In my talk, I will argue for a different experimental approach, one that recognises this noise as, in fact, continuously ongoing internal brain activity. I will first use a divided visual attention task in monkeys to explore how such internal dynamics in visual cortex affect neuronal responses to stimuli and behaviour. I will then give a glimpse of our current work, in which we study how internal neuronal dynamics continuously affect the behaviour of mice and monkeys in a virtual reality environment.

TALK 3: ELECTROPHYSIOLOGICAL VARIABILITY AND ITS ROLE FOR BEHAVIOUR

Leonhard Waschke, MPI for Human Development

Evidence continues to mount that perception and action do not depend entirely on available sensory information. Instead, animal physiology has highlighted the impact of attention- and arousal-related variations in temporal neural variability on sensory processing and behavioural performance across several different tasks. Testing and transferring these findings to human cognitive neuroscience is crucial for understanding inter- and intra-individual differences in human behaviour. Here, I will present evidence for the ability of novel information-theoretic and spectrally based metrics of electroencephalographic (EEG) activity to approximate neural variability on a wide range of temporal scales. Specifically, I will demonstrate the sensitivity of these metrics to neurochemical alterations, attention-related changes in ensemble activity, and complex sensory information. I will showcase the relevance of temporal neural variability for sensory processing and behavioural performance in a number of different sensory decision-making paradigms. These results transfer concepts and hypotheses between different fields of neuroscience, and argue for a principled investigation of neural variability and its role in human perception and action.

TALK 4: BEHAVIOURAL RELEVANCE OF SPONTANEOUS, TRANSIENT BRAIN NETWORK INTERACTIONS IN FMRI  

Diego Vidaurre, Oxford University & Aarhus University

How spontaneously fluctuating functional magnetic resonance imaging (fMRI) signals in different brain regions relate to behaviour has been an open question for decades. Correlations in these signals, known as functional connectivity, can be averaged over several minutes of data to provide a stable representation of the functional network architecture for an individual. However, whether dynamic changes in fMRI-derived FC reflect distinct and transient patterns of communication between neuronal populations is still controversial. In this talk, I will speak about methods that can be used to assess and compare the relation between time-varying functional connectivity, time-averaged functional connectivity, structural brain data, and non-imaging subject behavioural traits. Using these methods, I will show that time-varying fMRI functional connectivity, detected at time-scales of a few seconds, has associations with some behavioural traits, and that these associations are not dominated by anatomical differences. Despite time-averaged functional connectivity accounting for the largest proportion of variability in the fMRI signal between individuals, some aspects of intelligence, for example, could only be explained by time-varying functional connectivity. Consequently, the finding that time-varying fMRI functional connectivity has a unique relationship to population behavioural variability could be used as an argument that it might reflect transient neuronal communication fluctuating around a stable neural architecture.
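As a toy illustration of the distinction between time-averaged and time-varying functional connectivity, the sliding-window sketch below (a deliberately simple stand-in for the statistical methods discussed in the talk; all data and window parameters are invented) shows how coupling that flips sign over time can average out to near zero in the static estimate while being clearly visible in the time-varying one.

```python
import numpy as np

def time_averaged_fc(ts):
    """Static functional connectivity: one correlation matrix over the full recording.
    ts has shape (time, regions)."""
    return np.corrcoef(ts.T)

def time_varying_fc(ts, width, step):
    """Time-varying functional connectivity: one correlation matrix per sliding window."""
    return np.array([np.corrcoef(ts[s:s + width].T)
                     for s in range(0, ts.shape[0] - width + 1, step)])

# Hypothetical two-region recording whose coupling flips sign halfway through
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = np.concatenate([x[:500], -x[500:]]) + 0.3 * rng.standard_normal(1000)
ts = np.column_stack([x, y])

static = time_averaged_fc(ts)                       # off-diagonal near 0: the flip averages out
windows = time_varying_fc(ts, width=100, step=100)  # strongly positive early, negative late
```

The point of the toy example: a near-zero time-averaged correlation can hide strong, transient connectivity states – which is why time-varying estimates can carry behavioural information that the static average does not.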

BEFORE THE PREDICTIONS: THE ROLES OF MEMORY ON PREDICTIVE PROCESSING

Chair: Yee Lee Shing, Goethe University Frankfurt

Speakers: Johanna Bergmann, Martina Vilas, Javier Ortiz-Tudela, Eelke Spaak

The study of the brain within a predictive processing framework has become prominent in the last decade. One core (often implicit) assumption that underlies this view is that some form of stored information is available for the brain to generate predictions; that is, predictions should derive from memory. However, the interactions between predictions and the memory from which they are derived have not been the focus of research until recently. This symposium brings together researchers working at the intersection of these two fields to advance our understanding of memory as a driving force behind prediction generation in the hypothesis-testing brain. In Talk 1, mental imagery and visual illusions are examined together with laminar fMRI, showing that the information carried by prediction signals is implemented differently across the different layers of the early visual cortex. In Talk 2, the neural implementation and the behavioral consequences of scene-to-object predictions, which are generated from either spatial or semantic regularities, will be presented. In Talk 3, MEG and ECoG are used to explore the temporal signature of prediction error detection when predictions are drawn from either semantic or episodic knowledge. In Talk 4, fMRI is used to show that prediction signals arriving at the early visual cortex can carry different types of content (i.e., scene vs object information) and how this top-down modulation differs between episodic and semantic retrieval. Finally, we will have a panel discussion on the current theoretical understanding of the neural mechanisms underlying the interplay between memory and prediction.

TALK 1: IMAGINARY AND ILLUSORY INFORMATION IS DECODABLE FROM DIFFERENT LAYERS OF PRIMARY VISUAL CORTEX (V1)

Johanna Bergmann, Max Planck Institute for Human Cognitive and Brain Sciences

When we engage in mental imagery, we see something with our ‘inner eye’ that may not be physically present. Likewise, visual illusions can induce a sensory experience that does not correspond to the physical input. Despite the fact that both experiences are internally generated, they feel vastly different: visual illusions appear ‘real’ and are under limited volitional control. In contrast, mental imagery usually feels distinct from perception. Visual illusions are thought to arise from interactions within visual cortices, whereas mental imagery involves a wide brain network. V1 has been shown to be involved in both, but it is unclear whether processing differences already emerge at this level. We use laminar fMRI combined with population receptive field mapping and multivariate pattern analysis to investigate this question. Neural projections to V1 follow characteristic connectivity patterns: most feedforward input from the eyes arrives in mid-layers, whereas feedback from other brain areas is sent to deep and superficial layers. Interestingly, projections to superficial layers stem mostly from nearby regions, whereas projections to deep layers come mostly from further-away areas. In two separate experiments, we find characteristic layer-wise decoding patterns for visual imagery and illusions: during mental imagery, stimulus information is only decodable from deep V1 layers. In contrast, illusory stimulus information is only decodable from superficial layers. Our results indicate that characteristic signatures of mental imagery and illusory processing may already emerge at the level of V1, and suggest that feedback information in superficial and deep V1 layers may have different functional roles and experiential outcomes.

TALK 2: SPATIAL (SEMANTIC) PRIORS IN MEMORY AND THEIR EFFECT ON (UN)CONSCIOUS PERCEPTION

Eelke Spaak, Donders Institute for Brain, Cognition and Behaviour

Humans excel at learning where to expect certain objects, given their spatial context. Several questions remain, however, concerning this type of learning: (1) which neural mechanisms are involved in acquiring and exploiting such knowledge?; (2) can this type of learning happen outside of conscious awareness?; and (3) can these perceptual effects be reconciled with predictive processing, which posits reduced, rather than enhanced, processing of context-congruent items? I will present two strands of evidence. First, a combined behavioural and MEG experiment, in which learning of spatial regularities during visual search was underpinned by hippocampal theta activity. After an initial learning phase, both behaviour and neural activity underwent a sudden switch to an exploitation phase, characterized by prefrontal theta activity instead. Remarkably, greater awareness of the learned regularities led to reduced behavioural benefits. Second, a large-sample online behavioural experiment, in which observers were presented with objects in semantically congruent or incongruent surroundings. Counterintuitively, perception was impaired for scene-congruent objects, an effect unexplained by low-level confounds but instead related to subjective congruency ratings. This suggests that a scene-induced prior over expected objects is likely to ‘explain away’ congruent objects, whereas incongruent objects elicit prediction errors. Time permitting, I will present multivariate MEG results that shed light on the interactions between scene priors and object representations. Taken together, these findings further our understanding of how perception and memory continuously interact, and demonstrate that this interaction is well understood through the lens of predictive processing.

TALK 3: SCHEMA- AND EPISODIC-BASED PREDICTIONS DURING VISUAL NARRATIVE PERCEPTION

Martina G. Vilas, Max Planck Institute for Empirical Aesthetics

Predictive processes might rely on different representational systems depending on the nature of the mediating experiences. During novel situations (e.g. watching a movie for the first time), abstract, schematic models might guide predictions towards typical event patterns extracted from similar experiences. In contrast, when reexperiencing events linked to a unique autobiographical fingerprint (e.g. watching a movie for a second time), episodic-memory retrieval processes might serve as the source of predictions. We used magnetoencephalography (MEG) and electrocorticography (ECoG) recordings to investigate how these two types of predictions are instantiated in the brain. Participants viewed visual narratives (comics) depicting the unfolding of real-world actions whose ending either complied with the schematic model (congruent) or did not (incongruent). To create predictions based on episodic memory, we repeated a subset of these comics. These manipulations enabled us to track the neurophysiological responses to predictions stemming from schema and episodic-memory processes. We found N400 responses – a measure of prediction error – to endings inconsistent with schema-based expectations. Episodic trials in which the same incongruent ending was repeated, and was thus no longer unexpected, elicited comparable N400 signals. Decoding analyses also revealed a similar pattern of responses for both conditions. This suggests that episodic memory might not overwrite schema-based predictions and error-detection mechanisms, at least when measured non-invasively with MEG. Differences between schema- and episodic-based predictions might instead rest on more complex patterns of oscillatory activity and cortico-hippocampal interactions, and on the content and structure of the information conveyed by the anticipatory mechanisms.
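For readers unfamiliar with the N400 measure, the minimal sketch below shows the standard logic: average epochs into an ERP and compare the mean amplitude in a latency window (conventionally around 300-500 ms) between conditions. This is purely illustrative – it is not the authors' MEG/ECoG pipeline, and the simulated deflection, trial counts, and timing are all assumptions.

```python
import numpy as np

def window_amplitude(epochs, times, tmin=0.3, tmax=0.5):
    """Mean amplitude of the trial-averaged ERP in a latency window (N400-style measure).
    epochs has shape (trials, time)."""
    erp = epochs.mean(axis=0)
    mask = (times >= tmin) & (times <= tmax)
    return float(erp[mask].mean())

# Simulated data: incongruent endings carry an extra negative deflection near 400 ms
rng = np.random.default_rng(2)
times = np.linspace(0.0, 0.8, 200)
n400 = -3.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
congruent = rng.standard_normal((50, times.size))
incongruent = rng.standard_normal((50, times.size)) + n400

amp_con = window_amplitude(congruent, times)
amp_inc = window_amplitude(incongruent, times)  # more negative: a prediction-error signature
```

The interesting twist in the talk is precisely that repeated (episodically familiar) incongruent endings still produce this negative window amplitude, rather than the reduction one might expect.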

TALK 4: FEEDBACK SIGNALS FROM EPISODIC AND SEMANTIC CONTENT IN NON-STIMULATED AREAS OF V1/V2

Javier Ortiz-Tudela, Goethe University Frankfurt, Germany

Navigating through the world requires us to rapidly integrate incoming information (i.e. feedforward signals) with either context-linked/episodic or context-free/semantic knowledge (i.e. feedback signals). Early visual cortex has been shown to represent higher-order information in the absence of sensory input, which is thought to reflect feedback processes. However, the nature of these signals and their relationship with stored memory representations is still unclear. We used fMRI to relate prediction signals in the early visual cortex to episodic and semantic memory representations. Our participants studied cartoon images depicting common real-world locations, each including one target object. On the following day, an object retrieval task that included the studied scenes (i.e., episodic trials) as well as a new set of scenes never seen before (i.e., semantic trials) was performed inside the scanner. Critically, the scenes were used as cues and the target objects' position was visually occluded. We used functional retinotopy to isolate activity from the voxels in V1 and V2 that responded exclusively to the occluded part of the visual field, and multivariate pattern analysis to look for the reinstatement of memory-specific content during retrieval. The results show that non-stimulated areas of the visual cortex carry both concurrent scene information and distant (memory-based) object information. Moreover, representational similarity analysis revealed that object information only reaches the early visual cortex when retrieved through an episodic route. We will relate these findings to activity in distant areas involved in episodic (i.e., hippocampus) and semantic (i.e., ventromedial prefrontal cortex) retrieval.

NEW FRONTIERS AND TECHNOLOGIES IN VISION REHABILITATION

Chair: Benedetta Franceschiello, Laboratory for Investigative Neurophysiology, CHUV and UNIL

Speakers: Ruxandra Tivadar, Monica Gori, Benedetta Franceschiello, Serge Picaud

According to the World Health Organisation, 2.2 billion people live with a vision impairment or with blindness, and preventable causes account for 80% of the total global burden. Therefore, visual neuroscience and vision rehabilitation constitute frontier areas with significant opportunities and implications for applied, basic, and clinical domains. This symposium brings together researchers whose work spans cutting-edge laboratory-based and field research on the development of new technologies and protocols for understanding vision, its breakdown, and its rehabilitation. The research covered in the symposium spans 1) the development and application of digital haptics, 2) innovations in retinal prostheses, 3) the application of vision rehabilitation strategies with children, and 4) the development of MRI-based tools for studying the structural-functional integrity of the visual system from eye to brain. These advancements in the field will be presented and discussed in terms of the hurdles already overcome and the challenges ahead for widespread application in public health.

TALK 1: DIGITAL HAPTICS IN VISION REHABILITATION

Ruxandra Tivadar, Cognitive Computational Neuroscience Group, University of Bern

Digital haptics are relevant as a sensory substitution and vision rehabilitation tool. Digital haptics use ultrasonic vibrations on screens to render textures through reduction of the local friction felt by an actively exploring finger. Digital haptics are a significant advance over existing approaches because they require less training time, are non-invasive, and are more ergonomic. First, we show how sighted and visually impaired participants quickly learn to interact with the device and in turn use digital haptics to generate mental representations of objects that they can then manipulate in a mental rotation paradigm. Second, we show that digital haptics can also be applied in more naturalistic circumstances. In navigation experiments, digital haptics can convey the spatial layout of an environment and assist individuals in the reconstruction and navigation of this space. In visual dual-task settings, haptic feedback significantly biases attention towards, and improves performance on, a central visual task, as compared to visual and multisensory (i.e. visual and haptic) feedback.

TALK 2: VISUAL RESTORATION BY A PHOTOVOLTAIC RETINAL PROSTHESIS AND OPTOGENETIC THERAPY

Serge Picaud, Institute of Vision, INSERM, CNRS, Sorbonne Université

Following photoreceptor degeneration, retinal prostheses can restore some visual perception by stimulating residual retinal neurons, but current devices fail to provide face recognition or autonomous motion in an unknown environment. We here tested the spatio-temporal resolution of a novel photovoltaic prosthesis and of optogenetic therapy, both in non-human primates. In an ex vivo model of blindness, we demonstrated that some primate retinal ganglion cells can respond to one unit of the infrared-sensitive photovoltaic prosthesis but not to its neighbours at a 100 µm pitch. In vivo, activation of a single unit generated a behavioural saccadic response, thereby indicating that the implant elicited a percept in the primate. In parallel, we injected AAV viral vectors coding for ChrimsonR, a microbial opsin, into living primates to select the most efficient vector. We selected AAV2-7m8 coding for ChrimsonR-tdTomato based on the highest number of light-sensitive cells when the retina was isolated and stimulated on a multielectrode array. These recordings further showed that cells responded to light stimulation shorter than 30 ms, and thus compatible with video-rate image presentation. The spatial resolution allowed discrimination of moving bars or letter recognition. These results demonstrate the functional efficacy of the PRIMA photovoltaic retinal prosthesis and of optogenetic therapy using ChrimsonR in non-human primates. These preclinical studies have paved the way for clinical trials in patients affected by age-related macular degeneration and retinitis pigmentosa, respectively.

TALK 3: MULTISENSORY INTEGRATION DEVELOPMENT FOR REHABILITATION

Monica Gori, Italian Institute of Technology

In 2014, the number of blind children below 15 years of age was estimated at 19 million. Visually impaired children tend to manifest impairments in the motor, perceptive, and social domains. The creation of new technological devices that can be used early in life is therefore essential. However, despite the massive improvement of technological solutions specifically designed for visually impaired users, many of these solutions are not widely accepted by adults and are not suitable for young children. When visual information is unavailable, the natural visual sensory feedback associated with body movement is crucial for the development of spatial representation. To restore this important sensory-motor association, we recently developed and validated a new rehabilitation device called ABBI (Audio Bracelet for Blind Interaction; www.abbiproject.eu), to be used by children with visual impairments from the first years of life. Audio thresholds, motor performance, and social interaction skills were measured with psychophysical and motion-tracking methods before and after three months of training with the audio bracelet. Our results suggest that the training, performed with 21 visually impaired children between 6 and 15 years of age, rehabilitated spatial representation and social skills. No improvement was observed in the control group. Similar training was also effective in adults and in younger visually impaired children. In the presentation, I will show the ABBI device's rehabilitation results and other similar tools we have recently developed and validated in young visually impaired children.

TALK 4: MIME - MAGNETIC RESONANCE IMAGING OF THE MOVING EYE

Benedetta Franceschiello, Laboratory for Investigative Neurophysiology, CHUV and UNIL

The functional interplay between eye and brain during vision is a research domain that has not yet been systematically explored. A lack of adequate methods is likely a major contributor to this knowledge gap. Magnetic resonance imaging (MRI) is a particularly promising non-invasive technique because it can provide measurements related both to tissue/organ structure and to regional neural activity. However, eye-motion artefacts have prevented the application of MR techniques to imaging the eye, thereby impeding simultaneous imaging of the eye and brain. In this talk I will present our patented structural MRI protocol, which allows dynamic acquisitions of the eye while it moves during quasi-naturalistic vision, in a field of view that permits imaging of the eyes and brain simultaneously. To test the efficacy of this method, eye movements and axial lengths of the participants' eyes – as extracted from the MR images – were compared with eye-tracker measurements and optical biometry, respectively. This new non-invasive technology can estimate the rotation axes from the MR images with up to 97% accuracy with respect to the eye-tracker hardware. The high-resolution MRI scans of the human eye (1 mm³) – acquired during natural movement – permit quantification of the optical axial length with an accuracy of the same order of magnitude as that obtained with ocular biometry. Finally, I will discuss the possible applications of this technique and the new frontiers it opens in both ophthalmic MRI and neuroscience.


NEW INSIGHTS ON MULTISENSORY BRAIN ORGANIZATION FROM MULTIVOXEL PATTERN ANALYSIS (MVPA) TO LAMINAR FMRI

Chair: Anna Gaglianese, University Hospital Center, University of Lausanne

Speakers: Davide Bottari, Uta Noppeney, Anna Gaglianese, Olivier Collignon

In recent years, multisensory research has changed our knowledge about brain functional development and specialization. Thanks to human brain imaging techniques such as fMRI and EEG, neuronal recordings via intracranial measurements, and advanced signal analysis approaches, we are starting to disentangle the multisensory nature of primary and higher-order brain regions. This symposium brings together emerging techniques and signal analysis approaches to explore the mechanisms of multisensory integration in the human brain. The speakers will describe the brain activation patterns underlying the convergence and integration of information from different senses within low- and high-level cortices. The research highlighted in this symposium takes advantage of advanced signal analysis techniques, such as multivariate pattern analysis and population receptive field mapping, to characterize the multisensory nature of brain regions canonically described as highly specialized for a single sensory modality. Furthermore, sub-millimetre measurements undertaken with ultra-high-field MRI at 7T will shed new light on the multisensory and attentional mechanisms that regulate sensory processing at the spatial scale of neuronal ensembles. These findings have implications for how the brain reorganizes after sensory loss and for clinical applications such as rehabilitation programs that aim to restore function through other sensory modalities.

TALK 1: VOXEL-WISE MODELLING REVEALS SOUND ENVELOPE REPRESENTATION IN PRIMARY VISUAL CORTEX

Davide Bottari, Molecular Mind Lab, IMT School for Advanced Studies Lucca, Lucca, Italy

Recent studies have revealed that sub-regions of V1 can be modulated by, or directly respond to, auditory stimulation. However, whether specific acoustic features are mapped in striate cortex is unknown. To address this issue, we combined envelope modeling of different natural or synthetically derived sounds with measurement of the fMRI BOLD signal in the absence of retinal input. Four categories of acoustic stimuli characterized by natural amplitude modulations were presented while participants performed an infrequent-target detection task: (1) words (speech-specific, imaginable stimuli); (2) pseudowords (derived from words, but avoiding semantics); (3) artificial noise-vocoded sounds (generated from words but flattening the original spectral structure); (4) bird chirps (non-speech naturalistic sounds). We then performed a cross-validated searchlight-based principal component regression, aiming to reconstruct the envelope power in the low (2-6 Hz) and high (6-10 Hz) frequency ranges, in both striate cortex and temporal areas. For all categories of acoustic stimuli, results revealed the expected patterns of activation in the temporal areas classically involved in the extraction of features related to the amplitude modulation of sounds. Remarkably, primary visual cortex was also engaged by all stimulus categories and in both frequency ranges. These results reveal that acoustic-feature representations can be observed in striate cortex regardless of the presence or absence of semantic content and fine spectral structure; moreover, the tuning of this region was not speech-specific. The present results provide clear evidence that V1 receives and maps low-level acoustic information, such as sound amplitude modulations, and expand our knowledge of non-visual processing in V1.
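The core of the analysis – cross-validated principal component regression from multivoxel patterns to envelope power – can be sketched in a few lines of NumPy. This is a generic reconstruction of the approach, not the authors' searchlight code; the data, dimensionalities, and component count below are made up.

```python
import numpy as np

def pcr_predict(X_train, y_train, X_test, n_components=10):
    """Principal component regression: project onto training-set PCs, then fit OLS."""
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    V = Vt[:n_components].T                       # PCs estimated on training data only
    Z = np.c_[np.ones(len(X_train)), (X_train - mu) @ V]
    beta, *_ = np.linalg.lstsq(Z, y_train, rcond=None)
    Zt = np.c_[np.ones(len(X_test)), (X_test - mu) @ V]
    return Zt @ beta

# Hypothetical data: 200 stimuli, 50-voxel searchlight patterns linearly encoding envelope power
rng = np.random.default_rng(3)
envelope = rng.standard_normal(200)
patterns = np.outer(envelope, rng.standard_normal(50)) + 0.5 * rng.standard_normal((200, 50))

# 5-fold cross-validated reconstruction of the envelope from the patterns
folds = np.array_split(np.arange(200), 5)
predicted = np.empty_like(envelope)
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(200), test_idx)
    predicted[test_idx] = pcr_predict(patterns[train_idx], envelope[train_idx],
                                      patterns[test_idx])
accuracy = np.corrcoef(predicted, envelope)[0, 1]   # held-out reconstruction accuracy
```

Estimating the components on training folds only, as above, is what keeps the reconstruction accuracy an unbiased measure of whether the patterns genuinely carry envelope information.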

TALK 2: SIMILAR CATEGORICAL REPRESENTATION FROM SOUND AND SIGHT IN THE CORTEX OF SIGHTED AND BLIND

Olivier Collignon, Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) -

The ventral occipito-temporal cortex (VOTC) shows reliable category-selective responses to visual information. Do the development, topography, and information content of this categorical organization depend on visual input, or even on visual experience? To address this question, we used fMRI to characterize brain responses to eight categories (4 living, 4 non-living) presented acoustically to sighted and early blind individuals, and visually to a separate sighted group. Using a combination of decoding and representational similarity analyses, we observed that VOTC reliably encodes sound categories in both the sighted and blind groups, using a representational structure strikingly similar to the one found in vision. Moreover, we found that the representational connectivity between VOTC and large-scale brain networks was substantially similar across modalities and groups. Blind people, however, showed higher decoding accuracies and higher inter-subject consistency for the representation of sounds in VOTC, and the correlation between the representational structures of visual and auditory categories was significantly higher in the blind than in the sighted group. Crucially, we also demonstrate that VOTC represents the categorical membership of sounds, rather than their acoustic features, in both groups. Our results suggest that early visual deprivation triggers an extension of the intrinsic categorical organization of VOTC that is at least partially independent of vision.
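The representational similarity logic used here can be illustrated with a small NumPy sketch: build a representational dissimilarity matrix (RDM) per modality, then rank-correlate their upper triangles. This is a generic RDM comparison, not the authors' analysis; the condition counts, feature dimensionality, and noise level are invented.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between condition
    patterns (conditions x features)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(a, b):
    """Rank correlation (Spearman, via ranks) of the upper triangles of two RDMs."""
    iu = np.triu_indices_from(a, k=1)
    ra = a[iu].argsort().argsort().astype(float)
    rb = b[iu].argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Hypothetical patterns for 8 categories (e.g. 4 living, 4 non-living), 100 features each
rng = np.random.default_rng(4)
visual = rng.standard_normal((8, 100))
auditory_same = visual + 0.2 * rng.standard_normal((8, 100))   # shared representational geometry
auditory_unrelated = rng.standard_normal((8, 100))             # no shared geometry

shared = compare_rdms(rdm(visual), rdm(auditory_same))         # high: geometries match
unrelated = compare_rdms(rdm(visual), rdm(auditory_unrelated)) # expected to be low
```

Comparing RDMs rather than raw activation maps is what makes the visual-auditory (and sighted-blind) comparison possible: the geometry of the category space can match even when the underlying voxel patterns differ.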

TALK 3: FAST TEMPORAL AND HIGH SPATIAL RESOLUTION FMRI TO STUDY MULTISENSORY PROCESSES IN THE HUMAN BRAIN

Anna Gaglianese, Department of Radiology, University Hospital Center and University of Lausanne

The notion that multisensory processes impact perception and behaviour at early cerebral stages has been extensively demonstrated over the past decades. Functional magnetic resonance imaging (fMRI) and novel biologically inspired data analysis methods have allowed the exploration of these mechanisms across different scales (vascular and neuronal) in living humans. fMRI measures activity across the entire brain simultaneously and has revealed several new multisensory activation patterns showing the integration of information from different senses in regions usually considered unisensory in nature. In particular, in recent years, thanks to the high sensitivity and signal-to-noise ratio of modern fMRI acquisitions, we have been able to employ and develop high-sensitivity, high-spatial-resolution measurements on the one hand and fast-temporal-resolution fMRI techniques on the other. This talk will give an overview of how we have capitalized on these advances to improve our knowledge of the neuronal mechanisms by which auditory processing shapes visual attention and object recognition. Specifically, I will focus on two scales: 1) establishing the vascular dynamics of multisensory processes in visual cortex in response to concomitant audiovisual stimuli with sub-second fMRI; and 2) quantifying the sub-millimetre spatial profile of neural activity in primary visual cortex during the perception of visual shapes in the presence or absence of auditory stimulation. Both measurements have taken advantage of the latest analysis techniques, such as multivoxel pattern analysis (MVPA), population receptive field mapping, and BOLD deconvolution.

TALK 4: RESOLVING MULTISENSORY AND ATTENTIONAL INFLUENCES ACROSS CORTICAL DEPTH IN SENSORY CORTICES

Uta Noppeney, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen

Information flow is regulated by multisensory and attentional mechanisms that are guided by our current goal. Critically, multisensory and attentional mechanisms are closely intertwined. Both enhance perceptual sensitivity and precision of sensory representations. Using sub-millimeter-resolution fMRI at 7T we resolved BOLD-response and activation patterns across cortical depth in early sensory cortices to auditory, visual and audiovisual stimuli under auditory or visual attention. In visual cortices, auditory stimulation induced widespread inhibition irrespective of attention. Auditory relative to visual attention suppressed mainly central visual field representations. In auditory cortices, visual stimulation suppressed activations, but amplified responses to concurrent auditory stimuli, in a patchy topography. Critically, multisensory interactions in auditory cortices were stronger in deeper laminae, while attentional influences were greatest at the surface. These distinct depth-dependent profiles suggest that multisensory and attentional mechanisms regulate sensory processing via partly distinct circuitries. Our findings are crucial for understanding how the brain regulates information flow across senses to interact with our complex multisensory world.

COMPUTATIONAL M/EEG MODELLING: BRIDGING SCALES TOWARDS BRAIN HEALTH AND DISEASE

Chair: Jeremie Lefebvre, University of Ottawa

Speakers: Jeremie Lefebvre, Stephanie Jones, John Griffiths, Axel Hutt

Over the last decades, advances in M/EEG neuroimaging have given us unprecedented access to physiological data about brain dynamics. Computational methods, such as machine learning, are revolutionizing the way we interact with, interpret, and process these data. For instance, fields such as computational psychiatry are flourishing, taking advantage of the never-ending flow of brain- and health-related information to perform automated diagnostics. Advances in computational techniques and modelling have also catalyzed the development of neuroinformatics: personalized large-scale simulations in which patient M/EEG data are used to inform algorithms, to support diagnosis, and even to predict treatment outcomes, closing the loop between modelling, experiments, and the clinic. Despite these much-needed technological feats, however, the fundamental physiological and functional mechanisms underlying M/EEG signals, and how these signals relate to brain function, remain poorly understood. This is because even the most fine-grained imaging technique or detailed computer simulation does not reveal how neural circuits process information. Computational M/EEG modelling is a blooming field of research that strives to bridge spatial scales between the localized activity of neurons and the macroscopic electromagnetic fluctuations observed through the lens of M/EEG. Combining field modelling, neuroimaging data, and techniques from non-linear dynamics, recent advances in computational modelling allow researchers to better understand the neurophysiological underpinnings of M/EEG and provide new insights into brain dynamics in health and disease.

TALK 1: LOSS OF CONSCIOUSNESS IN GENERAL ANAESTHESIA MAY BE UNDERSTOOD BY RENDERING THE BRAIN MORE DETERMINISTIC

Axel Hutt, INRIA Nancy

The physiological mechanisms by which anaesthetic drugs modulate oscillatory brain activity remain poorly understood. Combining human data with mathematical and computational analysis of both spiking and mean-field models, we investigated the spectral dynamics of electroencephalographic (EEG) beta-alpha oscillations observed in human patients undergoing general anaesthesia. The effect of anaesthetics can be modelled as a reduction of neural fluctuation intensity, and/or an increase in inhibitory synaptic gain, in the thalamo-cortical circuit. Unlike previous work, our analysis demonstrates that a non-linear transition, triggered by a simple decrease in neural fluctuation intensity, is sufficient to explain the clinically observed appearance, and subsequent slowing, of the beta-alpha narrowband EEG peak. Taken together, our results show that such a non-linear transition results in functional fragmentation of cortical and thalamic populations; highly correlated intra-population dynamics triggered by anaesthesia decouple and isolate neural populations. Our results parsimoniously unify and replicate the observed anaesthetic effects on both EEG spectra and inter-regional connectivity, and further highlight the importance of neural activity fluctuations in the genesis of altered brain states.

TALK 2: HUMAN NEOCORTICAL NEUROSOLVER: A NEW NEURAL MODELING TOOL TO LINK MECHANISM TO MEANING OF EEG/MEG

Stephanie Jones, Brown University

Electro- and magneto-encephalography (EEG/MEG) are the leading methods for non-invasively recording human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular- and circuit-level origins of these macro-scale signals without simultaneous invasive recordings. To address this need, we developed the Human Neocortical Neurosolver (HNN): a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data (https://hnn.brown.edu, Neymotin et al., eLife 2020). A unique feature of HNN's model is that it accounts for the biophysics generating EEG/MEG signals, with enough detail to connect to microcircuit-level dynamics that can be studied in animals. I will give an overview of this new tool and describe an application to study the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data show that in primary somatosensory cortex these oscillations emerge as transient high-power ‘events’. Functionally relevant differences in averaged power reflect a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. Modeling led to a new theory of the circuit origin of such beta events and suggested that beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in mice and non-human primates. In total, HNN provides an unprecedented translational tool to link mechanism to meaning in human EEG/MEG signals.
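The "rate, not amplitude" analysis described here boils down to counting discrete supra-threshold events in a band-limited power time course. A toy NumPy version follows – illustrative only; HNN and the original event-detection pipeline are considerably more involved, and the threshold, burst timing, and noise level are assumptions.

```python
import numpy as np

def count_events(power, threshold):
    """Count discrete high-power 'events': contiguous supra-threshold runs count once each."""
    above = power > threshold
    # An event begins wherever `above` switches from False to True
    starts = np.flatnonzero(above & ~np.r_[False, above[:-1]])
    return int(len(starts))

# Toy beta-band power trace: low baseline with three brief high-power bursts
rng = np.random.default_rng(5)
power = 0.1 * np.abs(rng.standard_normal(1000))
for start in (100, 450, 800):
    power[start:start + 15] += 5.0

n_events = count_events(power, threshold=2.0)   # -> 3
```

Counting runs (rather than summing power) is what lets one ask whether a condition difference in mean beta power comes from more events, bigger events, or longer events.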

TALK 3: MODULATION OF CORTICOTHALAMIC RHYTHMOGENIC CIRCUITS IN DEPRESSED PATIENTS BY RTMS NEUROSTIMULATION

John D. Griffiths, CAMH, University of Toronto

Repetitive transcranial magnetic stimulation (rTMS) is routinely used in the clinic as an alternative therapy for patients with treatment-resistant depression. Very little is known, however, about the physiological basis of rTMS effects, and how these relate to alleviation of symptoms. In this talk I summarize our recent work examining the influence of rTMS on spatiotemporal brain dynamics using an established model of EEG rhythm generation in the corticothalamic system. Comparison of parameter estimates from models fitted to patient EEG data before and after rTMS therapy yields putative physiological changes induced by the intervention. In particular, we find statistically significant reductions in excitatory corticothalamic gains in models fitted to post-treatment as compared to pre-treatment resting EEG data. Interestingly, these modulations extend well beyond the primary stimulation site in the frontal lobe, indicating a key role for large-scale networks in the transmission of modulatory rTMS effects. Projection to the model’s reduced 3-dimensional parameter space allows interpretation of these rTMS-induced changes in terms of the principal instabilities, and associated spectral signatures, of corticothalamic activity.

TALK 4: HOW TO TALK TO NEURONS: ENGAGING NEURAL CIRCUITS, SYSTEMS AND RHYTHMS USING BRAIN STIMULATION 

Jeremie Lefebvre, University of Ottawa, Krembil Research Institute

Promising experimental findings over the last two decades have sparked a strong interest in using electromagnetic stimulation to treat a variety of neurophysiological disorders, such as depression, Parkinson's disease, and epilepsy. One strategy is to use periodic waveforms to engage brain oscillations (rhythms generated by synchronous neurons and involved in information processing) in order to regulate neural communication and interface with neural circuits at a functional level. While relatively well characterized at the scale of individual cells, the effect of fluctuating electromagnetic fields on neural circuit dynamics and computation remains poorly understood. Here we interface experimental, computational and mathematical approaches to understand the neurophysiological mechanisms underlying oscillatory activity, as well as to develop new biomedical paradigms to entrain brain rhythms. Our findings reveal that modulation of brain oscillations is best achieved in states of low endogenous rhythmic activity and that irregular, state-dependent fluctuations tune the susceptibility of cortical networks to exogenous control. We will discuss the influence of stimulation waveforms and how these define the type of interaction with non-linear neural circuits. We will also see how patient imaging data can be used to inform personalized brain-scale simulations that probe how endogenous and exogenous inputs interact to engage networks across the white matter. Taken together, these results provide new and exciting perspectives on the development of neuromodulatory paradigms for the treatment of neurological diseases and for cognitive enhancement.
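The core idea that a periodic stimulus only captures an oscillator under the right conditions can be illustrated with a toy model (not the speakers' network model): the Adler phase equation, in which the oscillator-stimulus phase difference locks only when the effective coupling strength exceeds the frequency detuning. All parameter values below are illustrative.

```python
import numpy as np

def entrained(f_osc, f_stim, coupling, t_end=60.0, dt=1e-3):
    """Integrate the Adler phase equation dphi/dt = d_omega - K*sin(phi),
    where phi is the oscillator-stimulus phase difference and d_omega
    the detuning in rad/s. Phase locking (entrainment) requires
    |d_omega| < K; otherwise phi drifts indefinitely."""
    d_omega = 2 * np.pi * (f_osc - f_stim)
    phi = 0.0
    for _ in range(int(t_end / dt)):      # forward-Euler integration
        phi += dt * (d_omega - coupling * np.sin(phi))
    # Entrained iff the phase difference has settled at a fixed point.
    return abs(d_omega - coupling * np.sin(phi)) < 1e-3

# A 1-Hz detuning (|d_omega| ~ 6.28 rad/s) defeats weak coupling
# but is captured once the coupling exceeds the detuning.
print(entrained(10.0, 11.0, coupling=2.0))
print(entrained(10.0, 11.0, coupling=8.0))
```

The region of (detuning, coupling) pairs where locking occurs is the classic Arnold tongue; state-dependent endogenous fluctuations can be thought of as shifting the effective coupling, consistent with entrainment being easiest when endogenous rhythmic activity is low.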

 

STUDYING THE NEONATAL AND INFANT BRAIN: FROM NEUROSCIENCE TO INTERVENTIONS

Chair: Natalie Maitre, Nationwide Children’s Hospital and the Ohio State University

Speakers: M. Hickey, A. Key, JR Wozniak and N. Maitre

Across four distinct talks spanning a continuum of neonatal and infant studies, the investigators follow a common thread: how patient-based neuroscience can advance neonatal and infant care by elucidating mechanisms, biomarkers of disease and response to treatment, and, ultimately, by informing the evidence-based design of interventions to improve cognition and language. First, early neural measures of auditory processing provide markers of gut-microbiome influence on neonatal development. These brain-based auditory markers can be refined throughout infancy to study the complexity of developing cognition and predict developmental disability, long before it can be diagnosed behaviorally. Interventions naturally derive from these applied neuroscience experiments, in particular through the use of early-life nutritional modifications. While accurate tailoring of microbiomes may occur in the future, current nutritional interventions in early life can change cognitive outcomes in childhood. The case of choline supplementation for hippocampal growth and pre-school-age cognitive function has been tested, and promising results in children prenatally exposed to alcohol support the concept of early neurocognitive interventions. Finally, through elucidation of neonatal multisensory processes, the building blocks of later cognition, behavior and language, researchers have designed neurally based interventions delivered while infants are still hospitalized. Layered interventions with speech, vestibular, olfactory and tactile input are tested against markers of multisensory integration typicality in neonatal life, as a mediator of development in childhood.

TALK 1: GUT MICROBIOMES AND EARLY HUMAN NEURODEVELOPMENT

M. Hickey, C. Gale, University of Minnesota Medical School

Preclinical models show that developing brains are more susceptible to deleterious effects of microbiome disruption than mature brains, supporting the presence of early-life sensitive periods for brain developmental programming by the microbiome. The hippocampus, a brain region responsible for learning and memory, is a target of microbiome-mediated effects in adult mice. However, the relevance of these results for human infants during the first year of life has heretofore not been studied. We recently found that otherwise healthy term-born infants exposed to antibiotics soon after birth demonstrated altered auditory processing and recognition memory responses, supporting the possibility of a microbiota-gut-brain axis in humans during early life. To determine whether gut microbes are associated with early-life hippocampal function, we then compared microbiomes and EEG/event-related potential (ERP) performance in response to auditory stimuli in a cohort of healthy, exclusively breastfeeding 1-month-old infants. New analyses show that differences in ERP features (P2 amplitude values, mother-stranger voice discrimination) were associated with gut microbiome variation, suggesting that the gut microbiome may moderate early brain function. Ongoing studies seek to understand how covariables may contribute to differences between the groups and whether differences persist at later ages. Potential impact: in contrast to genetic factors, gut microbes can be modified. Engineered microbiomes could improve brain development in at-risk infants with lengthy hospitalizations, exposure to long courses of antibiotics, and/or gastrointestinal disorders, reducing the incidence and severity of the disorders of learning and memory prevalent in these populations.

TALK 2: NEURAL MARKERS OF INFANT AUDITORY LEARNING DURING EARLY DEVELOPMENT

A. Key, B. Keceli-Kaysili, T. Woynaroski, Vanderbilt University Medical Center; and S. Roth, Vanderbilt University

Objective evaluation of infant cognitive processes is challenging due to a limited range and low reliability of overt behavioral responses, which do not allow direct study of neural mechanisms underlying performance. Yet, early identification of risk for suboptimal developmental outcomes is essential to the design, implementation and maximal effectiveness of early interventions. Recordings of brain activity may offer insights into infant cognitive development. Therefore, we used auditory event-related potentials (ERPs) during a 6-minute passive listening paradigm to evaluate early learning processes, quantified as auditory incidental memory for novel nonwords, relevant to language acquisition. In an accelerated longitudinal design, we tested 34 infants between 6 and 18 months of age. The sample represented a continuum of possible outcomes due to varied family histories of developmental disabilities. Direct assessments of adaptive, communicative and cognitive functioning and caregiver reports of sensory processing were obtained. Neural evidence of auditory learning following repeated exposure to spoken stimuli was detected in the youngest infants and became more prominent with increasing age. Infants with familial risk for suboptimal developmental outcomes demonstrated reduced auditory learning compared to typical infants prior to 12 months of age. Differences in ERPs decreased but persisted during the second year of life. Auditory ERPs are feasible markers of early learning processes in the infant brain, sensitive to developmental differences and to risk for developmental disabilities. To strengthen this methodology as a measure for interventional trials, ongoing analyses examine predictive associations between ERPs in the first year and diagnostic outcomes at 24-36 months.

TALK 3: CHOLINE AS AN EARLY NEURODEVELOPMENTAL INTERVENTION IN CHILDREN WITH PRENATAL ALCOHOL EXPOSURE

JR Wozniak, MK Georgieff, University of Minnesota; University of Minnesota Medical School

Prenatal alcohol exposure (PAE) results in structural brain anomalies, neurologic dysfunction, cognitive impairment, and growth retardation. Despite the tremendous public health burden of PAE, few treatments exist for this condition. To target cognitive deficits in infancy, a potential intervention for PAE is nutrient supplementation with choline. In the hippocampus, choline contributes to increased dendritic arborization in CA1, larger cells, and functional changes. Choline alters brain structure and function in the hippocampus and prefrontal cortex. A small number of studies link prenatal choline supplementation to improvements in attention and recognition memory in children with PAE. For over a decade, we have been testing postnatal choline supplementation in 2-year-old children with PAE. Here, we present a longitudinal follow-up of children initially treated with choline or placebo to evaluate the durability of cognitive changes following early intervention. The study was a randomized, double-blind, placebo-controlled trial. Longitudinal participants include 31 children (16 placebo; 15 choline) seen four years after trial completion (mean age at follow-up = 8.6 years). Diagnoses were 12.9% Fetal Alcohol Syndrome, 41.9% partial FAS, and 45.1% Alcohol-Related Neurodevelopmental Disorder. Outcome measures included intelligence, memory, executive functioning, and behavior. Children who received choline had significantly higher non-verbal intelligence, higher visual-spatial skill (29%), higher working memory ability (27%), better verbal memory (38%), and fewer behavioral symptoms of Attention Deficit Hyperactivity Disorder (11%) than children who received placebo. These data support choline as a potential neurodevelopmental intervention for FASD and highlight the need for long-term follow-up to capture treatment effects on neurodevelopmental trajectories.

TALK 4: NEONATAL MULTISENSORY PROCESSING: FROM OBSERVATIONS TO INTERVENTIONS

N. Maitre, Nationwide Children’s Hospital and the Ohio State University and M. Murray, the LINE, Lausanne, Switzerland

Multisensory processes allow combining information from different senses, often improving stimulus representations and behavior. The extent to which multisensory processes are an innate capacity or require experience with environmental stimuli remains debated. We addressed this knowledge gap by recording event-related potentials (ERPs) in full-term and preterm neonates in response to auditory, somatosensory, and combined auditory-somatosensory (multisensory) stimuli. Data were analyzed within an electrical neuroimaging framework. Multisensory processing in full-term infants was characterized by a simple linear summation of responses to auditory and somatosensory stimuli alone, which furthermore shared common ERP topographic features. The ERP topography observed in full-term infants, or “typical infantile processing” (TIP), can be used as a template to measure deviations in multisensory responses. Preterm infants exhibited non-linear responses and topographies less often characterized by TIP, with distinct patterns for multisensory and summed unisensory conditions. The better TIP characterized an infant’s ERPs, independently of prematurity, the more typical was sensory processing at 12 months of age and the less likely was the child to show internalizing tendencies at 24 months of age. We then utilized these findings and the TIP framework to design a multisensory intervention and test it in a large randomized controlled trial in the NICU, aiming to improve language and motor outcomes of preterm infants. We compared standardized sessions of contingent auditory stimulation layered with calibrated vestibular, tactile and olfactory stimulation to standard care in 200 infants. Early differences in TIP are observed at NICU discharge; they will be followed by one- and two-year language and motor measures.
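The linear-summation logic (comparing the measured multisensory response AS against the sum of the unisensory responses A + S, and assessing shared topography) can be sketched on simulated data. Everything below is hypothetical: the channel count, time points, and noise level are invented, and the channel-wise spatial correlation is a simplified stand-in for a full electrical-neuroimaging topographic analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 32, 200               # channels x time points (illustrative)

# Hypothetical grand-average ERPs: auditory-only (A), somatosensory-only
# (S), and a measured multisensory (AS) response that is nearly additive.
erp_a = rng.standard_normal((n_ch, n_t))
erp_s = rng.standard_normal((n_ch, n_t))
erp_as = erp_a + erp_s + 0.05 * rng.standard_normal((n_ch, n_t))

def additive_residual(multi, uni_a, uni_s):
    """Residual of the additive model: AS - (A + S). Near-zero values
    indicate the linear summation seen in full-term infants; large
    deviations indicate non-linear multisensory interactions."""
    return multi - (uni_a + uni_s)

def topographic_similarity(multi, uni_sum):
    """Spatial (across-channel) correlation of instantaneous maps,
    averaged over time, after removing each map's channel mean."""
    m = multi - multi.mean(axis=0)
    u = uni_sum - uni_sum.mean(axis=0)
    num = (m * u).sum(axis=0)
    den = np.sqrt((m ** 2).sum(axis=0) * (u ** 2).sum(axis=0))
    return float(np.mean(num / den))

resid = additive_residual(erp_as, erp_a, erp_s)
print(np.abs(resid).mean())                           # small -> additive
print(topographic_similarity(erp_as, erp_a + erp_s))  # near 1 -> shared maps
```

In this framing, a full-term-like (TIP) response yields a small additive residual and high topographic similarity, whereas preterm-like non-linear responses would show the opposite pattern.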