Mini-Symposium Sessions

Session                     Date      Time             Location
Mini-Symposium Session 1    April 6   10:00 am - Noon  Salon F
Mini-Symposium Session 2    April 6   10:00 am - Noon  Salon E
Mini-Symposium Session 3    April 6   10:00 am - Noon  Salon D
Mini-Symposium Session 4    April 7   10:00 am - Noon  Salon F
Mini-Symposium Session 5    April 7   10:00 am - Noon  Salon E
Mini-Symposium Session 6    April 7   10:00 am - Noon  Salon D
Mini-Symposium Session 7    April 8   10:00 am - Noon  Salon F
Mini-Symposium Session 8    April 8   10:00 am - Noon  Salon E
Mini-Symposium Session 9    April 8   10:00 am - Noon  Salon D

Mini-Symposium Session 1

Sunday, April 6, 10:00 am - Noon, Salon F

The Relational Memory Theory: Inspiring Novel Predictions Two Decades Post-Inception

Chair: Deborah Hannula, University of Wisconsin - Milwaukee
Co-Chair: Melissa Duff, University of Iowa
Speakers: Neal Cohen, Alison Preston, Jennifer Ryan, Howard Eichenbaum

Two decades have passed since the relational memory theory, a neurobiological framework that outlines representational properties of memories mediated by the hippocampus and adjacent medial temporal lobe (MTL) cortical structures, was proposed (Cohen & Eichenbaum, 1993). Since its inception, this theory has inspired novel hypotheses about how exactly MTL subregions contribute to memory, with the hippocampus itself said to mediate relational memory binding and representation. Relational memories were characterized by three fundamental properties that distinguish them from rigidly bound or unitized representations mediated by MTL cortical structures: they cannot be derived from past experience, they exhibit compositionality, and they can be flexibly accessed and used. Results from recent investigations motivated by the relational memory theory suggest an extended reach of the hippocampus beyond long-term declarative memory to the domains of working memory, inferential reasoning, binding across time, language, and online processing. The speakers in this symposium will discuss the neural mechanisms and functional properties of relational memory, highlighting new findings about the role of the hippocampus in the spatial-temporal organization of experiences, building flexible memories that permit novel inferences, and contributions of the hippocampus to memory performance at far shorter time scales than traditional accounts would suggest. A converging methods approach will be emphasized, as speakers will discuss evidence from behavioral, neuroimaging, and neuropsychological investigations conducted with non-human animals, special populations, and neurologically intact individuals. Our intent is to highlight how far we have come, and how the relational memory theory has been instrumental in this process.

Talk 1: The Functional Properties and Functionalities of Relational Memory

Neal Cohen; University of Illinois at Urbana-Champaign

Relational memory supports the ability to acquire, retain, retrieve, and flexibly use knowledge about facts and events. The hippocampus, in interaction with prefrontal cortex and neocortical processing and storage sites, provides (1) the ability to form representations of all manner of (even arbitrary or accidental) relations, binding together the constituent elements of experience; (2) a relational database critical not just for the creation, but also for the maintenance, updating, and integration of memory representations; and (3) rapid and automatic reactivation of elements of the relational database that are related to information currently being processed. Findings over the last twenty years have established that the ability of hippocampus and relational memory to support the updating, integration, and flexible use of memory can be used in service of many aspects of cognition and behavior, beyond the traditional domain of conscious recollection and long-term memory. Relational memory processing can occur and contribute even on time-scales usually associated with working memory, capable of rapidly constructing, comparing, combining, and recombining on-line relational memory representations in service of flexible cognition and adaptive control of behavior. In so doing it provides a critical foundation for many aspects of spatial cognition and navigation, inferential reasoning, language, decision-making, creative thinking, and the guidance of adaptive choices.

Talk 2: Hippocampal and Prefrontal Contributions to the Formation of Integrated Memory Networks

Alison Preston; The University of Texas at Austin

Much memory research to date has focused on how our brains encode and retrieve memories for individual experiences. However, in the real world, it is rare that a decision can be made on the basis of a single memory alone; rather, the vast majority of decision and action requires drawing upon knowledge derived across multiple events. Relational memory theory proposes that the ability to extract new information across distinct episodes results from the formation of integrated memory networks, in which individual memories are connected to one another in terms of the people, places, and things they have in common. These networks would then allow memory to extend beyond direct experience and anticipate the relationships among events. In a series of human fMRI studies, we examine these predictions by having subjects learn overlapping associations (AB, BC) and later testing them on the inferential relationships among items (AC). Using pattern information analysis, we show that prior memories (A items) are reactivated during new BC learning, with the degree of reactivation relating to inference performance. Moreover, hippocampal and ventromedial prefrontal cortex (VMPFC) encoding activation tracked trial-by-trial reactivation of prior memories during new learning as well as subsequent inference. Finally during a rest scan following BC encoding, medial temporal lobe and VMPFC connectivity was enhanced relative to baseline rest, potentially reflecting post-encoding integration. Collectively, our data show that the hippocampus and VMPFC play key roles in constructing integrated memory networks and further demonstrate that these integrated memories are used in service of future decisions.

Talk 3: Hippocampal Relational Binding across Space and Time

Jennifer Ryan; Rotman Research Institute, Baycrest, University of Toronto

The hippocampus has a critical role in the binding of relations among distinct elements across space and time into a lasting representation. Such hippocampal relational memory binding is engaged rapidly and obligatorily, and the resultant representations may be used in service of multiple cognitive operations and over a variety of delays. Behaviorally, the formation of relational representations is reflected in increases in eye movement behavior, suggesting that eye movements may serve as the conduit by which information is integrated. On a neural level, findings from magnetoencephalography have shown that theta oscillations mediate hippocampal binding of relational representations. Specifically, increases in theta power track increases in binding demands and are predictive of subsequent visuospatial memory performance. Damage to the hippocampus, as observed in amnesia, disrupts eye movement binding behavior, reduces binding-related increases in hippocampal theta power and impairs memory for visuospatial relations, even over short delays. Similarly, aging, which is associated with a decline in hippocampal volume, results in altered eye movement binding behavior, reduced hippocampal theta power and relatively impaired memory for visuospatial relations.

Talk 4: The Neural Basis of Relational Memory

Howard Eichenbaum; Boston University

I will discuss efforts to identify the neuronal mechanisms that underlie relational memory, specifically the nature of neuronal representations that can relate elements of memories in support of our ability to remember specific episodes and to integrate new experiences into networks of memories. Evidence from multiple approaches will be reviewed, indicating (1) separate anatomical pathways that convey to the hippocampus the contents of memories and the spatial-temporal organization of experiences, (2) fundamental roles for the hippocampus in both spatial and temporal organization of memories, and (3) physiological evidence on how events are linked within a spatial and temporal organization. These studies reveal that relational memory is a product of a unique representational scheme within hippocampal networks that encodes all manner of relations among experiences and supports the flexible and inferential expression of memories.

Mini-Symposium Session 2

Sunday, April 6, 10:00 am - Noon, Salon E

Contributions of alpha-band oscillations in cognition

Chair: Heleen A. Slagter, University of Amsterdam
Speakers: Ali Mazaheri, Heleen A. Slagter, Bradley R. Postle, Markus Bauer

Historically, alpha-band oscillations have been thought to represent the activity of the visual cortex in an idle state. However, a growing body of research, mostly in the domain of visuospatial attention, suggests that alpha-band dynamics play an active role in information processing, including selection among competing brain networks, facilitation of information processing within task-relevant networks, as well as "task-positive" functions related to short-term memory. This symposium will present new empirical evidence and theoretical views on the pivotal role of alpha oscillations in cognition. Mazaheri will start with a general introduction to the topic and show that alpha oscillations can be regarded as a general mechanism for information selection that operates across sensory modalities. Slagter will consider the relationship between pre-stimulus alpha oscillations and early stimulus selection processes (P1 and N1 attention effects), and propose that these neural mechanisms are dissociable and reflect qualitatively different aspects of attention. Next, Postle will present TMS-EEG data providing causal evidence for the notion that alpha oscillations may subserve multiple functional roles, including long-range effective connectivity and possibly the binding of individuated object identities to specific locations. Finally, Bauer will discuss the neurochemical basis of alpha oscillations and its implications for alpha's functional role in cognition. Collectively, these talks will present current debates and open questions in the study of the role of alpha oscillations in cognition to a broad audience. They will also highlight important avenues for future research.

Talk 1: Region-specific oscillatory alpha activity serves to suppress distracting input across visual and auditory modalities

Ali Mazaheri; University of Amsterdam

There have been a number of studies suggesting that oscillatory alpha activity (~10 Hz) plays a pivotal role in attention by gating information flow to relevant sensory regions. The vast majority of these studies have looked at shifts of attention in the spatial domain and only in a single modality (often visual or sensorimotor). I will present a series of studies which investigated the role of alpha activity in the suppression of a distracting modality stream. We used a cross-modal attention task where visual cues indicated whether participants had to judge a visual orientation or discriminate the auditory pitch of an upcoming target. The visual and auditory targets were either presented simultaneously or alone, allowing us to behaviorally gauge the "cost" of having a distractor present in each modality. We found that preparation for visual discrimination (relative to pitch discrimination) resulted in a decrease of alpha power in the early visual cortex, with a concomitant increase in alpha/beta power in the supramarginal gyrus, a region suggested to play a vital role in the short-term storage of pitch information. These changes in alpha power in the modality-relevant cortices had direct consequences for performance on a trial-by-trial basis. Our work adds to increasing evidence that the top-down (i.e. attentional) modulation of alpha activity is a mechanism by which stimulus processing can be gated within the cortex. Here, we find that this phenomenon is not restricted to the domain of spatial attention and can be generalized to sensory modalities other than vision.

Talk 2: Facilitation and inhibition in attention: Functional dissociation of pre-stimulus alpha activity, P1 and N1 components

Heleen A. Slagter; University of Amsterdam

Attention - the ability to attend to some things while ignoring others - can best be described as an emergent property of many neural mechanisms, facilitatory and inhibitory, working together to resolve competition for limited processing resources and control of behavior. To gain a better understanding of how attentional inhibition and facilitation are neurally implemented, here, participants continuously attended to one and the same hemifield for 80 minutes while their brain activity was recorded using EEG. We reasoned that the consistent assignment of relevance to one hemifield would allow us to better separate inhibitory and facilitatory attentional effects. Indeed, in striking contrast to previous studies, which typically observed bilateral attentional modulations of early sensory processing when subjects alternated between attending left and right, we found perfectly lateralized P1 and N1 components, with attentional modulations over ipsilateral (P1) and contralateral (N1) posterior regions, respectively. This finding substantiates the idea that the P1 reflects inhibition and the N1 amplification. The fact that these early potentials occurred over only one hemisphere moreover indicates that they may not reflect exogenous sensory signals, as generally assumed, but top-down modulations of feed-forward sensory processing. Moreover, in further contrast to previous studies, greater pre-stimulus alpha activity was observed over relevant vs. irrelevant posterior regions, supporting proposals that alpha power reflects active inhibition that is only required when irrelevant regions compete for attentional resources. Together, these findings suggest a functional dissociation between pre-stimulus alpha, the P1 and the N1, and highlight the influence of statistical task structure on attentional control dynamics.

Talk 3: Simultaneous (r)TMS and EEG reveals multiple functional roles for alpha-band oscillations

Bradley R. Postle, Stephen Emrich, Jeffrey S. Johnson, Bornali Kundu; University of Wisconsin, Madison, North Dakota State University

Transcranial magnetic stimulation (TMS) offers a means to test causal hypotheses about functions supported by frequency band-specific dynamics in the EEG. One previous study, for example, has provided confirmatory evidence for an inhibitory role for alpha-band oscillations in posterior visual circuits, with repetitive (r)TMS-induced changes in alpha-band power negatively related to rTMS-related changes in visuospatial short-term memory (STM) performance. Here, we will present more recent work highlighting "task-positive" functions of alpha-band oscillations. In one study, individual differences in delay-period alpha-band power were positively related to the strength of the TMS-evoked response in prefrontal cortex, when TMS was delivered to the superior parietal lobule during the delay period of a spatial STM task. Because this effect was largely attenuated during the ITI, this provides evidence that alpha-band oscillations underlie behaviorally specific patterns of effective connectivity in the dorsal control network. In a second study we used delay-period rTMS to effect a causal test of a hypothesized role for alpha-band oscillations in binding individuated object identities to specific locations. During a variant of the change-detection task - STM for the color-in-location of squares within an array - we delivered rTMS at 10 Hz to the inferior IPS. Our results revealed a positive association between rTMS-related change in delay-period alpha-band power and rTMS-related change in STM capacity. They thus provide causal evidence for a role for posterior alpha-band oscillations in supporting visual STM performance, perhaps by maintaining the bindings between stimulus dimensions in STM.

Talk 4: Feedforward and feedback influences in visual attention tasks: Evidence for functional and neurochemical dissociations between alpha and gamma-oscillations

Markus Bauer; University of Nottingham, University College London

There is abundant evidence for the involvement of both alpha- and gamma-oscillations in selective attention. Despite the regular co-occurrence of these spectral phenomena, we provide evidence here that they reflect distinct phenomena. While it is known that attentional modulation of alpha-oscillations occurs in the pre-stimulus period and that of gamma-oscillations in the post-stimulus period, this could in principle be attributed to different neuronal excitation states (stimulus on/off), known to have opposite effects on these frequency bands. Here, we provide clear evidence that this is not the case and that instead alpha- and gamma-oscillations are modulated separately and are caused by different top-down signals with different functional characteristics: whereas alpha-oscillations represent mere prediction signals, gamma-oscillations appear to respond to exogenous and endogenous attentional components. This dissociation resonates with the differential sensitivity we have recently found for these frequency bands to cholinergic neuromodulation. More specifically, we have shown that the cholinergic system, which has been closely associated with attentional performance, specifically enhances attentional alpha-/beta-lateralization in visual cortex. I will discuss the presumed pathways and mechanisms by which alpha-oscillations are modulated by attentional top-down signals and how this may impact processing of upcoming stimuli.

Mini-Symposium Session 3

Sunday, April 6, 10:00 am - Noon, Salon D

Putting Person Perception in Context: Insights from Social Neuroscience

Chair: Jonathan Freeman, Dartmouth College
Co-Chair: Jay van Bavel, New York University
Speakers: Jay Van Bavel, William Cunningham, Reginald Adams, Jonathan Freeman

Many prominent models of face and person perception ignore the role of the social context. This mini-symposium features emerging social neuroscience research to shed new light on how various forms of social context shape perceptual and evaluative responses to other people using a wide range of methodologies (univariate and multivariate fMRI, EEG, MEG, modeling, behavioral). The first two presentations will focus on the role of social motives. Jay Van Bavel will explore how the motivational context of identifying with a social group impacts the Fusiform Face Area and early (P100) perceptual responses to in-group and out-group faces, suggesting that initial components of face perception are highly malleable. Wil Cunningham will show how processing goals shape evaluations of other people, reflected in amygdala and anterior cingulate activity. The final two presentations will focus on the impact of perceptual contexts, such as facial cues that contextualize focal perceptions. Reginald Adams will discuss how eye gaze contextualizes facial emotion to signal threat value (e.g., direct-gaze fear and averted-gaze anger convey ambiguous threat). He finds exacerbated amygdala responses to threat-ambiguous vs. threat-specified combinations that vary as a function of neural temporal dynamics, suggesting earlier processing of threat-congruent information. Jon Freeman will examine how stereotypes lead multiple social categories (sex, race, emotion) to mutually interact and shape each other's perception, implicating fusiform regions involved in face perception and prefrontal regions involved in stereotype access and top-down visual predictions. Together, this mini-symposium will help illuminate the fundamental role that the social context plays in person perception.

Talk 1: Social identity shapes social perception and evaluation: Evidence from behavioral, electroencephalography and neuroimaging experiments

Jay Van Bavel; New York University

Correctly identifying group members is critical for successfully navigating the social world. I will present behavioral, electroencephalography, and neuroimaging experiments that demonstrate the dynamic influence of social identity on perception and evaluation. We assigned people to one of two mixed-race groups and had them respond to faces of Black and White in-group and out-group members. This allowed us to compare the effects of a minimal social identity with a salient social category, race. Across methodologies, assigning people to mixed-race groups eliminated ostensibly automatic racial biases by leading people to categorize others on the basis of their group membership. Specifically, group membership influenced BOLD activity in core (Fusiform Face Area) and extended (amygdala) components of the face processing network, emerged as early as 100 milliseconds (P100), and shaped consequential downstream behavior (automatic evaluations and recognition memory). This pattern was evident despite the fact that the intergroup distinction was arbitrary, there were no visual cues to distinguish groups, and exposure to the faces was equivalent and brief. Behavioral experiments confirmed that in-group bias was mediated by visual attention and moderated by social motives (e.g., the need to belong). However, multi-voxel pattern analyses of BOLD data revealed that membership in a mixed-race group does not make the visual system "color-blind" to race. Taken together, this program of research suggests that ostensibly automatic forms of racial bias are not inevitable, but are sensitive to seemingly trivial social identity motives that shape the value of social targets.

Talk 2: Shaping ambivalent responses in person perception

William Cunningham; University of Toronto

An important aspect of person perception involves our ability to generate useful evaluations that are contextually appropriate. This is particularly important when one's evaluation can shift dramatically, from positive to negative, as in the case of ambivalent attitudes. That is, although people can have objective ambivalence (defined as the existence of conflicting representations), this ambivalence is oftentimes resolved by aspects of the situation. Yet, in other situations, ambivalence remains when the situation cannot resolve the ambiguity (subjective ambivalence). In two fMRI studies, I will present data showing how ambivalent attitudes are resolved either through giving participants a goal to attend to positive or negative features of ambivalent people (Study 1), or through the context of evaluation itself (Study 2). Specifically, amygdala and anterior cingulate activation to ambivalent targets was modulated by goals and context. These data support the idea that "top-down" processes inhibit or emphasize parts of the associations to prevent subjective ambivalence and generate more univalent responses.

Talk 3: Ambiguity and the temporal dynamics of threat-related attention

Reginald Adams, Kestutis Kveraga; The Pennsylvania State University, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School

In this talk, we present research examining the intersectional impact of compound facial cues on attention. Early on, using fMRI, we found greater amygdala responsivity to ambiguous (e.g., direct gaze/male fear) versus clear (e.g., averted gaze/female fear) combinations of threat cues. This work helped to resolve a long-standing puzzle in the literature as to why amygdala activation was consistently found in response to fear displays, yet not to anger displays, when anger (at least when coupled with direct gaze) is arguably a clearer signal of threat. We have since also found the opposite pattern of results, with greater amygdala activation to clear- versus ambiguous-threat cues. In an effort to address this apparent discrepancy, we examined whether different adaptive attunements across the temporal stream moderate these effects. Using a dot-probe paradigm, we found greater attentional orienting to rapid presentations of clear combinations of threat cues, and greater sustained attention to ambiguous threat-cue combinations. Paralleling these effects, again using fMRI, we likewise found greater amygdala responses to clear-threat cues when rapidly presented (33 ms and 300 ms), and to ambiguous-threat cues when presented for more sustained times (1 s, 1.5 s, 2 s). Using MEG, we then examined the neurodynamics of threat perception as it unfolds. Our findings implicate magnocellular "action-related" vision in the processing of clear threat cues, and parvocellular "analysis-related" vision in the processing of ambiguous cues. These findings support an adaptive dual-process framework that favors quick and efficient attentional orienting toward threat-congruent information and later attentional maintenance required to process threat-ambiguous information.

Talk 4: Person perception at the intersection of multiple social categories

Jonathan Freeman; Dartmouth College

Individuals effortlessly categorize other people along any number of social dimensions, such as sex, race, and emotion. Although often assumed to be independent, in this talk I will propose that these dimensions may intersect in meaningful ways. I will discuss neuroimaging, behavioral, and computational-modeling studies documenting systematic interactions between multiple social dimensions, either due to bottom-up (shared facial cues) or top-down (shared stereotypes) factors, or both. Using a mouse-tracking technique that records hand movements en route to category responses, I discuss evidence that certain social dimensions become perceptually linked. For example, it was found that stereotypes lead Black faces or male faces to partially activate the angry category and appear angrier (even when they express no anger), which was corroborated by computational simulations of the categorization process. Neuroimaging studies involving correlational analyses between mouse-tracking and neural data suggested that the medial and dorsolateral prefrontal cortices play dissociable roles in instantiating these social category interactions and subsequently inhibiting them, thereby allowing faces to be perceived accurately. Finally, multi-voxel pattern analyses characterized the inherent overlap of these different social category representations (e.g., male, Black, anger) in lower-level fusiform regions involved in face perception vs. higher-order prefrontal regions involved in stereotype access and top-down visual predictions. Taken together, this research demonstrates that perceptions of social categories are not independent but rather systematically interact, and implicates both bottom-up and top-down processes in driving social category interactions. The findings bolster recent intersectional and dynamic-interactive frameworks of social categorization.

Mini-Symposium Session 4

Monday, April 7, 10:00 am - Noon, Salon F

The neuroscience of social networks

Chair: Kevin Ochsner, Columbia University
Speakers: Lisa F. Barrett, Emily B. Falk, Dharshan Kumaran, Kevin N. Ochsner

Humans are a fundamentally social species that evolved to live and thrive in social groups. For decades, sociologists and ethologists have studied the nature and characteristics of these groups in terms of their network size, structure, and an individual's status within them. Until recently, however, little was known about the brain systems governing how we recognize, represent, and act on the basis of our own and others' status in our social networks. This symposium will highlight how cognitive neuroscience has begun to shed new light on these issues by showing how variables that quantify the kind of social network(s) to which we belong - and our status in them - relate to brain structure and function. Lisa F. Barrett will describe how individuals who are members of larger social networks show structural and connectivity changes in brain systems for emotion and social behavior. Emily Falk will explain how being a connector that links individuals in a friendship network is related to neural markers of the ability to influence others' opinions. Dharshan Kumaran will focus on the fundamental importance of memory systems in allowing individuals to make precise assessments of the social rank and affiliations of others. Finally, Kevin Ochsner will present data documenting how the recognition of who is popular in our networks is supported by the concerted activity of brain systems for affect, social cognition, and social perception. Together, these talks illustrate the value of combining the methods of sociology, psychology, and cognitive neuroscience.

Talk 1: The Role of Amygdala Structure and Connectivity in Social Cognition

Lisa F. Barrett, Kevin C. Bickart, Bradford C. Dickerson; Northeastern University, Massachusetts General Hospital

This talk will present data from three studies in which we examined the relation of amygdala structure and connectivity to social network size and complexity. Using structural MRI, Study 1 provided the first evidence that amygdala volume uniquely predicts the size and complexity of social networks in healthy adults. Using resting-state functional connectivity analysis, Study 2 demonstrated that healthy adults who have larger and more complex social networks not only have larger amygdala volumes but also amygdalae with stronger functional connectivity within several intrinsic brain networks. Study 3 used structural MRI and a newly developed and validated clinician-based rating scale to demonstrate that atrophy in large-scale brain networks anchored in the amygdala predicted specific social cognitive impairments in a sample of frontotemporal dementia (FTD) patients. From these studies, we have discovered that the amygdala is a component of at least three partially distinct anatomical networks that are important for forming and maintaining social bonds. These findings provide a powerful componential framework for understanding the neural underpinnings of social cognition.

Talk 2: Social network structure modulates neural processes involved in successful communication and message propagation

Emily B. Falk, Matthew B. O'Donnell, Christopher N. Cascio, Joseph B. Bayer; University of Pennsylvania, University of Michigan

The opinions, behaviors and recommendations of others fundamentally affect human decision-making. At a macro level, sociologists have shown that there is variation in the extent to which people are connected to others and in positions to exert such influence. However, relatively little is known about the neural mechanisms that lead people to share information and what positions them to be successful in persuading others. In a series of studies we have examined the neural processes that promote being a good "idea salesperson" and how these processes interact with broader social environments, including one's position in their social network. We have combined fMRI data gathered during tasks relevant to social influence and message propagation with network data analyzed using tools from social network analysis (SNA). SNA provides a rich set of measures and techniques to quantify the size, structure and scope of an individual's social environment as well as operationalizations of sociological concepts such as opportunities for information brokerage. We find that individuals with more opportunities for information brokerage show increased activity in mentalizing and affective systems that respond to social cues when making recommendations and receiving social feedback about their recommendations. Neural responses within these same brain systems are also associated with successful message propagation and being a good "idea salesperson". The combination of neural and SNA metrics offers a powerful way to analyze links between mechanisms involved in message propagation and the positions occupied by individuals in their social networks.

Talk 3: The Neural Mechanisms Underlying Knowledge of Social Structures

Dharshan Kumaran; University College London

Primates have a range of highly developed cognitive abilities that enable individuals to meet the challenging pressures of living in large social groups. In this talk, I will focus on the fundamental importance of memory in prospering in such an environment: whilst perceptual cues (e.g. body posture) may provide a coarse heuristic with which to rapidly evaluate others, detailed knowledge of social structures (e.g. hierarchies, networks) - gradually accrued through a history of previous interactions and experiences - is needed to make more precise assessments of rank and affiliations. I will present a series of experiments in which we used a range of experimental paradigms (e.g. involving "navigation" through one's own real social network, learning of a social hierarchy involving unfamiliar others) in combination with functional and structural brain imaging (fMRI and VBM). I will argue that, taken together, this work provides insights into the neural substrates underpinning knowledge of complex social structures at several different levels: the brain regions involved (e.g. hippocampus, amygdala, medial prefrontal cortex, superior temporal sulcus), putative computational mechanisms that may underlie learning, and the nature of the representations and information coding schemes involved.

Talk 4: Neural systems tracking popularity in real world social networks

Kevin N. Ochsner, Noam Zerubavel, Peter Bearman; Columbia University

Successfully navigating our complex social world requires understanding the relative status of members of our groups. Sociologists and social psychologists have historically emphasized two kinds of status that have important implications for behavior: power-based status, where individuals vary in their control over resources and outcomes, and affiliation-based status, where individuals vary in the extent to which they are liked by other group members. To date, the majority of neuroscience research has focused on power-based hierarchies rather than affiliation-based popularity. Here we present the first imaging research to examine the neural systems tracking the popularity of members of real-world social networks. To do this we first used social network analysis (SNA) to determine the relative popularity of individuals in the context of a friendship-based network to which they belonged. We then had members of each network view photographs of other group members and asked, on a trial-by-trial basis, how brain activity parametrically scaled with the popularity of the target group member viewed on that trial. We found that activity in three kinds of brain regions tracked target popularity: systems involved in affective evaluation (e.g. vmPFC, amygdala, ventral striatum), social cognition (e.g. dorsal MPFC, TPJ), and social perception (e.g. FFA). Importantly, activity in the affective evaluation systems mediated the relationship between target popularity and activity in other brain regions, suggesting that a history of learning about the affective outcomes associated with popular individuals organizes our responses to them. These data have implications for models of affect, person perception and group behavior.

Mini-Symposium Session 5

Monday, April 7, 10:00 am - Noon, Salon E

A New Look at Neural Representation in the Prefrontal Cortex

Chair: Earl Miller, M.I.T.
Speakers: Jonathan Wallis, Earl Miller, Mattia Rigotti

The traditional view of the cortex is clockwork-like: different areas, and even individual neurons, each have their own specific functions, often organized around sensorimotor information. But a different view is emerging from recent work in the prefrontal cortex (PFC). There are gradients of representations with a large degree of overlap and many multifunctional "mixed selectivity" neurons. We will present evidence for this and discuss its implications. In the first part of the symposium, we will discuss gradients of representations in the PFC. David Badre will show that the human PFC is organized along a hierarchy of rules. Jon Wallis will show that more anterior regions of the monkey orbitofrontal cortex (OFC) encode value in a more abstract form than posterior OFC. Next, we will turn to the overlap in PFC representations and the mixture of signals at the level of individual neurons. Earl Miller will show that the monkey PFC contains large proportions of multifunction mixed-selectivity neurons whose representations change with task demands, a property that underlies mental flexibility. Finally, Mattia Rigotti will discuss the computational advantages of mixed selectivity: its high dimensionality may endow PFC neurons with the flexibility to learn a wide range of tasks, but it also leaves them susceptible to noise, sometimes causing errors.

Talk 1: Gradients of function in orbitofrontal cortex

Jonathan Wallis; University of California at Berkeley

Several studies have argued that the frontal lobe is organized along a gradient of abstraction, with progressively more abstract information encoded by progressively more anterior frontal areas. In addition, a prominent theory of orbitofrontal cortex (OFC) organization argues that there is a valence gradient, with positive outcomes encoded medially and negative outcomes encoded laterally. To test these ideas, we trained two monkeys on a task that required them to use secondary reinforcement (tokens that could later be exchanged for juice) in order to learn optimal behavior. We could reward subjects by giving them a token and punish them by taking tokens away. This enabled us to test whether OFC contained an abstraction gradient (secondary reinforcement is more abstract than primary reinforcement) and/or a valence gradient. We found no evidence to support the valence gradient: throughout OFC, neurons encoding reward were interspersed with those encoding punishment. In addition, there was no evidence that neurons encoding secondary reinforcers were located more anteriorly than those encoding primary reinforcers. However, neurons in posterior OFC tended to encode the value of either the secondary or the primary reinforcer, whereas neurons in anterior OFC encoded the value of the reinforcer independent of whether it was secondary or primary. Thus, although our results are not what we expected, they are nevertheless consistent with a more abstract value signal encoded in more anterior OFC regions.

Talk 2: Flexible neurons for a flexible mind

Earl Miller; Massachusetts Institute of Technology

The picture emerging from many years of neurophysiological investigation of the prefrontal cortex (PFC) is that of a highly adaptive, non-linear system. Many neurons do not have fixed functions; they seem to have "mixed selectivity". PFC neurons are tuned to mixtures of multiple task-related aspects. This is in contrast to typical sensory or motor cortical neurons, which are selectively activated by relatively few, often related properties (e.g., spatial location, direction of motion, edges, etc.) and whose activity is thought to always "mean" the same thing, like "leftward motion there". Instead, many PFC neurons have more extensive and eclectic inputs from a wide range of external (sensory, motor) and internal (values, memories, etc.) information sources. The result is a large population of neurons that can participate in many functions, the "meaning" of their activation changing depending on behavioral context: the task at hand. I will show examples of this mixed selectivity and argue that it is key to a hallmark of intelligence: mental flexibility.

Talk 3: Understanding errors in complex cognitive tasks: the role of mixed selectivity

Mattia Rigotti; Columbia University

Prefrontal cortex (PFC) neural activity is characterized by a striking diversity: in animals engaged in cognitive behavior, PFC neurons are reliably but idiosyncratically tuned to mixtures of multiple task-related aspects (mixed selectivity). The responses of individual neurons are consequently difficult to interpret, but these interpretative difficulties readily dissolve when we take a neural population perspective. This approach reveals that mixed selectivity at the level of individual neurons is a signature of high dimensionality at the level of population activity. The importance of high dimensionality resides in the impressively large repertoire of downstream response functions that it accommodates. We recently showed (Rigotti et al., Nature, 2013) that this computational advantage is likely important for subserving the cognitive functions ascribed to the PFC, since the dimensionality of the activity patterns is predictive of animal behavior: it collapses in error trials. Surprisingly, the selectivity to individual task-related variables does not appear to decrease during errors. We present a model of the neural responses that reconciles these seemingly contradictory observations. We show that the mixed-selectivity component of the response greatly contributes to the dimensionality of the patterns of activity but is fragile to noise. In error trials this component is most strongly affected, impairing the ability of the animal to perform the task. However, the non-mixed component, which is more robust, still encodes the individual task-related variables. The model explains the PFC neural recordings collected by Warden and Miller (2009) and analyzed in Rigotti et al. (2013).

Mini-Symposium Session 6

Monday, April 7, 10:00 am - Noon, Salon D

Prediction, adaptation and plasticity of language processing in the adult brain

Chair: Gina Kuperberg, Tufts University & MGH
Speakers: T. Florian Jaeger, Matthew H. Davis, Kara D. Federmeier, Gina R. Kuperberg

This mini-symposium focuses on adaptation and plasticity of language processing in the healthy adult brain. It explores the idea that prediction in language is inherently linked to language adaptation and learning. We bring together several leaders who will discuss these relationships from different perspectives, presenting data collected using multiple techniques. First, Florian Jaeger, together with Dave Kleinschmidt, will situate the relationship between prediction and learning in a changing environment within a rational "ideal observer" framework, discussing data from computational Bayesian models. Second, Matt Davis will discuss a series of magnetoencephalography (MEG) and functional MRI (fMRI) experiments suggesting that the brain's adaptation to degraded speech depends on the accuracy of prior predictions, linking these findings to predictive coding models of neural processing. Third, Kara Federmeier, together with Eddie Wlotko, will discuss a large body of electrophysiological research examining the impact of prediction violations at the levels of semantic, lexical, and perceptual features, highlighting how quickly we adapt to such errors, and how this varies across the lifespan. Finally, Gina Kuperberg will discuss electrophysiological and fMRI studies examining prediction at the semantic-syntax interface, suggesting that the certainty of our predictions can directly influence the neurocognitive mechanisms we engage to comprehend real-world events in different discourse contexts. This symposium is timely and important. It revisits key questions about the architecture of language comprehension in the brain in the light of core computational and neural principles of learning, adaptation and executive function.

Talk 1: Efficient language understanding in a variable world: Prediction and Adaptation

T. Florian Jaeger, Dave F. Kleinschmidt; University of Rochester

Whether reading, listening, or viewing sign language, the linguistic signal comprehenders receive is perturbed by noise. This makes language understanding a problem of inference over noisy input. The ideal solution to this problem is to take advantage of prior (top-down) knowledge in predicting the signal, thereby facilitating efficient inference of the intended message. In line with such ideal observer models, prediction is an essential part of language processing. However, producers differ in their realizations of linguistic sounds as well as in lexical and syntactic preferences. As a consequence, the statistics required for efficient prediction actually differ (i.e., are subjectively non-stationary) between environments (e.g., between speakers/writers). How then is efficient prediction even possible? We propose that the brain achieves this by a) recognizing previously encountered environments (e.g., a familiar speaker or experimental testing room), b) generalizing across environments based on similarity to previous experience, and c) implicitly learning the statistics of novel environments (e.g., a new speaker). That is, not only do we continuously learn, but we do so while imputing and updating structure over linguistic environments (e.g., groups of speakers that share an accent or dialect). We discuss existing evidence that supports this view and present a computational framework that guides future work on how the brain integrates prediction errors by learning at multiple levels of representation. Language is an ideal domain in which to pursue the question of how we navigate a variable world, because of its rich, comparatively well-understood structure.

Talk 2: Predictive mechanisms support rapid adaptation and slow consolidation in learning to understand speech

Matthew H. Davis; University of Cambridge

Processes of learning and adaptation are key to successful perception and comprehension of the degraded, novel, and ambiguous speech that we encounter in everyday life. Here I will contrast two forms of learning: (1) rapid adaptation processes that operate over the course of minutes to enhance comprehension of ambiguous or degraded speech, and (2) episodic encoding and overnight consolidation processes that integrate novel input into longer-term knowledge during sleep. Behavioural and neuroimaging evidence shows that both learning processes operate at phonetic, lexical, and semantic levels. I will argue, however, that the key neural computations supporting these different forms of learning are distinguished not by the level of the comprehension system that is modified, but rather by the accuracy of prior prediction at the time that variant input is heard. Rapid adaptation is achieved by reinforcing accurate predictions and suppressing inaccurate predictions for upcoming speech sounds, words or meanings. Thus, adaptation is enhanced when prior knowledge permits more accurate predictions: if listeners hear degraded spoken words after seeing their written form (cf. Sohoglu et al., 2012, J Neuroscience), or ambiguous words are presented after disambiguating contexts (Rodd et al., 2012, Cerebral Cortex). In contrast, novel and hence unpredicted speech sounds, words or meanings are encoded by hippocampal, episodic mechanisms (Davis & Gaskell, 2009, Philosophical Transactions), and online predictions are only modified after overnight consolidation (Gagnepain et al., 2012, Current Biology). I will propose a predictive coding account of speech perception and learning that unifies these different neural mechanisms.

Talk 3: Better or worse than expected? ERPs reveal dynamic modulation of predictive processing mechanisms during language comprehension

Kara D. Federmeier, Edward W. Wlotko; University of Illinois, Tufts University

Current views of language comprehension have been importantly shaped by compelling electrophysiological evidence that language processing can be facilitated by expectations for semantic, lexical, and perceptual features of likely upcoming words. This evidence for facilitative effects of prediction is complemented by findings of processing consequences when predictions are disconfirmed. Prediction thus requires processing resources, whose deployment may be difficult for some people (e.g., older adults) and may be disadvantageous in some processing circumstances. Our research shows that multiple language comprehension mechanisms are implemented in parallel and that the brain adapts its use of these mechanisms, not only over the long-term, in response to changing neural and cognitive abilities with age, but also over the short-term, in response to situational and task demands. For example, when the utility of prediction for comprehension is reduced, by repeatedly substituting unexpected synonyms for strongly expected words, electrophysiological signatures of predictive comprehension are diminished. However, when participants are given an additional task for which prediction can be beneficial, indices of predictive processing reappear for those same stimuli. Our results thus show that the brain evaluates the utility and/or success of a predictive mode of comprehension on the fly and dynamically adjusts comprehension strategies vis-a-vis the situational and task context, such that resources can be allocated to most effectively achieve comprehension aims. We link these results to emerging understandings of domain-general mechanisms of cognitive and neural control.

Talk 4: Comprehending Events in Context: Language comprehension is language learning

Gina R. Kuperberg; Tufts University, Mass. General Hospital

Comprehending language requires us to decode rapidly-unfolding sequences of letters or sounds in noisy environments. Some have proposed that, to meet this challenge, we use our stored linguistic and real-world knowledge to predict upcoming information ahead of bottom-up input. Others, however, have argued that prediction is counterproductive: why predict, only to be proved wrong? This controversy may stem from several assumptions about the nature of prediction: that prediction necessarily equals lexical prediction, that it is necessarily an all-or-nothing phenomenon, and that inaccurate predictions necessarily lead to inefficient comprehension. I will summarize evidence from multimodal neuroimaging studies suggesting that these assumptions are wrong. First, we can predict at the level of syntactic and coarse semantic features, which can map on to one another ahead of the bottom-up input, thereby predicting event structure(s), without necessarily committing to specific lexical forms. Second, these predictions are probabilistic, generated with various degrees of certainty. Third, the neurocognitive mechanisms engaged when these predictive semantic-syntactic mappings are violated depend on the certainty with which they were generated, and equate to the neural costs of unifying an incoming word into its context. I discuss two general implications of this framework: (1) the spatiotemporal patterns of neural activity evoked by an incoming word in context depend on the representational level and the certainty of our prior predictions; (2) language comprehension is language learning: the costs of our prediction errors are what drive us to adapt to our wider statistical environment in a continuous attempt to refine these predictions.

Mini-Symposium Session 7

Tuesday, April 8, 10:00 am - Noon, Salon F

Mechanisms of Memory Consolidation During Sleep

Chair: Susanne Diekelmann, University Tuebingen, Germany
Speakers: Ken A. Paller, Jessica D. Payne, Rebecca L. Gómez, Susanne Diekelmann

It is a relatively new insight that sleep facilitates the consolidation of newly acquired memories. Sleep after learning typically leads to better memory performance at a later retrieval test compared to equivalent periods of wakefulness. Although current evidence overwhelmingly suggests that sleep consolidates memory, we are only starting to understand the psychological and neurophysiological mechanisms underlying this intriguing effect. The present symposium provides an overview of hot topics and new opportunities for understanding mechanisms of sleep-dependent memory consolidation in humans. The first contribution by Ken Paller (Northwestern University) introduces memory reactivation during sleep as a presumed consolidating mechanism and shows that it is possible to trigger specific memories during sleep by using auditory cues that are associated with single memory contents. The second talk by Jessica Payne (University of Notre Dame) reports data on the role of rapid eye movement (REM) sleep for the consolidation of emotional memory, indicating that physiological reactivity to emotional stimuli at encoding channels the preferential consolidation of this information during subsequent REM sleep. The third talk by Rebecca Gómez (University of Arizona) examines the role of sleep in the abstraction of underlying rules from newly learned material in infants and young children, showing that sleep might serve different functions for memory abstraction during development. The fourth contribution by Susanne Diekelmann (University Tuebingen, Germany) discusses how sleep, particularly slow wave sleep, preferentially consolidates memories that are relevant for future behavior, such as prospective memory for intended actions.

Talk 1: Targeted Memory Reactivation During Sleep

Ken A. Paller; Northwestern University

A fundamental feature of memory is the propensity for changes in storage after initial encoding. Recent findings favor the possibility that memory consolidation during sleep might be instrumental for determining the nature of long-term memory, by actively maintaining the memories that we carry through our lives. In other words, the information that is ultimately available for retrieval may tend to be that which is reactivated during sleep. Some support for this idea comes from studies of healthy elders and patients diagnosed with amnestic Mild Cognitive Impairment. We showed, for example, that overnight retention of verbal information was related to intervening slow-wave sleep, and that defective slow-wave sleep can contribute to age-related memory impairment. What factors determine which information is reactivated and what memories we keep or lose? Studies of young, healthy individuals have shown that memory processing during sleep can benefit memory storage, particularly for information that is valued for future use. Moreover, we can proactively determine what memory processing takes place during sleep. We used subtle auditory cues during slow-wave sleep to promote the reactivation of specific spatial associations and of specific skills. Research elucidating the mechanisms of this targeted memory reactivation provides important clues about memory consolidation during sleep and about how we can make the best use of this understudied aspect of memory function. Moreover, novel applications of targeted memory reactivation offer potential advantages, and may prove helpful when learning is critical for recovery from disease or for overcoming maladaptive prior learning.

Talk 2: Emotional Memory and Psychophysiological Reactivity Following a Night of Sleep

Jessica D. Payne; University of Notre Dame

Negative objects are typically better remembered than the neutral backgrounds on which they are placed, while neutral objects and backgrounds tend to be remembered equivalently. This preferential reinforcement of negative arousing stimuli within scenes is known as the emotional memory trade-off effect, and it has been shown to increase following periods of sleep. Here we examined 1) the sleep stage correlates of this selective benefit to emotional objects within scenes, 2) whether the degree of physiological reactivity (as measured by heart-rate deceleration) to images at encoding would predict subsequent memory for these objects, and 3) whether physiological reactivity in response to scenes at encoding would be depotentiated following sleep compared to wakefulness. Results suggest that preferential memory for emotional objects was associated with rapid eye movement (REM) sleep, that the degree of heart rate deceleration to negative scenes at encoding predicted selective memory for negative objects, but only in the sleep group, and that sleep globally depotentiated physiological reactivity to both negative and neutral scene components. These results suggest that selective emotional memory consolidation during sleep is largely associated with time spent in REM sleep, that larger visceral reactions to negative pictures at encoding set the stage for this preferential memory for negative objects following a night of sleep, and that sleep has a net depotentiating effect on physiological reactivity to images, regardless of valence.

Talk 3: The (Complicated) Role of Sleep in Abstraction in Infants and Young Children

Rebecca L. Gómez; University of Arizona

Abstraction is a crucial form of learning involving retention of key aspects of experience while enabling generalization to new information. This ability is critical for infants and young children, who must generalize to instances similar but not identical to those encountered during learning (e.g., a familiar grammatical form instantiated in novel vocabulary, or a new referent for an existing word). We investigate contributions of sleep to abstraction in children for whom memory structures are still developing. At 15 months of age, a period of less mature memory, sleep in an interval immediately after learning enables abstraction of a linguistic rule and portability of that rule to new vocabulary after a 4-hour delay. With increased maturity of memory structures, the relationship between sleep and abstraction becomes more complex. Preschoolers (2-3 years of age) who successfully form an abstraction during learning must sleep soon afterwards to demonstrate abstraction 24 hours later. However, in preschoolers unable to form an abstraction during learning, immediate sleep may be disadvantageous: the high levels of NREM sleep characteristic at this age appear to contribute to consolidation of irrelevant details and an inability to abstract later. The findings suggest that sleep plays different roles for memory and abstraction at different points in development, perhaps as a function of the learning systems most involved: cortical systems prevail in infancy, whereas increased hippocampal functioning is thought to come online in early childhood.

Talk 4: The Role of Future Relevance in Sleep-Dependent Memory Consolidation

Susanne Diekelmann; University Tuebingen, Germany

Memories are of the past but serve to regulate future behavior. While sleep is well known to benefit the consolidation of memories for past events, the role of sleep for memories of future relevance is less well understood. Recent research provides initial evidence that consolidation processes during sleep are selective, preferentially targeting those memories that are relevant for future behavior, such as memories for which participants expect a retrieval test and memories that are associated with monetary reward. In a series of studies, we have also shown that the prototype of future-relevant memory, prospective memory for intended actions, is especially facilitated by sleep. Sleep improves the ability to execute intended actions after a delay of two days, and this improvement specifically depends on slow wave sleep rather than REM sleep. Sleep thereby enhances both components of prospective memory: (i) remembering that something has to be done and (ii) remembering what has to be done. Moreover, the facilitative effect of sleep on prospective memory is particularly evident under conditions of reduced attentional resources at retrieval, suggesting that sleep strengthens intentional memory representations so that the intention automatically comes to mind at the appropriate time without the need for additional attentional resources. This evidence collectively indicates that some memories gain preferential access to sleep-dependent memory consolidation based on their relevance for future behavior.

Mini-Symposium Session 8

Tuesday, April 8, 10:00 am - Noon, Salon E

MEG, EEG and fMRI based functional connectivity analysis: Relevance to cognition

Chair: Satu Palva, Neuroscience Center, University of Helsinki
Speakers: Satu Palva, Nathan Weisz, Joerg Hipp, Jonathan Power

Recent advances in functional neuroimaging have highlighted the role of inter-areal interactions and functional connectivity in human cognition. Non-invasive electrophysiological recordings with electro- and magnetoencephalography (EEG / MEG), with excellent temporal resolution, permit the monitoring of functional connectivity at the sub-second time-scale of human cognitive operations. Functional magnetic resonance imaging (fMRI) recordings have further revealed a reliable and spatially detailed organization of human functional networks during rest. This symposium will discuss recent results showing that when MEG and EEG are combined with source reconstruction techniques and graph theory metrics, large-scale functional connectivity and inter-areal synchronization can be revealed at several temporal scales and in several anatomical networks. The strength and spectro-anatomical patterns of these inter-areal interactions predict behavioral task performance in perceptual, working memory, and attention tasks. We also present data showing that plastic changes in congenital blindness are associated with changes in local and large-scale neuronal interactions during rest and an auditory task. Further, we will discuss the spatial organization of fMRI-derived human functional connectivity, ways to identify important nodes in correlation networks, and the differential impact, in cognitive terms, of human brain lesions in different parts of the network. This symposium argues that the rich hierarchy of functional connectivity at sub-second time-scales may underlie the integration of information across brain regions to mechanistically support human cognition. We will further argue that both sub-second time-scale connectivity observed with MEG and slow BOLD fluctuations in fMRI predict sensory and cognitive impairments and the plasticity of neuronal networks.

Talk 1: Inter-areal synchrony in fronto-parietal and sensory networks underlies performance in working memory and attention tasks

Satu Palva, Sheng Wang, Roosa Honkanen, Santeri Rouhinen, J. Matias Palva; Neuroscience Center, University of Helsinki

Attention and working memory (WM) are associated with large-scale neuronal activity distributed across the cortex. The neuronal mechanisms that coordinate this anatomically distributed processing into introspectively coherent cognition have remained largely unknown. Synchronization of neuronal activity in the beta and gamma frequency bands gives rise to transient neuronal assemblies that may, through relational coding, support the coordination and integration of distributed processing. I will present data investigating the functional role of local and large-scale neuronal interactions in attention and WM. We have estimated inter-areal synchronization and local oscillation amplitudes from concurrent MEG and EEG recordings in individual cortical anatomy, using source reconstruction techniques and graph theory metrics, during visual WM and attention tasks. The strength of inter-areal synchronization in several distinct anatomical networks and sub-second time-scales predicts task performance and individual variation in the behavioral accuracy and capacity of both attention and WM. To estimate the neuronal correlates of multi-object visual attention, we used fMRI-based sub-network structures to identify the strength of synchronization in distinct subsystems. These data reveal that behavioral performance and attentional capacity are mainly predicted by concurrent synchronization in fronto-parietal and sensory networks. We have furthermore observed that both local and inter-areal synchronization in task-relevant visual and fronto-parietal regions during the working memory retention period correlate with and predict performance and the features maintained in a visual working memory task. Together, these data reveal that local and inter-areal synchronization at several sub-second time-scales may mechanistically underlie performance in human attention and WM tasks.

Talk 2: Pre-stimulus functional networks form predispositions for upcoming conscious percepts of near-threshold stimuli

Nathan Weisz, Julia Frey, Sabine Leske, Thomas Hartmann, Philipp Ruhnau; Center for Mind and Brain Sciences, University of Trento, Italy, Department of Psychology, University of Konstanz, Germany

Near-threshold (NT) stimuli are often used to study neural processes associated with conscious experience. A growing body of work shows that reduced alpha power in task-relevant regions, already prior to stimulation, is predictive of perceiving the NT stimulus. This has mainly been interpreted within the context of the functional inhibition hypothesis, which holds that relevant areas are in a state of relatively reduced excitability. In a series of MEG studies, we show that reduced alpha power prior to hits is accompanied by increases in network measures that imply a stronger integration of the respective regions into a distributed functional network. Our findings argue for pre-established pathways of neural communication that form "windows" to upcoming conscious access. The lecture will introduce this framework and exemplify how applying functional connectivity and graph-theoretical analyses to MEG data helps to gain deeper insights into the predispositions of conscious perception.

Talk 3: Altered neuronal interactions in the cortex of the blind

Joerg Hipp, David J. Hawellek, Andreas K. Engel, Markus Siegel; Center for Integrative Neuroscience (CIN), University of Tübingen, Center for Neural Science, New York University, University Medical Center Hamburg-Eppendorf

In congenital blindness, the brain develops under severe sensory deprivation and undergoes remarkable plastic changes in both structure and function. However, the neuronal mechanisms that underlie this altered functional state remain largely unknown. I will present MEG experiments investigating local and large-scale neuronal interactions in the visual cortex of the blind during rest and during an auditory task. Comparing resting activity in the blind and sighted visual cortex revealed dramatic differences in neuronal interactions that dissociate from effects in local signal power. Furthermore, we found specific oscillatory processes that reflect non-visual auditory processing in the visual cortex of the blind, and found that these processes were functionally coupled with the auditory cortex. This work reveals intact electrophysiological activity in deprived visual cortex and suggests that it is functionally integrated into a larger network serving non-visual functions.

Talk 4: Healthy brain network organization predicts cognitive outcomes after brain lesions

Jonathan Power, David Warren, Joel Bruss, Natalie Denburg, Haoxin Sun, Steve Petersen, Daniel Tranel; Washington University in Saint Louis, MO, USA, University of Iowa, USA

The systems-scale organization of the human brain has become much better understood in the last decade, largely because of the realization that task-associated regions display correlated spontaneous fluctuations in the fMRI BOLD signal. Several groups have now partitioned the human cortex into 1-2 dozen distributed systems, many of which have (partially) known functional attributes (e.g., the visual system, the dorsal attention system, etc.). Here, we will discuss one such strategy for partitioning the brain into functional systems. We then identify locations (target locations) in the brain that 1) are proximal to elements of many systems, and 2) exhibit spontaneous BOLD correlations with many systems. We predict that lesions to such locations may disrupt interactions among different systems, leading to broad and potentially severe impairments in cognition. We then test this hypothesis by examining the behavioral and cognitive profiles of subjects with focal, stable brain lesions at target locations. We contrast the effects of lesions at target locations with the effects of lesions at control locations that do not possess either of the properties mentioned above, but which have been previously identified as "cortical hubs" using methods we have argued against. Lesions to target locations (N=19) uniformly produced widespread impairment across many cognitive domains that far exceeded the deficits predicted by traditional neuropsychological principles. In contrast, lesions to control locations (N=11) uniformly produced impairment in one or a few cognitive domains, in accord with clinical expectations. These preliminary results substantiate our predictions and suggest revisions of the current understanding of brain hubs.

Mini-Symposium Session 9

Tuesday, April 8, 10:00 am - Noon, Salon D

Oscillatory mechanisms of attentional control

Chair: Tom Marshall, Donders Institute for Brain, Cognition and Behaviour, Nijmegen (Netherlands)
Co-Chair: Ole Jensen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen (Netherlands)
Speakers: Lisa Payne, Tom Marshall, Saskia Haegens, Clayton Curtis

The alpha rhythm is the most prominently observable feature in human EEG and MEG. Although originally believed to reflect "cortical idling", a body of evidence now suggests that alpha in fact reflects functional inhibition. Specifically, alpha oscillations in sensory cortex are believed to reflect active inhibition of task-irrelevant information and of task-irrelevant brain regions, allowing efficient communication between task-relevant regions and processing of task-relevant information. In this framework, alpha reflects a top-down inhibitory drive, whereas gamma-band oscillations reflect active processing. This interaction between high- and low-frequency oscillations may represent a fundamental mechanism by which the brain operates as a network; however, the mechanisms by which the alpha rhythm is generated locally in sensory cortex, the mechanisms controlling its deployment, its spatiotemporal scope, and the interaction between alpha and gamma oscillations are not yet fully understood. In this symposium we will draw on a range of methods - scalp and intracranial EEG, MEG, laminar recordings from non-human primates, and non-invasive brain stimulation (TMS) - to characterize the oscillatory mechanisms that underlie attentional control. In particular, we will address the following questions: 1) Which brain regions exert the top-down control that allows alpha oscillations to be deployed in a task-appropriate manner? 2) What is the underlying neurophysiological mechanism by which the sensory alpha rhythm is generated? 3) What is the spatial and temporal specificity of the active inhibition produced by alpha? 4) How can the interaction between alpha and gamma oscillations be characterized?

Talk 1: Alpha-band oscillations protect selective auditory and visual processing

Lisa Payne, Chad Dube, Robert Sekuler; Brandeis University, Waltham, MA, USA

Change in cortical alpha band oscillations (8-14 Hz) has been used as a marker of attentional control. We demonstrated that cued, intentional ignoring of task-irrelevant information gives rise to increased electroencephalogram (EEG) alpha-band power for auditory, linguistic, and visual stimuli. In experiment one, subjects' attention was directed either to an auditory attribute of spoken words or to an orthographic attribute of printed words. Right-lateralized posterior alpha band power increased after a cue to ignore the spoken word, consistent with previous results on auditory selective attention. During a word recognition test after all trials had been completed, subjects performed at chance for recall of voice gender for words they had been cued to ignore. Strikingly, when subjects were cued to ignore the font of a printed word, alpha oscillations increased over left fronto-temporal regions commonly associated with verbal processing. In experiment two, subjects' attention was directed either to the first or second of two successive, briefly presented study Gabors. A cue preceding each Gabor signified whether that Gabor should be remembered or ignored. After a brief retention period, subjects reproduced the spatial frequency of the to-be-remembered Gabor. When the to-be-ignored Gabor appeared second in the sequence, pre-stimulus posterior alpha power predicted the degree to which that task-irrelevant stimulus distorted subsequent recall of the to-be-remembered stimulus. Together the two sets of results demonstrate that timely deployment of attention-related alpha-band oscillations can aid short-term memory by filtering out task-irrelevant information.

Talk 2: A causal role for FEF in top-down control of alpha and gamma oscillations during attentional allocation

Tom Marshall, Ole Jensen, Til Ole Bergmann; Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands

Directing attention produces frequency-specific modulation of neuronal oscillations in sensory cortex; anticipatory alpha band activity decreases contralaterally and increases ipsilaterally to attention, whereas stimulus-induced gamma band activity increases contralaterally and decreases ipsilaterally to attention. We investigated the role of the Frontal Eye Fields (FEFs) in providing top-down control of these modulations. Previous research has suggested that the right FEF is dominant; right FEF disruption produces stronger effects on both attention and perception. We inhibited activity in left FEF, right FEF, or vertex (control) in separate sessions using continuous theta burst stimulation (cTBS), before measuring magnetoencephalography (MEG) whilst participants performed a cued spatial attention task. Individual FEF sites were functionally localized using fMRI. Analysis of the control condition revealed characteristic modulations of alpha and gamma oscillations: anticipatory alpha power decreased contralaterally and increased ipsilaterally to attention; stimulus-induced gamma power increased contralaterally and decreased ipsilaterally to attention. cTBS produced site- and frequency-specific disruptions of these effects: right FEF cTBS inhibited anticipatory alpha modulation in the right hemisphere; left FEF cTBS inhibited alpha modulation in the left hemisphere. Stimulus-induced gamma modulation in left hemisphere was increased following right FEF cTBS, whereas left FEF cTBS produced no effects. Thus, whilst the alpha effects were symmetric, the gamma effects were specific to right FEF stimulation, suggesting that the previously reported right-hemisphere dominance in this network is mediated by high-frequency stimulus-induced oscillatory activity. These data demonstrate a causal role for FEF in the direction of visual attention by top-down control of both alpha and gamma oscillations.

Talk 3: Laminar profile of the sensory alpha rhythm

Saskia Haegens; Columbia University College of Physicians and Surgeons, New York, Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg

Recent work suggests that oscillatory brain activity in the alpha band (8-14 Hz) reflects functional inhibition and plays an important role in attention. However, the underlying neurophysiological mechanisms remain poorly understood. Here, we studied the laminar profile of the alpha rhythm in primary visual (V1), somatosensory (S1) and auditory (A1) cortex of the macaque monkey. We used linear-array multi-electrodes to record laminar profiles of spontaneous and sensory event-related local field potentials (LFP) and multi-unit activity (MUA) in S1, V1 and A1. We examined the laminar profile of the alpha rhythm both in the LFP signal and in its second spatial derivative, the current source density (CSD) signal, which helps to localize the underlying current generators. First, we asked in which layer alpha activity is most prominent. In accordance with earlier reports, we found that in the LFP profiles, alpha was strongest in infragranular layers. However, based on the CSD profiles, alpha was strongest in supragranular layers. Next, we showed that the reference location substantially affects the LFP but not the CSD spectra. We propose that the LFP signal partly reflects volume-conducted activity, while the CSD allows us to zoom in on local generators, hence leading to this seemingly surprising difference. We then asked how alpha interacts with neuronal processing as reflected by MUA, and found that granular MUA aligned with supragranular alpha phase. Furthermore, we explored how different attention conditions affect alpha activity per layer. We conclude that the laminar pattern of alpha band activity might be more complex than generally assumed.
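The CSD transform used here is, in its simplest form, the (negated) second spatial difference of the laminar LFP profile. The sketch below is a minimal illustration, not the authors' analysis code; the electrode count, spacing, and toy depth profile are assumptions. It demonstrates the reference-independence noted in the abstract: adding a common reference offset to every channel changes the LFP but leaves the CSD untouched.

```python
import numpy as np

def csd(lfp, h=1.0):
    """Simple current source density estimate: negated second spatial
    difference across depth. lfp has shape (channels, samples), channels
    ordered by depth; h is the inter-electrode spacing. The two edge
    channels are lost by the finite difference."""
    return -(lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]) / h**2

depths = np.arange(16)[:, None]            # 16 laminar contacts (toy)
t = np.arange(0, 1, 1e-3)[None, :]         # 1 s at 1 kHz
# Toy 10 Hz generator with a sink/source pattern across depth
lfp = np.sin(2 * np.pi * 10 * t) * (depths - 8) * np.exp(-(depths - 8) ** 2 / 8)
rereferenced = lfp + 5.0                   # a common reference offset on all channels
print(np.allclose(csd(lfp), csd(rereferenced)))  # True: CSD is reference-free
```

Any signal component that is constant across depth (including volume-conducted, far-field activity) cancels in the second difference, which is why the CSD spectra, unlike the LFP spectra, do not depend on the reference location.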

Talk 4: Phasic changes in gamma power and the mechanisms of working memory

Clayton Curtis, Sangita Dankedar; New York University

Past research on the neural mechanisms of working memory has almost exclusively focused on persistent neural activity, which is thought to integrate perception and action over time. In contrast, phasic changes in neural activity during working memory have remained largely unexplored. Here, we examine phasic changes in the power of gamma oscillations during working memory maintenance. We acquired intracranial electroencephalography recordings from the posterior parietal cortices of human patients with pharmacologically intractable epilepsy performing a memory-guided saccade task. Significant cross-frequency coupling was observed between gamma power and the phase of alpha oscillations. In addition to sustained, spatially specific changes in gamma power, we also observed phasic changes in gamma power during the working memory delay period, occurring predominantly at frequencies in the alpha range. The results suggest that phasic high-frequency power changes are involved in working memory maintenance. Low-frequency oscillations (through phase-locking) and cross-frequency coupling could be mechanisms that mediate the timing and amplitude of high-frequency localized activity. We hypothesize that these phasic changes - the periodic coupling between local populations of neurons in parietal cortex and spatially separated regions of cortex - are the means by which top-down attention signals coordinate large-scale brain networks.
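The coupling between gamma power and alpha phase described above is commonly quantified with a phase-amplitude coupling index. The sketch below uses the mean-vector-length measure on synthetic signals (a standard metric in this literature, not necessarily the authors' exact method; the signals and the function name `pac_mvl` are illustrative).

```python
import numpy as np

def pac_mvl(phase, amp):
    """Mean-vector-length phase-amplitude coupling index: large when `amp`
    is systematically higher at particular values of `phase`, near zero
    when amplitude and phase are unrelated."""
    return np.abs(np.mean(amp * np.exp(1j * phase)))

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1e-3)                          # 10 s at 1 kHz
alpha_phase = (2 * np.pi * 10 * t) % (2 * np.pi)    # 10 Hz phase time series
# Gamma envelope waxing at a fixed alpha phase (coupled) vs. random (uncoupled)
coupled = 1 + np.cos(alpha_phase - np.pi)
uncoupled = rng.uniform(0, 2, t.size)
print(pac_mvl(alpha_phase, coupled) > pac_mvl(alpha_phase, uncoupled))  # True
```

With real iEEG data, `alpha_phase` and the gamma envelope would typically be obtained from band-pass filtering followed by a Hilbert transform, and the raw index would be assessed against surrogate data to control for signal non-stationarities.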