CNS 2020 | Data Blitz Session Schedule
Data Blitz Session 1 | Saturday, March 14 | 12:30 - 2:00 pm | Back Bay A&B | Vishnu Murty
Data Blitz Session 2 | Saturday, March 14 | 12:30 - 2:00 pm | Back Bay C&D | Jeffrey Johnson
Data Blitz Session 3 | Saturday, March 14 | 12:30 - 2:00 pm | Grand Ballroom | Marian Berryhill
DATA BLITZ SESSION 1
TALK 1: Tracking of Continuous Speech in Noisy Auditory Scenes at 7T fMRI
Lars Hausfeld, Maastricht University
Previous ECoG, MEG, and EEG studies have examined brain responses to 'cocktail-party'-like listening situations. These studies showed that neural measures tracked ongoing acoustic features (e.g., amplitude of the speech envelope, spectrogram, pitch) of attended speech and, to a lesser extent, of unattended speech. Furthermore, it was shown that primary and non-primary auditory cortical regions in STG contributed to the tracking of speech and its modulation by task. However, due to the limited coverage and spatial resolution of these measurements, the specific roles of these regions, and of areas outside auditory cortex, require further study. Here, we measured brain responses with high-field fMRI at 7T while participants selectively attended to one of two speakers in an auditory scene. We show that speech tracking, previously performed at high temporal resolution, is possible with the comparably slow 1-Hz sampling of BOLD activation. Furthermore, the large coverage and high spatial resolution allowed us to map regions tracking the speech envelope as well as the pitch contours of attended and unattended speakers. Single-participant analyses show that tracking of both attended and unattended speech features occurs in Heschl's gyrus and superior temporal cortex. Contrasting the tracking of attended and unattended speech showed that attentional modulation (i.e., higher tracking for attended vs. unattended speech) is restricted to non-primary auditory cortical regions in the planum temporale and the superior temporal gyrus and sulcus. In addition, our results suggest a role for posterior temporal cortex in processing the distractor speaker.
TALK 2: Are attention-related modulations of alpha-band dynamics local or global?
Mattia Pietrelli, UW Madison
Research on endogenous attention has shown that predictive cues about the location and timing of forthcoming visual stimuli can influence behavior and several stages of neural processing. One proposed neural mechanism is that spatial and temporal predictions influence the processing of visual stimuli by hijacking ongoing alpha-band oscillatory activity in brain areas involved in visual perception. However, it is not known whether this top-down modulation of alpha oscillatory activity is selective for the circuits that represent target locations, or whether it more broadly influences the physiological tone of the representation of the entire visual field. To answer this question, we manipulated spatial and temporal predictability during a Posner-style visual discrimination task in which, within a block, stimuli could appear in only two of the four cardinal locations (i.e., either left-right or top-bottom). Consequently, in each block, two locations were task-relevant while the other two were task-irrelevant. Inverted encoding modeling (IEM) was used to isolate patterns of alpha-band activity specific to each of the four locations. Results showed that top-down expectations biased alpha-band power in a target-location-specific manner, suggesting that alpha-band oscillatory activity can be controlled within discrete, local networks in order to optimize visual perception. Furthermore, IEM reconstructions waxed and waned periodically between the cued and uncued locations, consistent with the idea that alpha oscillatory activity sampled the two task-relevant locations rhythmically.
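The IEM step described above can be sketched with idealized data. This is a minimal illustration, not the authors' pipeline: the one-hot channel basis over the four locations, the electrode count, and the noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four cardinal locations modeled as one tuning channel each (a simplifying
# assumption; electrode count and noise level are illustrative).
n_channels, n_electrodes, n_trials = 4, 32, 200
locations = rng.integers(0, n_channels, n_trials)
C = np.eye(n_channels)[locations]            # trials x channels (one-hot)

# Simulated alpha-band power: electrodes mix channel responses linearly.
W_true = rng.normal(size=(n_channels, n_electrodes))
B = C @ W_true + 0.1 * rng.normal(size=(n_trials, n_electrodes))

# Training: least-squares estimate of channel-to-electrode weights.
W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)

# Inversion: project alpha power back into channel space and decode the
# location as the channel with the largest reconstructed response.
C_hat = B @ np.linalg.pinv(W_hat)
decoded = C_hat.argmax(axis=1)
accuracy = float((decoded == locations).mean())
```

Training and inversion use the same trials here for brevity; a real analysis would cross-validate across trials or blocks.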
TALK 3: The frontal aslant tract (FAT) white matter microstructure differentiates young children with ADHD from typical controls
Anthony Dick, Florida International University
Attention-deficit/hyperactivity disorder (ADHD) is typically diagnosed in early childhood, and is characterized by deficits in executive function (EF) and in motor coordination. The neurobiology of ADHD with respect to EF in young children is not well understood, though identifying biosignatures of EF deficits in ADHD could serve as indicators of treatment response, as well as inform development of future treatments. To this end, we conducted a diffusion-weighted imaging (DWI) study in 196 4-7-year-old children (69% male, Mage = 5.7 yrs, with (n = 100) and without (n = 96) a diagnosis of ADHD). We mapped a recently-defined fiber pathway known as the frontal aslant tract (FAT). Given its connectivity profile connecting the right inferior frontal gyrus with the pre-SMA/SMA, and its previous association with EF in children (Garic et al., 2019), Dick and colleagues (2019) proposed that the right FAT might be involved in the planning, sequencing, and inhibitory control of potentially conflicting motor plans for manual movements. Results of the DWI study were in line with that prediction. We found that group status (ADHD vs Control) moderated the significant association between right FAT microstructure and performance on a motor sequencing task requiring inhibitory control (i.e., the Head-Toes-Knees-Shoulders task; p < .05). Group status did not moderate the significant association between microstructure and performance on typical EF tasks (Flanker and Dimensional Change Card Sort). Thus, 1) the right FAT is a potential biosignature of early ADHD diagnosis, but 2) only for tasks that require inhibitory control over sequenced movements.
TALK 4: Crossmodal modulation of the intracortical depth profile of BOLD signals in auditory cortex
Kaisu Lankinen, Harvard Medical School
Previous electrophysiological studies in non-human primates have shown different laminar activation profiles to auditory vs. crossmodal visual stimuli in auditory cortices and adjacent association areas. Using 1-mm isotropic resolution 3D echo-planar imaging at 7T, we studied the intracortical depth profiles of fMRI blood-oxygen level dependent (BOLD) signals to unimodal or multisensory stimuli in 11 healthy subjects. Subjects were presented with 5-stimulus trains of 300-ms auditory noise bursts (A), visual static checkerboard patterns (V), and audiovisual (AV) combinations of these two. In a simple oddball task, subjects were asked to detect occasional target stimuli (pure tone and/or diamond shape). The fMRI data were resampled into a family of 11 equally spaced surfaces within the gray matter. Intracortical depth-profiles of percentage-signal-changes of the BOLD signal were determined in five anatomically defined regions of interest (ROIs) in auditory (Heschl's gyrus, HG; Heschl's sulcus, HS; planum temporale, PT; superior temporal gyrus, STG) and polymodal (superior temporal sulcus, STS) cortices. The biases caused by the draining vein effect, increasing the BOLD sensitivity towards the superficial layers, were accounted for by using a variety of normalization techniques. Our linear mixed-effect model of the contrast AV-A suggested that combining auditory stimuli with visual inputs increased the BOLD signal more in the superficial than deeper 'layers' in PT and STS (p < 0.05). The cortical depth profile of the BOLD signal may be modulated differentially for unisensory and multisensory stimuli in posterior non-primary auditory cortices and adjacent polymodal areas.
Supported by: R01DC017991, R01DC016765, R01DC016915, R01MH111419.
TALK 5: Contiguous locations increase reliability of parietal maps
Summer Sheremata, Florida Atlantic University
Reducing the correlation of stimulus positions protects visual retinotopic maps from artifacts known to affect properties such as the size of the spatial representation. Outside of visual cortex, however, it is not clear what properties are necessary to demonstrate map structure. In parietal cortex, spatial attention increases map reliability. While it is not clear what properties of spatial attention drive these effects, one possibility is that presenting stimuli in contiguous spatial locations serves as a spatial cue with which the stimulus can be tracked. In this experiment, we used the population receptive field (pRF) method while presenting stimuli at contiguous or discontiguous spatial locations to determine whether stimulus presentation affected the properties of spatial representations in parietal cortex. We compared the first and last runs of each stimulus presentation to estimate the reliability of size and preferred-location estimates. As predicted by known properties of spatial attention, contiguous spatial presentation led to greater reliability of spatial representations across parietal cortex. However, greater reliability was also accompanied by larger pRF sizes for contiguous as compared to discontiguous presentations. These results demonstrate that contiguous stimulus presentation reveals parietal retinotopic map structure more reliably, and may therefore allow maps to be collected with fewer runs, despite changes in individual pRF properties.
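The first-run vs. last-run reliability comparison can be illustrated with simulated pRF size estimates. The noise levels standing in for contiguous vs. discontiguous mapping are assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(7)
n_voxels = 200

# Hypothetical pRF size estimates (deg) for the same voxels in the first and
# last runs; contiguous presentation is simulated as yielding less
# estimation noise than discontiguous presentation (an assumption).
true_size = rng.uniform(0.5, 4.0, n_voxels)
first_contig = true_size + 0.3 * rng.normal(size=n_voxels)
last_contig = true_size + 0.3 * rng.normal(size=n_voxels)
first_discon = true_size + 1.2 * rng.normal(size=n_voxels)
last_discon = true_size + 1.2 * rng.normal(size=n_voxels)

def reliability(a, b):
    """Test-retest reliability as the Pearson correlation across voxels."""
    return float(np.corrcoef(a, b)[0, 1])

r_contig = reliability(first_contig, last_contig)
r_discon = reliability(first_discon, last_discon)
```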
TALK 6: Lists with and without syntax: Neural correlates of syntactic structure
Ryan Law, New York University Abu Dhabi
A fundamental challenge for the neurobiology of syntax is de-confounding syntax from semantics. Recent magnetoencephalographic (MEG) findings implicate the left posterior temporal lobe (PTL) in syntactic composition, evidenced by cases in which two words semantically combine in two conditions but syntactically combine only in one (Flick & Pylkkänen, 2018). Here we used lists as both test and control conditions, a novel approach to controlling semantics while examining neural effects of syntactic structure. Three-noun lists (pianos, violins, guitars) were embedded in sentences (The music store sells pianos, violins, guitars?) and in longer lists (theater, graves, drums, mulch, pianos, violins, guitars?). These list items were matched in both their lexical characteristics and local combinatorics across conditions: in neither condition do the words semantically or syntactically compose with one another (e.g., 'pianos violins' does not form a phrase). We also varied the semantic association levels of the list items to contrast syntax with associative semantics. In a memory-probe task, the presence of structure resulted in increased source-localized MEG activity for lists-inside-sentences over lists-inside-lists in left inferior frontal cortex (242-273ms post-stimulus-onset), left (310-331ms) and right (465-499ms) anterior temporal lobes, and left PTL (344-368ms). Association effects were observed in the left temporo-parietal cortex, with higher activity elicited by high than low associative words (353-419ms). While explanations in terms of global sentential semantics cannot yet be ruled out, our approach in using lists allows us to rule out explanations in terms of lexical semantics and local semantic composition.
TALK 7: Spatiotemporal dynamics of left Inferior Frontal Gyrus recruitment during spontaneous and cued speech production
Nikita Agrawal, NYU School of Medicine
A variety of speech production tasks are used to localize language for surgical planning to avoid postoperative language deficits. Neuroimaging studies using fMRI and PET have shown that overlearned speech production, such as number counting, does not reliably activate left-hemisphere language cortex. Similarly, electrical stimulation of cortex during counting does not reliably produce a speech deficit. While prior studies have linked left inferior frontal gyrus (IFG) activation to pre-articulatory stages of speech production, the timing and degree of IFG recruitment during spontaneous speech remain underspecified. Here, we draw on the high spatial and temporal resolution of electrocorticographic (ECoG) data recorded in neurosurgical patients to examine the degree and timing of left IFG recruitment. We measured high-gamma (70-150 Hz) power responses time-locked to speech production for several spontaneous and cued speech production tasks: number counting, months recitation, sentence repetition, and word reading. We cross-correlated the neural activity with the amplitude of the patient's speech to measure the degree of correlation as well as the latency between neural activity and the speech produced. Preliminary data (N=3) demonstrate that IFG recruitment preceded speech production across tasks, but the degree of IFG recruitment increased as tasks became more effortful and utterances less overlearned. Furthermore, adjacent frontal regions, including the anterior and middle frontal gyri, were recruited after speech production during the spontaneous tasks. This activity was not seen during cued visual word reading and most likely reflects speech-monitoring processes.
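The cross-correlation analysis can be sketched with a synthetic envelope. The 200-ms neural lead, the sampling rate, and the signal shapes are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 100                                  # Hz, illustrative sampling rate
t = np.arange(0, 10, 1 / fs)

# Simulated speech amplitude envelope and a high-gamma power trace that
# leads speech by 200 ms, mimicking pre-articulatory IFG activity.
speech = np.abs(np.sin(2 * np.pi * 0.7 * t)) + 0.05 * rng.normal(size=t.size)
neural = np.roll(speech, -20)             # shift left: neural happens earlier

def xcorr_peak(neural, speech, fs, max_lag_s=0.5):
    """Return (peak r, lag in s); lag > 0 means neural leads speech."""
    n, s = neural - neural.mean(), speech - speech.mean()
    lags = np.arange(-int(max_lag_s * fs), int(max_lag_s * fs) + 1)
    r = np.array([np.corrcoef(np.roll(n, L), s)[0, 1] for L in lags])
    return float(r.max()), float(lags[r.argmax()] / fs)

peak_r, lag = xcorr_peak(neural, speech, fs)   # recovers the 0.2 s lead
```

`np.roll` wraps the signal ends, which is harmless for this periodic toy example but would need proper padding for real recordings.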
TALK 8: VWFA Functional Connectivity for Print and Speech Processing in Emerging Readers
Rebecca Marks, University of Michigan
Learning to read requires children to develop an efficient neural network that connects the visual and language systems of the brain. Recent work suggests that specificity for print processing in the Visual Word Form Area (VWFA) emerges rapidly over the first year of schooling (Dehaene-Lambertz, Monzalvo & Dehaene, 2018). Furthermore, the VWFA has been found to be responsive to auditory stimuli in beginning readers, ages 5-6 (Wang, Joanisse & Booth, 2018). How is the VWFA functionally connected to language regions of the brain during word reading, and how does this connectivity differ between print and speech processing for emerging readers? Seventy-eight kindergarteners (mean age = 5.7) completed visual and auditory word processing tasks during fMRI. PPI analyses suggest that during print processing, beginning readers show functional connectivity between the VWFA and the left superior temporal gyrus, inferior parietal lobe, and bilateral frontal cortex. During auditory word processing, VWFA activity is significantly correlated with activation in the left superior temporal gyrus (STG), a critical region for language processing in speech and print. This suggests the emergence of a functional network connecting the VWFA and language regions during the early phases of literacy acquisition. Ongoing analyses will further examine connectivity between the VWFA and key language regions. We hypothesize that more advanced readers will show greater VWFA activation in response to auditory stimuli, and that VWFA-STG connectivity will be associated with concurrent reading ability.
TALK 9: Tracking lexical consolidation of novel word meanings: ERP and time frequency analyses
Yushuang Liu, The Pennsylvania State University
The Complementary Learning Systems Theory (Davis & Gaskell, 2009) proposes that novel words are initially encoded by the hippocampal learning system; after a period of consolidation, their memory representation stabilizes in the neocortical network. Measuring EEG in multilingual speakers, Bakker, Takashima, van Hell, Janzen, and McQueen (2015) found supporting evidence for the role of offline consolidation in the semantic integration process. Here, we tested monolinguals with little foreign language learning experience to examine the extent to which consolidation patterns differ between inexperienced and experienced (tested by Bakker et al., 2015) foreign language learners. We examined the offline consolidation effect on semantic integration both 24 hours and one week after learning, using ERP and time-frequency analyses. Thirty monolingual English speakers learned novel words with meanings on Day-1, and another set of novel words with meanings on Day-2. Immediately after word learning on Day-2, they completed two EEG semantic tasks that included words learned on both Day-1 and Day-2. Participants returned on Day-8 and received the same tasks. ERP analysis revealed a semantic priming LPC effect only for novel words learned 24 hours before testing; on Day-8, this semantic priming effect reliably emerged for both sets of novel words. Time-frequency analysis (TFR) revealed increased theta synchronization and alpha desynchronization after a period of consolidation. Taken together, offline consolidation effects also emerged in inexperienced learners learning novel words with meanings. Novel word meaning lexicalization is thus a gradual process for both inexperienced and experienced learners, but prior language learning experience seems to expedite this process.
TALK 10: Humor modulates prediction error updating in first and second language reading comprehension
Megan Zirnstein, Pomona College
Monolinguals, as well as bilingual L2 readers who are highly proficient and skilled at regulating the dominant L1, typically show the same pattern of brain responses when engaged in predictive reading: smaller N400 effects for predicted words, and larger frontal positive effects for plausible prediction errors (Zirnstein et al., 2018). However, studies that use pragmatic cues to guide expectations (e.g., a child speaker is less likely to talk about her forthcoming retirement; van Berkum et al., 2008; Foucart et al., 2015) generally do not elicit the same prediction error responses. One possibility is that stimuli in these studies were unintentionally humorous, and that humor may indicate to readers that prediction errors need not be resolved or learned from. In two ERP experiments, monolingual English and Dutch-English bilingual speakers viewed pictures and read sentences in their L1 and L2 (e.g., the Queen of England; 'Every morning, I drink...'). Target words were predictable (tea), plausible prediction errors (juice), humorous prediction errors (gin), or implausible (paper). Robust N400 effects were observed for all unexpected words. Attempts to resolve and learn from prediction errors, indicated by frontal positive responses, were reduced for monolinguals with higher self-reported sense of humor. For bilinguals, higher L2 proficiency led to better discrimination between conditions, with a frontal positive response for plausible prediction errors, but not for humorous words. Humor, pragmatic knowledge, and L2 proficiency all appear to play an important role in determining how L1 and L2 readers treat the prediction errors they encounter during comprehension.
TALK 11: Using EEG to investigate the neuro-modulatory systems underlying stress and decision making
Thomas D. Ferguson, Centre for Biomedical Research, University of Victoria
When we make decisions and multiple options are available, we compare the known benefits of the best choice (exploiting) to the possible benefits of the other options (exploring). When people are stressed, their ability to effectively manage this explore-exploit trade-off is diminished, as stress leads people to over-exploit. However, it is not entirely clear why this is the case, as multiple neuro-modulatory systems play a role in both the explore-exploit trade-off and the stress response. Here, we used computational modeling and electroencephalography (EEG) to further investigate the explore-exploit trade-off under stressful conditions. More specifically, we sought to determine how different neuro-modulatory systems that play a role in the explore-exploit trade-off - our decisions to explore (in which norepinephrine plays a role) and our ability to learn from feedback (in which dopamine plays a role) - were affected by stress. In the current study, participants were acutely stressed before playing a multi-option slot machine (bandit) task. We used a reinforcement learning model to classify participants' trials as either exploration or exploitation and found that both the exploration rate and the neural learning signals indicative of norepinephrine (the P300) and dopamine (the Reward Positivity) were negatively modulated by stress. Our results show that stress affects multiple neural learning systems that underlie exploration and exploitation. These findings in turn suggest that EEG is an important tool for revealing the interplay between behaviour, neuro-modulatory systems, and stress.
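A minimal sketch of the trial-classification approach, assuming a softmax Q-learning model of the bandit task. The labeling rule (exploit = chose the currently highest-valued arm), the reward probabilities, and the learning-rate and inverse-temperature values are illustrative; the study's actual model is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
true_p = np.array([0.2, 0.4, 0.6, 0.8])  # hypothetical reward probabilities
alpha, beta = 0.2, 5.0                   # learning rate, inverse temperature
Q = np.zeros(4)                          # value estimate per arm
explore = []                             # per-trial explore/exploit label

for _ in range(500):
    p = np.exp(beta * Q); p /= p.sum()   # softmax choice probabilities
    choice = rng.choice(4, p=p)
    # One common labeling rule: a trial is exploitative if the currently
    # highest-valued arm was chosen, exploratory otherwise.
    explore.append(choice != int(Q.argmax()))
    reward = float(rng.random() < true_p[choice])
    Q[choice] += alpha * (reward - Q[choice])   # delta-rule update

explore_rate = float(np.mean(explore))
```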
TALK 12: Classifying individuals into 'info types' based on information-seeking motives
Christopher Kelly, UCL
The human pursuit of information drives intellectual development and social engagement. Here, we test whether individuals can be categorized into 'info-types' according to the motives for seeking knowledge expressed in their information-seeking decisions. We further test whether this classification provides clues about latent psychiatric conditions. Participants indicated whether they wanted to receive 40 different pieces of information related to themselves. They also rated (i) how useful each piece of information would be, (ii) its likely impact on their affective state, and (iii) how often they think about it. Cluster analysis revealed three well-defined 'info-types'. The first type included participants who made information-seeking decisions based predominantly on whether information was useful ('Action Group'). The second included participants who primarily took into account the expected influence of information on their affect ('Affect Group'). The third type consisted of participants who predominantly made information-seeking decisions based on the frequency with which they think about the information in question ('Cognitive Group'). The 'Affect Group' reported the most trans-diagnostic psychopathology symptoms and the 'Cognitive Group' the least. These data suggest that information-seeking behavior can be indicative of mental health. Thus, the research may inform the development of new screening tools based on information-seeking patterns.
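The clustering step can be sketched with a simple k-means over synthetic three-feature rating profiles (mean usefulness, affect impact, thought frequency). The abstract does not specify the clustering algorithm or feature set, so both are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic participants summarized by three hypothetical features; the
# study's actual 40-item rating matrix is not available.
profiles = np.array([[0.9, 0.2, 0.2],    # 'Action'-like profile
                     [0.2, 0.9, 0.2],    # 'Affect'-like profile
                     [0.2, 0.2, 0.9]])   # 'Cognitive'-like profile
X = np.vstack([p + 0.08 * rng.normal(size=(30, 3)) for p in profiles])

def kmeans(X, k, n_iter=50, restarts=20, seed=0):
    """Minimal Lloyd's algorithm with random restarts; returns labels."""
    r = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        cent = X[r.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            d = ((X[:, None] - cent[None]) ** 2).sum(-1)  # squared distances
            labels = d.argmin(1)
            for j in range(k):
                if (labels == j).any():
                    cent[j] = X[labels == j].mean(0)
        inertia = d.min(1).sum()
        if best is None or inertia < best[0]:   # keep the tightest solution
            best = (inertia, labels)
    return best[1]

labels = kmeans(X, 3)
sizes = np.bincount(labels, minlength=3)
```

With well-separated profiles, the three recovered clusters match the three simulated groups (up to label permutation).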
TALK 13: A spatio-temporal analysis on neural correlates of intertemporal choice
Qingfang Liu, The Ohio State University
Intertemporal choice requires choosing between an immediate smaller reward and a delayed larger reward. Previous studies suggest a delay discounting mechanism in which the subjective value of a monetary reward decreases with time delay, and this subjective value is tracked by ventromedial prefrontal cortex and ventral striatum. An accumulation process subserved by dorsomedial frontal cortex (dmFC) and a self-control mechanism subserved by dorsolateral prefrontal cortex (dlPFC) then together select a choice based on the result of subjective valuation. However, the mechanisms of how value accumulation and self-control interact to make a choice, and how self-control applies to the subjective valuation process, remain elusive. To examine these questions over the time course of a decision, we developed and performed an EEG experiment in which the probability of choosing the delayed option was manipulated as an independent variable by a staircase procedure before the EEG session. A computational model equipped with mechanisms including power transformation of time and reward information, attentional selection, and stochastic value accumulation was developed and fit to choice and response-time data using a hierarchical Bayesian approach. Phase-based functional connectivity between putative dmFC and posterior parietal cortex resembled the reconstructed accumulation dynamics of the best-fitting computational model in every experimental condition, and this functional connectivity tracked both value-encoding and accumulator-competition mechanisms. By combining computational modeling with phase-based functional connectivity, our results suggest an interaction between choice valuation and accumulator competition over the time course of intertemporal choice.
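The stochastic value-accumulation component can be illustrated as a race between two noisy accumulators. Simple hyperbolic discounting stands in for the model's power transformation, and every parameter value below is an assumption made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

def trial(amt_now, amt_late, delay, k=0.05, drift_scale=0.08,
          noise=0.1, thresh=1.0):
    """One simulated intertemporal choice: two stochastic accumulators race
    to a threshold; the delayed option's drift uses a hyperbolically
    discounted value (all parameter values are illustrative)."""
    v = np.array([amt_now, amt_late / (1 + k * delay)])  # subjective values
    drift = drift_scale * v / v.sum()                    # value-scaled drift
    x = np.zeros(2)
    t = 0
    while x.max() < thresh:
        # Noisy accumulation, clipped at zero like many racing-diffusion models.
        x = np.maximum(x + drift + noise * rng.normal(size=2), 0.0)
        t += 1
    return int(x.argmax()), t    # 1 = chose delayed option; t = RT in steps

choices, rts = zip(*[trial(20, 40, 30) for _ in range(300)])
p_delayed = float(np.mean(choices))
```

Because the discounted delayed value (16) is below the immediate value (20) under these assumed parameters, the immediate option wins most races, but accumulation noise still produces a substantial share of delayed choices.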
TALK 14: The relationship between creativity and individual semantic network properties
Marcela Paola Ovando Tellez, Institut du Cerveau et de la Moelle épinière, Sorbonne Uni
The associative theory of creativity suggests that creative abilities rely, at least in part, on the organization of semantic associations in memory. Recent research has demonstrated that semantic network methods make it possible to explore the properties and organization of semantic associations and thereby to test this hypothesis. The aim of the current study was to investigate the properties of semantic networks and relate them to creative abilities at the individual level, using graph theory. Individual semantic networks were estimated using relatedness judgments of pairs of words. Thirty-five words were selected based on French association norms and controlled for the theoretical semantic distance between them and for linguistic properties. Topological properties of the estimated individual semantic networks were measured by several graph metrics, which were correlated with individual creativity scores. The theoretical semantic distance between words correlated with the relatedness ratings given by the participants, indicating the validity of our approach. Importantly, we observed a significant correlation between semantic network metrics and creativity as measured by creative achievement and creative task performance. These findings replicate and extend previous similar results and suggest that exploring semantic network properties is a valuable approach to the study of creativity.
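Graph metrics of the kind described can be computed from a thresholded relatedness matrix. Here random ratings stand in for participants' judgments, and the binarization threshold and the choice of metrics (mean clustering coefficient, average shortest path) are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(4)
n = 35                                    # 35 cue words, as in the abstract

# Hypothetical symmetric relatedness judgments in [0, 1]; a real study would
# use each participant's pairwise ratings.
R = rng.random((n, n)); R = (R + R.T) / 2
np.fill_diagonal(R, 0)
A = R > 0.7                               # binarize at an assumed threshold

def clustering_coefficient(A):
    """Mean local clustering coefficient of a binary undirected graph."""
    cc = []
    for i in range(len(A)):
        nb = np.flatnonzero(A[i])         # neighbors of node i
        k = len(nb)
        if k < 2:
            cc.append(0.0); continue
        links = A[np.ix_(nb, nb)].sum() / 2   # edges among neighbors
        cc.append(2 * links / (k * (k - 1)))
    return float(np.mean(cc))

def avg_shortest_path(A):
    """Mean BFS distance over all connected node pairs."""
    n, total, pairs = len(A), 0, 0
    for s in range(n):
        dist = {s: 0}; q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        total += sum(d for t, d in dist.items() if t != s)
        pairs += len(dist) - 1
    return total / pairs

C = clustering_coefficient(A)
L = avg_shortest_path(A)
```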
TALK 15: 1 Hour of Lost Sleep Impacts Financial Markets: Daylight Saving Time Compromises Financial Trading
Frank Song, UC Berkeley
A lack of sleep has negative effects on motivational-effort and optimal decision-making (Krause et al., 2017). However, whether sleep loss impacts real-life financial choice behavior, en masse, has yet to be examined. Here, we tested the hypothesis that a 1-hour sleep manipulation, imposed by daylight saving time (DST), influences appetite for financial decision-making in financial markets. Trading activity of E-mini S&P 500 Futures contracts was analyzed on each Sunday after DST change from 2002-2019, compared with the surrounding Sundays (N=165 trading days, N=6.17 million contracts). Based on the hypothesis of sleep-loss impairments in motivational drive and effort, analyses focused on daily trading volume (representing cumulative trading activity) and intraday volatility (representing price variations linked to trading activity). Following the Spring DST change, resulting in a 1-hour loss of sleep opportunity, both these effort-based trading metrics dropped significantly (38-43%), relative to the surrounding Sundays (p=0.0001-0.0013). Following the Fall DST change, providing a 1-hour increase in sleep opportunity, there was no significant relative change in these trading metrics. Together, these findings establish that a modest reduction in sleep opportunity (1-hour) significantly impacts trading activity, while a converse increase in sleep opportunity (Fall DST) may not be capitalized upon by individuals, precluding a beneficial behavioral effect. These results support a biological framework of sleep loss reflecting a marked state of impaired motivational-effort. Moreover, such data illustrate how even very subtle, ecologically common, reductions in sleep time across the population can have non-trivial societal and economic ramifications.
DATA BLITZ SESSION 2
TALK 1: Neural Correlates of Aesthetic Engagement with Literature
Yuchao Wang, Haverford College, Penn Center for Neuroaesthetics
Literary stories contain artistic value in what is said and how it is said. Reading literature typically affects readers emotionally: they may experience empathy, suspense, and even physical sensations like chills because of the wording used. To better understand which brain networks are co-opted when laypeople engage with literature, we modeled functional magnetic resonance imaging (fMRI) data collected while people listened to literary stories. We tested the hypothesis that emotional and literary experiences of narratives are neurally dissociable: emotional arousal during story engagement would correlate with activity in sensory and social-processing areas, whereas comprehension of literary language would correlate with language and attention areas. We collected ratings of emotional arousal (N=27) and literariness (N=27) of two stories from two independent groups of raters to create two regressors (emotional arousal and literariness). These regressors were used to parametrically model blood-oxygen-level-dependent signal changes in 52 participants listening to the same two narratives. The fMRI results show that emotion and literariness of narratives are processed by independent brain networks. Highly emotional content leads to increased activation in bilateral superior frontal gyri, right medial superior temporal sulcus, and the left temporo-parietal junction, an area predominantly involved in social cognition. Literary language in the narrative activates left perisylvian areas, including the angular gyrus and inferior frontal gyrus, both of which process and integrate semantic information during language comprehension. Overall, our results support our hypotheses and shed light on the function of, and interaction between, attention, social-understanding, and semantic networks during literary engagement.
TALK 2: EEG frequency-tagging of apparent biological motion dissociates action and body perception
Guido Orgs, University of London
Language and music are hierarchically organized, with clearly identifiable components such as phonemes or notes that are combined to produce sentences or rhythms. The structure of human action, on the other hand, is less clear. Based on dance choreography, we propose that action sequences can be broken down into a series of movements from one body posture to another. By frequency-tagging fluent, non-fluent, and random apparent biological motion sequences, we show that brain activity entrains not only to the presentation rate of individual body stimuli, but also to the repetition of specific body postures within the stimulus stream, and to the rhythm of whole-body movements. Entrainment to individual body postures was strongest for non-fluent sequences and across bilateral occipitotemporal electrodes, consistent with processing of static body postures in extrastriate visual cortex. In contrast, neural responses to the rhythm of movement were strongest for fluent sequences and across occipitotemporal and fronto-central electrodes. Body- and movement-related neural responses were absent for random posture sequences without compositional structure. Instead, these sequences evoked brain activity only at the visual presentation frequency. Frequency tagging of apparent biological motion thus reveals multiple brain representations of observed actions, driven by change in visual surface form, by repetition of static body postures, and by the rhythm of movement. Our results are consistent with a hierarchical process of action perception that builds complex rhythmical action sequences by connecting fluent trajectories between static body postures.
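The frequency-tagging logic can be sketched by recovering tag frequencies from a synthetic EEG trace. The 6 Hz presentation rate and 1.5 Hz movement rhythm, the amplitudes, and the SNR statistic are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

fs, dur = 250, 20                  # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
f_image, f_move = 6.0, 1.5         # assumed tag frequencies

rng = np.random.default_rng(5)
eeg = (np.sin(2 * np.pi * f_image * t)         # response to each body stimulus
       + 0.5 * np.sin(2 * np.pi * f_move * t)  # entrainment to movement rhythm
       + 0.5 * rng.normal(size=t.size))        # broadband noise

spec = np.abs(np.fft.rfft(eeg)) / len(t)       # amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def snr_at(f, spec, freqs, n_side=10):
    """Amplitude at f divided by the mean of neighboring bins, a common
    frequency-tagging statistic (adjacent bins are skipped)."""
    i = int(np.argmin(np.abs(freqs - f)))
    side = np.r_[spec[i - n_side:i - 1], spec[i + 2:i + n_side + 1]]
    return float(spec[i] / side.mean())

snr_image = snr_at(f_image, spec, freqs)
snr_move = snr_at(f_move, spec, freqs)
```

Both tag frequencies stand out clearly above the noise floor, which is the signature the frequency-tagging approach relies on.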
TALK 3: Neural representation of social craving following isolation in the human brain
Livia Tomova, Massachusetts Institute of Technology
Social motivation has been conceptualized as a fundamental drive in humans (Baumeister 1995, Sheldon 2009), yet little is known about neural mechanisms underlying the motivation to re-engage in social interaction after acute isolation, here called social craving. In a mouse model, dopamine neurons of the dorsal raphe nucleus code for the drive to re-engage in social interactions following acute social isolation (Matthews 2016). Here we used functional magnetic resonance imaging (fMRI) to investigate the neural representation of social craving in the human brain. Socially connected and extroverted typically-developing human adults (n=40) were acutely socially isolated and subsequently underwent fMRI scanning with a cue-induced craving paradigm. We found that isolation causes self-reported feelings of social craving and loneliness (average increase of ~30% after 10 hours of isolation). Furthermore, the caudate nucleus (i.e., a part of the striatum and a core area of the motivation circuitry (Berridge 2012)) showed increased activation in response to social cues following isolation. These results are in line with evidence that activity in dorsal striatum in humans is correlated with craving for food and drugs after deprivation (Volkow 2002, 2006; Noori 2016). However, within the same individual participants, we found partially non-overlapping neural responses to food craving after 10 hours of fasting. Our results suggest both overlapping and distinct neural representations of social craving and food craving after deprivation.
TALK 4: Assessing the relationship between alpha power and hemodynamic activation during emotional mental imagery
Maeve Boylan, University of Florida
Mental imagery is a critical factor in the etiology and maintenance of many psychiatric disorders, as well as a component in gold-standard treatment options. The neural underpinnings of mental imagery are, however, poorly understood. At the level of hemodynamics, research has demonstrated that mental imagery activates emotion networks of the brain. Scalp-recorded EEG has also shown an increase in endogenous activity in the alpha band during mental imagery tasks. To define the neurophysiology of mental imagery, we combined the information from blood oxygen level-dependent (BOLD) signals with concurrently recorded EEG alpha-band power during a visual script-driven mental imagery task in a sample of 20 healthy participants. Ongoing analyses demonstrate that established BOLD activation patterns during mental imagery were replicated with the addition of EEG recordings: BOLD was selectively enhanced during emotional, compared to neutral imagery, in medial prefrontal cortex, precuneus, and cerebellum. These changes were associated with alpha power on a trial-by-trial basis, as well as with the overall level of alpha-power change across trials. Together, findings suggest that alpha-power changes in the scalp-recorded EEG may represent a sensitive index of emotional imagery.
TALK 5: Effects of interactive social context on visual attention to social partners
Ashley Frost, Texas State University
Social neuroscience research involving eye-tracking predominantly examines attention to static faces or prerecorded dynamic stimuli. Recent evidence suggests that live context influences gaze behavior, but few studies have directly compared how different social contexts alter social attention. This study used a well-controlled within-subjects design to compare gaze across three contexts: (1) face-to-face, (2) webcam-based interactions, and (3) prerecorded videos. In all contexts, participants (N=52) were eye-tracked during a two-turn interaction with a confederate where participants spoke freely for one minute on a pre-selected neutral topic before listening to the confederate speak for one minute about a similar topic. Participants also completed measures of personality and anxiety. We hypothesized that context would influence attention to the social partner's face and that more naturalistic contexts would better predict real-world traits. A three-way repeated measures ANOVA found significant main effects for context (live, webcam, prerecorded), speaker role (talker, listener), and region (eyes, mouth). Overall, participants looked at the eyes roughly three times more than the mouth, engaged in more face-looking when listening than talking, and showed the most attention to their partner's face in the live condition. No interaction terms were significant. Increased eye-looking in contingent (i.e., webcam, live) but not video interactions related to increased extraversion. In contrast, social anxiety related to reduced eye-looking in only webcam and video exchanges. Our systematic comparison between contexts underscores the importance of naturalism in social neuroscience and suggests that the clinical utility of eye-tracking measures may be improved by considering contingent, face-to-face paradigms.
TALK 6: Relationship of mood, cognition and physical activity in Depression: Remote symptom monitoring using wearable technology
Nathan Cashdollar, Cambridge Cognition
The ubiquity of digital technology in our day-to-day life, such as mobile phones and wearable technology, has allowed researchers to capture the daily fluctuations in mood and cognition that many individuals with psychiatric disorders experience. Here we demonstrate the feasibility of remotely collecting cognitive data in individuals suffering from Depressive Disorder, as well as the relationship of these high-frequency cognitive assessments with the remote monitoring of symptoms and physical activity. This was a six-week study in 30 adults with mild-moderate depression, stabilized on antidepressant monotherapy. Daily remote data collection (via an Apple Watch) consisted of a working memory assessment (N-back) up to 3 times a day, self-reported mood assessments, step count and average heart rate. Participants showed an initial improvement in N-back performance, but reached a learning plateau on average 10 days after study onset. N-back performance also showed a significant diurnal (time-of-day) effect, and step counts were lower at the beginning and end of each week. Higher step counts overall were associated with better N-back learning, and increased daily step count was associated with better mood on the same and following day. Daily N-back performance covaried with self-reported mood after participants reached their learning plateau. The current results support the feasibility of deploying remote symptom monitoring techniques via wearable technology in psychiatric populations and establish methods for synthesizing high-frequency cognitive data, brief mood and biometric data in order to create sensitive digital profiles of clinical symptoms.
TALK 7: The distinct roles of prefrontal GABA and glutamate/glutamine in two types of cognitive control
Boman Groff, University of Colorado Boulder
This study tested whether individual differences in neurotransmitter levels in lateral prefrontal cortex are associated with brain activation during a cognitive control task. More specifically, we tested the hypothesis that individual differences in levels of excitatory (glutamatergic) neurotransmitter in the dorsolateral prefrontal cortex (dlPFC) are associated with brain activation when maintaining a task goal in the presence of competing information (goal-maintenance), while inhibitory (GABAergic) neurotransmitter levels in ventrolateral prefrontal cortex (vlPFC) are associated with brain activation when selecting information from multiple task-relevant options to guide responding (goal-related selection). In a sample of 47 adult women, PRESS and MEGAPRESS sequences were used to determine resting GABA+ and Glutamate/Glutamine (GLX) concentrations (accounting for grey matter) in two separate voxels (dlPFC, vlPFC). Participants then underwent functional magnetic resonance imaging while performing a verb generation task with a 2-by-2 design that separately manipulated the difficulty (high, low) of goal-maintenance and goal-related selection. Concentration of GABA+ (controlling for GLX) in vlPFC was associated with differences in activation between the high and low goal-related selection conditions in occipital and temporal regions. In contrast, GLX concentration (controlling for GABA+) in dlPFC was associated with differences in activation between the high and low goal-maintenance conditions in parahippocampal and inferior temporal regions. These findings are the first to show that individual differences in GLX and GABA+ in lateral prefrontal cortex are associated with brain activation during a cognitive control task, and that these relationships differ by region (dlPFC, vlPFC) and type of control mechanism required (goal-maintenance vs. goal-related selection).
TALK 8: Opposite lateralization for face recognition and gender perception
Ana Chkhaidze, UCSD; University of Nevada, Reno
The perception of boundaries between stimuli existing along a graded continuum of physical properties is referred to as categorical perception (CP). Divided field studies of color and shape perception suggest a relationship between left-lateralized CP for these stimuli and cerebral laterality for language. Unlike color and shape processing, face recognition is associated with right-lateralized circuits in the visual cortex and beyond. We used a divided field method to study two different kinds of face perception: (1) gender discrimination and (2) identity recognition. In four experiments, observers performed a visual search task on arrays of faces split between the left visual field (LVF) and the right visual field (RVF). The search required visual discrimination of faces by virtue of their identity, gender, or both. Our results showed categorical face perception effects in all three types of tasks. Crucially, however, hemifield biases for categorical perception of gender differed from those for categorical perception of identity. The well-known LVF advantage for face recognition was modulated by categorical versus non-categorical face perception when the change occurred across gender, whereas for identity categories the CP effect was stronger in the RVF. Our findings show that categorical effects on face recognition may depend on opponent cerebral laterality for language and the visual processing of faces.
TALK 9: Not always the face: differences between human and dog neural face- and conspecific-preference
Attila Andics, ELTE Department of Ethology, Budapest
What drives processing preferences when viewing individuals, and how these preferences evolved, are key questions of comparative social neuroscience. Preference for faces and for same-species stimuli are two well-documented organizing principles. Yet, the evolutionary origin and the relative role of neural face- and species-sensitivity in visual social processing are largely unknown. We performed awake fMRI with humans (n=30) and family dogs (n=20), presenting them with identical stimuli: short videos of human and dog faces and occiputs (back of the head). We compared neural sensitivity to conspecificity and faceness between the two phylogenetically distant mammal species. Across-species representational similarities were mostly driven by species-sensitivity for faces. Both humans and dogs showed stronger neural response to same-species stimuli, suggesting that conspecific-preference may be an ancient characteristic of the visual system. In contrast, while in humans we identified all previously reported face areas, in dogs we found no brain regions responding preferentially to faces. In humans, only the face areas involved in processing emotional information (e.g. the pMTG) preferred human to dog images and 89.2% of the visually-responsive cortex showed face-over-conspecific preference. In dogs, a bilateral temporo-parietal region (mid suprasylvian gyrus) showed increased response to dog relative to human images and 94.6% of the visually-responsive cortex showed conspecific-over-face preference. These findings suggest that visual social perception follows different organizing principles in humans and dogs. The central role of face-sensitivity in human (and primate) perception of individuals may not be general across all mammals.
TALK 10: Progression from feature-specific brain activity to hippocampal binding during episodic encoding
Rose Cooper, Boston College
The hallmark of episodic memory is recollecting multiple perceptual details tied to a specific spatial-temporal context. To remember an event, it is therefore necessary to integrate such details into a coherent representation during initial encoding. Here we tested how the brain encodes and binds multiple, distinct kinds of features in parallel, and how this process evolves over time during the event itself. We analyzed data from 27 subjects who learned a series of objects uniquely associated with a color, a panoramic scene location, and an emotional sound while functional magnetic resonance imaging data were collected. By modeling how brain activity relates to memory for upcoming or just-viewed information, we were able to test how the neural signatures of individual features as well as the integrated event changed over the course of encoding. We observed a striking dissociation between early and late encoding processes: left inferior frontal and visuo-perceptual signals at the onset of an event tracked the amount of detail subsequently recalled and were dissociable based on distinct remembered features. In contrast, memory-related brain activity shifted to the left hippocampus toward the end of an event, which was particularly sensitive to binding item color and sound associations with spatial information. These results provide evidence of early, simultaneous feature-specific neural responses during episodic encoding that predict later remembering and suggest that the hippocampus integrates these features into a coherent experience at an event transition.
TALK 11: Using mobile EEG to assess brain health and performance
Olav Krigolson, University of Victoria
In recent years it has become possible to use mobile electroencephalographic (mEEG) technology to collect research grade data (Krigolson et al., 2017). The recent advances in mEEG data quality and ease of use have opened the doors for a wide range of real-world applications for human neuroimaging, in addition to allowing large scale data collection. Here, we present the results from a large sample size study (n = 1000) wherein we used a combination of event-related potentials (ERPs), time-frequency analysis (FFTs), and machine learning classifiers to examine relationships between neural data and cognitive fatigue. In this study, participants played two simple games on an Apple iPad using PEER research software, a visual oddball task and a two-choice gambling task, while mEEG data were recorded from a MUSE headband. In line with previous research, our results demonstrate that diminished ERP responses (P300, reward positivity) are associated with increased cognitive fatigue. Further, using a combination of multivariate regression and machine learning classifiers we were able to greatly increase the explained variance in our results (Discriminant Analysis Classifier with Bayesian Optimization, 91.6% accuracy) and produce a more accurate prediction of cognitive fatigue level. Importantly, we demonstrate two key things here. One, we provide further evidence for the use and validity of mEEG in research. Two, we provide an important building block for cognitive fatigue detection capability, something that could clearly have huge impact in a variety of real-world applications.
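The classification step described in this abstract can be illustrated with a minimal, hypothetical sketch (this is not the authors' code or data): a linear discriminant analysis classifier trained on synthetic P300 and reward-positivity amplitudes, where smaller ERP amplitudes mark the "fatigued" class.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic stand-in data: per-session ERP features (P300 and reward-
# positivity amplitudes in microvolts); fatigue shrinks both amplitudes.
n = 200
fatigued = rng.integers(0, 2, n)
p300 = 8.0 - 3.0 * fatigued + rng.standard_normal(n)
rew_pos = 4.0 - 1.5 * fatigued + rng.standard_normal(n)
X = np.column_stack([p300, rew_pos])

# Cross-validated discriminant analysis, standing in for the study's
# Bayesian-optimized discriminant classifier.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, fatigued, cv=5).mean()
print(acc)  # high accuracy on these well-separated synthetic classes
```

The effect sizes and feature set here are assumptions chosen for illustration; the study's reported 91.6% accuracy came from its own features and optimization procedure.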
TALK 12: A Gaussian process model of human electrocorticographic data
Tudor Muntianu, Dartmouth
There is increasing evidence from human and animal studies that memory encoding and retrieval is supported by fast timescale network dynamics involving the coordinated activities of widespread brain structures. However, measuring these network dynamics directly in the human brain poses a substantial methodological challenge. In prior work, we developed a method for inferring high spatiotemporal resolution activity patterns throughout the brain, using recordings taken at only a small number of ECoG electrodes (Owen and Manning, 2017). The method, SuperEEG, builds a covariance model that describes how activity patterns throughout the brain are related as a function of their spatial location. We train the covariance model by stitching together recordings taken from a large number of patients and electrode locations. Once the covariance model has been fit, we can apply the model to ECoG recordings from a small number of locations to estimate activity patterns throughout the rest of the brain. In our prior work, we showed that the activity patterns estimated at held-out (unobserved) electrode locations were reliably correlated with the true (observed) activity recorded from those electrodes. Here we apply this same approach to two new large ECoG datasets. We first replicate our prior results, reliably estimating activity patterns from held-out electrodes across both patients and experimental tasks. In other words, the properties our approach leverages appear to be person-general and task-general. Then, we assess reconstruction quality across six frequency bands and broadband power; while quality remained stable across frequencies, the highest-quality reconstructions came from broadband power activity patterns.
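The estimation step this abstract describes, inferring activity at unrecorded locations from a fitted spatial covariance model, amounts to conditional Gaussian inference: x_u = K_uo K_oo^{-1} x_o. A toy numpy illustration follows; the 1-D positions and RBF covariance are assumptions standing in for the model SuperEEG actually fits, not the method's real parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 "brain locations" with a smooth spatial covariance
# (an RBF kernel over 1-D positions stands in for the learned model).
pos = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
K = np.exp(-0.5 * (pos[:, None] - pos[None, :]) ** 2)

observed = [0, 2, 4]  # locations with implanted electrodes
held_out = [1, 3]     # locations to reconstruct

x_obs = rng.standard_normal(len(observed))  # recorded activity

# Conditional mean of a zero-mean Gaussian: x_u = K_uo @ K_oo^{-1} @ x_o
K_uo = K[np.ix_(held_out, observed)]
K_oo = K[np.ix_(observed, observed)]
x_est = K_uo @ np.linalg.solve(K_oo, x_obs)

print(x_est.shape)  # one estimate per held-out location
```

Validation in the abstract corresponds to correlating such estimates at held-out electrodes with the activity actually recorded there.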
TALK 13: Predicting Depression from Speech Recordings: A Machine Learning and Feature Selection Approach
Siamak Sorooshyari, University of California, Berkeley
Features of recorded speech have been shown to be predictive of depression severity. However, little consensus exists on the appropriate combinations of voice features that should be used to successfully identify depression. The current study sought to find the voice features most relevant for an accurate classification of depression. Voice recordings and depression ratings (PHQ-9 scores) were remotely collected from 49 adult participants. Prosodic, phonetic and spectral voice features were extracted using two software packages: Praat and openSMILE. A support vector machine (SVM) was trained on various combinations of the voice features, and their accuracy in depression classification was evaluated. A leave-one-out (LOO) cross-validation analysis was used to assess the predictive capability of our methodology. Comparison between the performance attained with Praat and openSMILE showed that the optimal Praat set yielded nearly equivalent performance to the optimal openSMILE set using significantly fewer features. The results support the importance of pruning the feature space prior to training a machine learning algorithm, as a larger number of features does not necessarily result in superior classification. Collectively, these results provide encouraging evidence for remotely recorded speech as an effective means of predicting depression.
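The evaluation pipeline described here, an SVM over a candidate feature set scored with leave-one-out cross-validation, can be sketched as follows. The data are a synthetic stand-in; the feature count, labels, and kernel choice are illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in: 49 "participants" x 10 candidate voice features,
# with a binary label (e.g., a thresholded PHQ-9 score).
X = rng.standard_normal((49, 10))
y = ((X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(49)) > 0).astype(int)

# Leave-one-out: train on 48 participants, test on the held-out one,
# repeated 49 times; mean accuracy scores this feature combination.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print(scores.mean())
```

Feature-set pruning, as the abstract advocates, would repeat this scoring over feature subsets and keep the smallest subset whose LOO accuracy matches the full set.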
TALK 14: Military Blast Exposure and PTSD are Associated with Aging White Matter Integrity and Functioning
Emma Brown, VA Boston Healthcare System
Emerging evidence has demonstrated independent risks that history of military blast exposure (MBE) and PTSD pose for adverse health outcomes, including changes to brain microstructure and function. Given these findings, we evaluated (a) the association of MBE and diagnosed PTSD with white matter integrity indexed by diffusion tensor imaging; (b) the relationship of MBE and PTSD with neurocognitive function; and (c) whether neurocognitive function is associated with white matter alterations in a large veteran cohort. The sample consisted of OEF/OIF/OND veterans, aged 19 to 62 years (n = 191 MBE with PTSD; 106 MBE-only; 34 PTSD-only; 43 no MBE or PTSD). Delayed recall was measured by the Brief Visuospatial Memory Test (BVMT-R). PTSD diagnosis was determined by the Clinician-Administered PTSD Scale (CAPS-4). Voxelwise cluster-based statistics revealed a significant MBE and PTSD x age interaction on diffusion parameters, with the MBE and diagnosed PTSD group exhibiting a more rapid cross-sectional age trajectory towards reduced white matter integrity. We identified distinct regions of lower fractional anisotropy in those with MBE and PTSD than other groups (p < 0.05). MBE and PTSD demonstrated an indirect influence on delayed memory recall performance (p < 0.01). Delayed recall performance was associated with altered white matter integrity. We found that MBE and PTSD are associated with altered cross-sectional aging at the microstructural level and may confer risk for cognitive decline. Additional work examining neurobiological underpinnings of PTSD and longitudinal changes of brain tissue integrity after blast exposure will be important in developing effective interventions for returning veterans.
TALK 15: Linking hierarchical cortical gradients to cognitive effects of intracranial electrical stimulation in the human brain
Josef Parvizi speaking for Kieran Fox, Stanford University
For more than a century, intracranial electrical stimulation (iES) of brain tissue in awake neurosurgical patients has been known to elicit a remarkable variety of cognitive, affective, perceptual, and motor effects, including somatosensations, visual hallucinations, emotions, and memories. To date, a comprehensive, whole-brain mapping of these effects has not been attempted, nor has there been any effort to integrate patterns of iES effects with other models of large-scale cortical organization. Toward these aims, we analyzed the effects of iES at 1559 cortical sites in 67 patients implanted with intracranial electrodes. We found that intrinsic network membership and the principal gradient of functional connectivity strongly predicted the type and frequency of iES-elicited effects in a given brain region. While iES in unimodal brain networks at the base of the cortical hierarchy elicited frequent and simple effects (such as muscle twitches and phosphenes), effects became increasingly rare in heteromodal and transmodal networks higher in the hierarchy, and the elicited effects more heterogeneous and complex (e.g., complex emotional states and multimodal sensory experiences). Our study provides the first comprehensive exploration of the relationship between the hierarchical organization of intrinsic functional networks and causal modulation of human cognition with iES. Although iES has long played a seminal role in understanding human brain function, our study goes beyond prior work by showing that iES can shed light not only on local functional properties, but also global patterns of brain organization and their relationship with subjective experience.
DATA BLITZ SESSION 3
TALK 1: The striatal feedback response reflects goal updating
Ian Ballard, University of California, Berkeley
Decades of neuropsychological and neuroanatomical research have converged on the theory that the striatum is a gate: it selects between potential action or goal representations in cortex. In contrast, fMRI investigations often characterize the striatal BOLD response as a reward prediction error signal arising from midbrain dopaminergic inputs. However, prediction error is confounded with updating: if you discover that your decision resulted in a disappointing outcome, you must both represent that disappointment and update your behavior. We test whether apparent reward prediction error BOLD responses in the striatum are better described as goal updating responses, and reflective of gating functions rather than the activity of dopaminergic inputs. Subjects performed two tasks: a standard 2-arm bandit task (in which prediction error and updating are confounded), and a variant of the 2-arm bandit that dissociates goal updating from reward prediction error. We accomplish this by introducing conditions where losing money indicates the need to change goals. We find that in the traditional 2-arm bandit, where goal updating and reward prediction error are confounded, the striatal BOLD response is consistent with both interpretations. However, in the same subjects performing the task where they are de-confounded, the striatal BOLD response tracks goal updating and not reward prediction error. Specifically, the accumbens and putamen respond more strongly to losing than winning money when losing money is more informative for goal selection. These results suggest that the striatal feedback response reflects updating of cortical goal representations, consistent with the gating theory of striatal function.
TALK 2: Does combined decision-making training and tDCS produce generalizable cognitive benefits in healthy older adults?
Kristina Horne, University of Queensland
Much excitement has been generated about the use of brain stimulation techniques to ameliorate age-related neurocognitive decline. Existing studies suggest that transcranial direct current stimulation (tDCS) might enhance cognitive training effects. It remains unclear, however, whether benefits 'transfer' to other cognitive domains or persist over time. This study is the largest to date, and the first Registered Report (Stage 1 in-principle accepted at Nature Human Behaviour), to investigate the effects of a combined training and tDCS protocol on cognitive functions in older adults. We assigned 131 healthy participants, aged 60-75, to four demographically-matched groups. Each group received one of four protocols over five consecutive days: decision-making training and anodal tDCS over the left prefrontal cortex (PFC); decision-making training and sham tDCS (left PFC); training on a control task and anodal tDCS over the left PFC; or decision-making training and anodal tDCS over the visual cortex (control electrode location). Participants completed a comprehensive battery of eleven cognitive tasks and two ecologically valid questionnaires pre- and post-intervention and at one and three-month follow-up time-points. In contrast to young adults, anodal tDCS did not enhance training benefits in healthy older adults, perhaps reflecting structural and functional brain changes experienced in ageing. In addition, observed training gains did not transfer to other cognitive domains or everyday function at the group level. However, analysis of individual differences revealed that for individuals who received tDCS, the magnitude of training benefits was associated with performance gains on several transfer tasks at follow-up time-points, hinting at the possibility of transfer.
TALK 3: Age-related decline in resting state brain signal variability: Cause and Consequences
Poortata (Pia) Lalwani, University of Michigan, Ann Arbor
Brain signals as measured by fMRI vary considerably from moment-to-moment even in the absence of any task and this variability declines with age. However, there are significant individual differences in the brain signal variability. What are the behavioral consequences of these differences and what is their neurochemical basis? Based on computational and animal research we hypothesized that individual differences in GABA (the brain's major inhibitory neurotransmitter) might play a critical role. In order to investigate this hypothesis, we recruited 50 older and 50 young adults and measured 1) brain signal variability using resting-state fMRI, 2) GABA levels using MR Spectroscopy in the bilateral ventrovisual, auditory and somatosensory cortex, and 3) behavioral performance on standardized fluid processing tasks from the NIH toolbox. We also pharmacologically manipulated GABA activity in a subset of our sample by administering lorazepam (a benzodiazepine, known to potentiate GABA activity). We found that whole-brain signal variability was significantly lower in the older adults and was significantly associated with their fluid processing ability. GABA levels in the visual, auditory and somatosensory cortex were also reduced in the older group and were associated with brain signal variability even after controlling for age and tissue-composition. Finally, potentiating GABA activity with lorazepam significantly increased brain signal variability relative to a placebo. These results are consistent with the hypothesis that age-related declines in GABA levels cause age-related declines in brain signal variability which in turn contribute to individual differences in fluid processing abilities among older adults.
TALK 4: Sensory modality and information domain modulate behavioral and neural signatures of working memory interference
Justin Fleming, Harvard University
Recent evidence from functional magnetic resonance imaging has revealed interleaved sensory-biased regions in the lateral frontal cortex that are preferentially recruited during either visual or auditory attention and working memory (WM). These regions participate in sensory-biased cortical networks that can be flexibly recruited depending on information domain. Spatial auditory WM tasks recruit the visual-biased network, while temporal visual WM tasks recruit the auditory network. Using electroencephalography (EEG) and a WM interference paradigm, we assessed the behavioral costs and neural signatures of recruiting the same versus the complementary network during WM retention. Participants (N=20) were asked to remember spatial or temporal properties of auditory or visual stimuli. To explore effects of network interference, a second auditory task was sometimes presented during the retention period; this interfering task emphasized either spatial or temporal processing. Performance on the interfering task was worst when auditory information was being held in WM, reflecting a cost of increased load on the auditory network. In contrast, no behavioral costs of switching between the visual and auditory networks were observed. Neurally, we identified time-frequency-channel regions of interest (ROIs) in which the interfering tasks significantly altered oscillatory power. ROIs were found during the retention and probe task phases in the theta (4-7 Hz) and alpha (8-12 Hz) frequency bands. Within these ROIs, we observed differential signatures of WM depending on whether the sensory modality and information domain matched between the two tasks. These results help quantify the relative costs of loading one cognitive network versus switching networks mid-task.
TALK 5: Functional organization of hippocampus is altered by associative encoding and retrieval
Wei-Tang Chang, UNC at Chapel Hill
The hippocampus is critical for learning and memory and can be separated into anatomically-defined hippocampal subfields (aHPSFs), including subiculum, CA1, CA2/3, CA4 and dentate gyrus. However, the assumptions of within-subfield functional homogeneity and across-subfield functional dissociation are not supported by clear evidence. Data-driven approaches offer an alternative means of investigating hippocampal functional organization without a priori assumptions. Nevertheless, the relatively low spatial resolutions employed in previous studies precluded the examination of functional specialization across aHPSFs. Hence, we developed a functional Magnetic Resonance Imaging (fMRI) sequence on a 7T MR scanner with 1-mm isotropic resolution, a TR of 2s and brain-wide coverage. Healthy young adults were scanned at rest and during an associative memory task. We aimed to investigate: 1) how associative memory tasks alter the functional organization of the hippocampus, and 2) how functionally-defined hippocampal subfields (fHPSFs) connect with the rest of the brain. Using a spatially restricted hippocampal Independent Component Analysis (ICA) and k-means approaches, we observed that the fHPSFs were distinct from aHPSFs with the exception of CA1. Additionally, 30+ fHPSFs were identified during the encoding phase while only 5 fHPSFs were identified during the retrieval phase. More areas within the hippocampus were relatively inactive at retrieval than at encoding. For the brain-wide functional networks, primary sensory networks connected with selective fHPSFs while high-level association networks connected with the hippocampus more uniformly. Our analyses of the fine-grained functional segmentation and the respective functional networks hold great promise for applications in neurodegenerative disease.
TALK 6: Integrating MVPA and Connectivity in a Multiple Constraint Network to Bootstrap Brain Models
Chris McNorgan, University at Buffalo
Brain-based cognitive models draw on traditional general linear model fMRI analyses, which have been more recently complemented by multivariate pattern analyses (MVPA) and by connectivity analyses to identify regions supporting cognitive processes and the interactions between them. We describe a machine-learning approach that represents an explicit union of MVPA and functional connectivity, aiming to facilitate the integration of evidence afforded by these two analytic methods. Multilayer neural networks learned the real-world categories associated with macro-scale cortical BOLD activity patterns generated during a multisensory imagery task, while simultaneously encoding interregional functional connectivity in an embedded autoencoder. Our technique permits the MVPA and functional connectivity solutions to mutually constrain one another, and we argue that these Multiple Constraint Networks naturally generate models that best fit all available data. We find that functional connectivity encoding significantly improved MVPA classifier accuracy, and used the resulting models to simulate lesion-site appropriate category-specific impairments and identify semantic category-relevant brain regions. We conclude that data-driven Multiple Constraint Network analyses encourage parsimonious models that may benefit from improved biological plausibility and facilitate discovery.
TALK 7: Relationships Between Sleep Quality and Neural Reinstatement of Associative Memory in Young and Older Adults
Emily Hokett, Georgia Institute of Technology
Compared to young adults, older adults tend to have worse sleep quality and episodic memory. Older adults experience habitually disrupted sleep patterns and have difficulty binding and retrieving detailed associative memories. Sleep fragmentation may interfere with both encoding and retrieval. Individual differences in the degree of reinstatement of neural activity between encoding and retrieval support episodic memory accuracy. However, the association between individual differences in objectively-measured sleep quality and episodic memory at the neural level is largely unexplored, especially in older adults and diverse racial groups. Considering that racial/ethnic minorities report worse sleep quality than non-minorities, the degree of neural reinstatement between encoding and subsequent retrieval in racial/ethnic minorities could be associated with the degree of sleep fragmentation. Thus, the current study primarily aimed to determine whether sleep quality was differentially associated with behavioral memory performance and the underlying neural reinstatement of associative memory by age group and racial group. To explore this, we recruited a diverse sample of young and older adults, measured one week of their sleep quality using accelerometry, and recorded participants' EEG during an associative memory task. Older adults demonstrated worse associative memory than young adults, and Black adults experienced poorer sleep than White adults. Across age and racial groups, neural reactivation between encoding and retrieval for confidently-remembered word pairs was positively related to memory accuracy. Furthermore, sleep fragmentation was associated with reduced pattern similarity and reduced memory performance. Thus, poorer sleep quality corresponded with poorer associative memory accuracy and reduced memory-related neural reactivation.
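Encoding-retrieval neural reinstatement of this kind is typically quantified as pattern similarity, for example a Pearson correlation between the activity patterns evoked by the same item at encoding and at retrieval. A minimal sketch with hypothetical EEG feature vectors (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical feature vectors for one word pair at encoding and at retrieval
encoding_pattern = [0.2, 0.8, 0.5, 0.1]
retrieval_pattern = [0.25, 0.75, 0.55, 0.05]
similarity = pearson_r(encoding_pattern, retrieval_pattern)
# High similarity here would index strong reinstatement for this item
```

Per-item similarity values like this one could then be related to memory accuracy and to sleep fragmentation across participants.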
TALK 8: Stronger structural connectivity in the default mode network is associated with youthful memory in superaging
Jiahe Zhang, Northeastern University
'Superagers' are older adults who maintain youthful memory despite advanced age. Previous studies demonstrated that superagers have greater morphometric integrity and stronger functional connectivity in the default mode network (DMN) and salience network (SN), which contribute to their youthful memory performance. In this study, we used diffusion-weighted imaging to examine structural connectivity within the DMN and SN in 41 young adults (24 males, ages 18-35) and 40 older adults (24 males, ages 60-80). Superaging was defined as youthful performance (males: 13; females: 14) on the long delay free recall measure of the California Verbal Learning Test. We masked the DMN and SN and assessed the integrity of structural connectivity within them using fractional anisotropy. As predicted, within both the DMN and SN, superagers had higher fractional anisotropy compared to typical older adults (DMN: t = 2.51, p = 0.01; SN: t = 2.89, p = 0.01). Compared to young adults, superagers had weaker DMN fractional anisotropy (t = 2.53, p = 0.01) and similar SN fractional anisotropy (t = 0.11, p = 0.92). Higher fractional anisotropy within the DMN predicted better performance on both recall (r = 0.27, p = 0.07) and recognition memory tasks (item recognition: r = 0.48, p < 0.01; associative recognition: r = 0.43, p = 0.01) in older adults. Completing a link between morphometry and functional connectivity, these structural connectivity results extend the multimodal characterization of superaging.
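Group comparisons of fractional anisotropy (FA) such as those reported above are commonly computed as two-sample t-tests. A minimal Welch's t-statistic sketch with hypothetical FA values (illustrative only, not the study's data or its exact test variant):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic comparing mean FA between two groups."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    # Unbiased sample variances (n - 1 denominator)
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical mean-FA values per participant in each group
superagers = [0.48, 0.50, 0.47, 0.51, 0.49]
typical_older = [0.44, 0.45, 0.43, 0.46, 0.44]
t = welch_t(superagers, typical_older)
# A large positive t indicates higher FA in the first group
```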
TALK 9: Parallel Networks Dissociate Episodic and Social Functions Across Distributed Cortical Regions Within Individuals
Lauren DiNicola, Harvard University
Recent within-individual analyses revealed that two parallel networks exist within the bounds of the canonically-defined default network (DN). These networks (A and B) are juxtaposed but distinct across distributed cortical zones (e.g., Braga & Buckner 2017 Neuron). Preliminary work examining these networks' functions revealed that Network A, linked to parahippocampal cortex, is preferentially recruited for Episodic Projection tasks (e.g., remembering), while Network B, linked to the temporoparietal junction, preferentially subserves Theory of Mind (ToM) tasks (DiNicola et al. 2019 bioRxiv). The present work sought to quantify whether such distinctions were limited to specific regions, aligning with prior, group-averaged work, or were present across distributed network zones. In an initial dataset, we scanned 6 individuals 4 times each and replicated a functional dissociation between Networks A and B, which preferentially subserved Episodic Projection and ToM tasks, respectively. Using a trial-level approach, we estimated 60 Episodic Projection and 40 ToM contrasts per network, for 5 distributed cortical regions within each individual. Across individuals, 18 of 30 region-specific tests found significant network by domain interactions (60.0%). 5 individuals showed interactions in 3 or more regions, including those along the midline previously considered DN hubs. Equivalent analyses of null data yielded only one false positive result (3.3%). After establishing analysis procedures, we replicated the approach and findings in an independent sample of 6 additional individuals (70.0% of regions show interaction effects; 0 false positives). These results refine understanding of how parallel, distributed networks in association cortex are organized to support task processing demands.
TALK 10: Moment-to-moment and individual differences in spontaneous lapses of attention at encoding predict subsequent memory
Kevin P. Madore, Stanford University
The ability to sustain attention prior to an experience may impact the event's encoding and later remembering. Extant research indicates that experimentally induced blocks of full vs. divided attention impact episodic encoding and subsequent memory. We recorded concurrent EEG+pupillometry (N=80) during a goal-directed encoding and retrieval task to answer two related questions: How do spontaneous lapses of attention at the trial level relate to goal coding and goal-directed behavior during learning, and subsequent memory? Are trait-level differences in memory partially explained by differences in the ability to sustain attention? During incidental encoding, subjects classified objects via either a conceptually- or perceptually-cued goal; subsequent memory was assessed via source and item recognition. In addition, subjects completed a separate attention go/no-go task (gradCPT). During encoding, moment-to-moment pre-trial tonic lapses of attention assayed from posterior alpha power (8-12Hz) and pupil diameter significantly predicted RT slowing for object classifications; these effects were partially mediated by the strength of goal coding, assayed from a midfrontal ERP cluster. These multimodal lapse markers also significantly predicted subsequent source memory for the cued objects at retrieval, partially mediated by changes in established difference-due-to-memory effects at encoding from a midfrontal ERP cluster and RT. At the individual level, we further observed that no-go errors on the independent attention task, and neural lapsing and RT variability at encoding, were significantly negatively related to memory discriminability. These results indicate that moment-to-moment and individual differences in attention lapsing partially account for why we sometimes remember and sometimes forget.
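The posterior alpha-power marker above is a spectral measure: the signal's power in the 8-12 Hz band during the pre-trial window. A naive DFT-based sketch with a synthetic signal (illustrative only; real EEG pipelines use tapered spectral estimators, and the sampling rate here is an assumption):

```python
import math

def band_power(signal, fs, fmin, fmax):
    """Naive DFT power summed over bins in a frequency band (e.g., 8-12 Hz alpha)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if fmin <= freq <= fmax:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re ** 2 + im ** 2) / n ** 2
    return power

fs = 100  # Hz, hypothetical sampling rate
t = [i / fs for i in range(200)]  # 2-second pre-trial window
alpha_signal = [math.sin(2 * math.pi * 10 * x) for x in t]  # 10 Hz oscillation
beta_signal = [math.sin(2 * math.pi * 20 * x) for x in t]   # 20 Hz oscillation
# Only the 10 Hz signal carries appreciable power in the 8-12 Hz alpha band
```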
TALK 11: Transfer effects of musical training to speech salient temporal features: improved sensitivity to VOT
McNeel Jantzen, Western Washington University
Previous research has shown that musicians have enhanced selective attention and increased sensitivity to acoustic features of speech, facilitated by musical training and supported, in part, by right-hemisphere homologues of established speech processing regions of the brain (Jantzen et al., 2014; Jantzen & Scheurich, 2014). Here we provide evidence that musical training enhances the processing of acoustic information for speech sounds via improved discrimination and enhanced sensitivity to speech differing in voice onset time (VOT) after completion of a musical training program. A pre/post-training perceptual mapping procedure consisted of a synthetic speech continuum ranging from the voiced, unaspirated, alveolar [d] to the voiceless, unaspirated, alveolar [t]. During perceptual mapping, subjects identified the stimuli (2AFC) and judged how good the stimuli were as exemplars of each of the two categories. One-hour musical training sessions occurred over an 11-day period and focused on melodic and harmonic aspects of music (ear training). Musical training effects and the organization of acoustic features were reflected in the EEG, as observed in the location and amplitude of the ERPs. Results show that the early neural response to VOT was both faster and greater following musical training. Behavioral results indicate that the pattern of performance differed as a function of whether or not subjects became more sensitive to the VOT distinction. Moreover, it may not be the specific similar/homologous components of music that transfer to speech (e.g., spectrotemporal features), but rather the precision required for musical competence that produces enhanced attention and sensitivity to salient speech features.
TALK 12: Hierarchical statistical learning: Behavioral, neuroimaging, and neural network modeling investigations
Cybelle Smith, University of Pennsylvania
How does the brain encode contextual information at different temporal scales? When processing familiar sensory and semantic input, cortex is sensitive to input further into the past along a posterior to anterior gradient (Hasson et al. 2015). To investigate how we learn new hierarchical temporal structure, we designed a novel paradigm employing statistical learning that can be used to map neural contributions to contextual representation at different time scales. Over four behavioral experiments (N=72), we demonstrate that humans are sensitive to transition points among both low- and high-level sequential units during exposure to sequences of abstract images (fractals). However, results may be attributable to low-level learning of image trigrams. Thus, we altered the paradigm to more effectively disentangle learning of nested order information at slow and fast temporal scales. One of eight context cue images is presented multiple times, and embedded in this stream are paired associate images. Critically, pairwise contingencies depend on both the identity of the context cue (fast temporal scale) as well as the time since the previous context shift (slow temporal scale). We have found that multi-layer recurrent neural networks trained to predict the upcoming image in this paradigm encode order information at shorter time scales at lower levels (closer to perceptual input). Planned neuroimaging work will test the idea that brain regions similarly spatially segregate these timescales. In particular, we anticipate that the hippocampus will represent these hierarchical timescales on an anterior-posterior gradient and that prefrontal cortical regions will be engaged along a lateral-medial gradient.
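The statistical-learning contrast above, between transitions within a sequential unit and transitions at unit boundaries, can be illustrated with simple transition-probability estimates. A toy sketch with hypothetical image labels (not the study's stimuli): within-pair transitions are fully predictable, while the boundary after a pair is not.

```python
from collections import Counter, defaultdict

def transition_probs(sequence):
    """Estimate first-order transition probabilities from an item sequence."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for prev, ctr in counts.items()}

# Hypothetical stream built from pairs (A,B) and (C,D) in varying order
stream = ["A", "B", "C", "D", "A", "B", "A", "B", "C", "D", "A", "B"]
probs = transition_probs(stream)
# Within-pair transition A->B is perfectly predictable; the boundary after B is not
```

A learner sensitive to these statistics would show surprise (or slowed responses) at the low-probability boundary transitions but not within pairs.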
TALK 13: This sounds good! Hurdling and tap-dancing re-afferences are processed differently in the brain
Nina Heins, University of Muenster, Germany
It is congruent with our everyday experience that most of our actions produce sounds. So far, however, it is unclear whether these action sounds are used as auditory feedback to evaluate the quality of action execution and are therefore important for motor control. For this study, we trained our participants in two sound-producing actions, one with intentional action sounds (tap dancing) and one with incidental action sounds (hurdling), and showed them point-light videos of their own actions in a functional Magnetic Resonance Imaging (fMRI) experiment. We examined the diverging influence of action sound omission on action evaluation (via action performance rating scores) and on the neuronal processing of these two action sound types, especially regarding the question of whether auditory predictions are generated whenever the sound is removed. Findings suggest that the brain relied on auditory predictions during tap dancing and on visual predictions during hurdling. Auditory predictions manifested in the supplementary motor area (SMA), whose activity correlated both positively with rating scores and negatively with primary auditory cortex activity when sound was removed from tap dancing videos. For these videos, we suggest that a generative model of the expected sound was delivered by the SMA, leading to attenuation in primary auditory areas. Our results contribute to a deeper insight into the importance of action sounds for understanding, evaluating and improving our action execution and action perception in sports and in everyday life.
TALK 14: Patients with hemispherectomies evince intact visual recognition behaviors
Michael C. Granovetter, Carnegie Mellon University
Cortical resection is an efficacious treatment for pharmacoresistant epilepsy. For markedly intractable epilepsy, a hemispherectomy (resection or disconnection of an entire cerebral hemisphere) may be performed to alleviate a patient's seizures. Prior studies have reported moderate maintenance or recovery of post-operative cognitive behaviors in resection patients. However, to date, the visual recognition abilities of patients with full hemispherectomies have not been systematically investigated. Here, 16 hemispherectomy patients aged 8 to 38 years (5 right, 11 left) performed a visual discrimination task. Pairs of stimuli (words in one block, faces in another) were consecutively presented for brief intervals, and participants reported whether the images were identical or different. Stimuli were presented at central fixation for patients (who were all hemianopic) and in the left or right visual fields for controls (such that images would likely be most immediately registered in a single hemisphere). Remarkably, accuracy for 15 of 16 patients with left and right hemispherectomies was comparable to that of age-matched controls viewing stimuli in their left and right visual fields, respectively, as determined by the Crawford & Howell individual-subject analysis method. This was verified with a mixed-effects analysis showing no effects of stimulus category (words versus faces) or group (patients versus controls) on accuracy. A mixed-effects analysis did reveal longer reaction times for right resection patients than for controls viewing words in the right visual field, but only among participants less than 15 years old. Altogether, these findings suggest that patients with complete hemispherectomies are able to maintain or recover critical visual recognition behaviors.
TALK 15: Learning and Reward through a New Musical System
Matthew Sachs, Columbia University
Previous studies have shown that the process of learning musical structure relates to preference and liking. However, it remains unclear how this relationship develops de novo, given that we are exposed to music, and develop preferences, very early in life. The Bohlen-Pierce (BP) scale, a unique musical system, can be exploited to help resolve this issue. While most musical scales recur at the octave, the BP scale recurs around the 3:1 frequency ratio. Here we compare and contrast the effects of preference and familiarity using new music in the familiar Western scale and in the BP scale. In Experiment 1, 100 participants rated newly composed BP musical clips for liking, musicality, and familiarity. Ratings were higher for musicality than for liking and familiarity, and there were significant positive correlations among liking, musicality, and familiarity ratings. In Experiment 2, participants listened to the BP clips and newly composed clips in Western musical scales and rated them for liking and familiarity during fMRI. Behaviorally, liking and familiarity ratings were similar between the two styles. When comparing fMRI activity during BP clips against new Western clips, greater activity was found bilaterally in Heschl's gyri, SMA, and DLPFC (p