Permanent Present Tense: The Unforgettable Life of the Amnesic Patient, H.M.
Saturday, April 5, 2014, 2:00 – 3:00 pm, Grand Ballroom Salon A-F – OPEN TO THE PUBLIC
Speaker: Suzanne Corkin; Professor of Neuroscience, Emerita, Department of Brain and Cognitive Sciences, MIT
At age 27, Henry Molaison (H.M.) received an experimental operation to alleviate intractable epilepsy. Bilateral removal of his medial temporal lobe structures left him with dense amnesia but a preserved intellect. I will highlight results from 55 years of behavioral and imaging studies showing that short-term, long-term, declarative, and nondeclarative memory rely on different brain circuits. H.M. died in 2008, leaving his brain to Massachusetts General Hospital and MIT for further study. He continues to illuminate the science of memory.
Distributed circuits, not circumscribed centers, mediate visual recognition
Monday, April 7, 2014, 6:00 – 7:00 pm, Grand Ballroom Salon A-F
Speaker: Marlene Behrmann; Carnegie Mellon University
Increasingly, the neural mechanisms supporting visual cognition are being conceptualized as a distributed but integrated system, rather than a set of individual, specialized regions each subserving a particular visual behavior. Consequently, there is an emerging emphasis on characterizing the functional, structural, and computational properties of these broad networks. In this talk, I will present a novel theoretical perspective that elucidates the developmental emergence, computational properties, and vulnerabilities of integrated circuits, using face and word recognition as a model domain, and I will offer empirical data from developmental studies, ERP, fMRI, and neuropsychology to support this account. Additionally, I will argue that, rather than being disparate and independent, these neural circuits are overlapping and subject to the same computational constraints. Specifically, the claim is that both word and face recognition rely on fine-grained visual representations but, by virtue of pressure to couple visual and language areas and to keep connection length short, the left hemisphere becomes more finely tuned for word recognition and, consequently, the right hemisphere becomes more finely tuned for face recognition. Thus, both hemispheres ultimately participate in both forms of visual recognition, but their respective contributions are asymmetrically weighted.