CNS 2018 | Conference Videos

To kick off the 25th anniversary meeting of the Cognitive Neuroscience Society, Michael Gazzaniga (University of California, Santa Barbara) took us back to the beginning of the field, and then on a whirlwind tour through the history of thought on consciousness. How do neurons turn into minds? How does physical “stuff”—atoms, molecules, chemicals, and cells—create the vivid and varied worlds inside our heads? Despite a century of breakthroughs in neuroscience, these puzzles, already posed by the ancient Greeks, remain unsolved. Gazzaniga believes that understanding how consciousness works will help define the future of brain science and artificial intelligence, and close the gap between brain and mind.

Keynote Address: The Consciousness Instinct



Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?

DEBATE Moderated by David Poeppel

All areas of the sciences are excited about the innovative new ways in which data can be acquired and analyzed. In the neurosciences, there exists a veritable orgy of data – but is that what we need? Will the colossal datasets we now enjoy solve the questions we seek to answer, or do we need more ‘big theory’ to provide the necessary intellectual infrastructure? Four leading researchers, with expertise in neurophysiology, neuroimaging, artificial intelligence, language, and computation, will debate these big questions, arguing for the steps most likely to pay off and yield substantive new explanatory insight.

Eve Marder (Brandeis): The Importance of the Small for Understanding the Big

The brain employs highly degenerate systems that allow for resilience and robustness. These can be found in studies of large ensembles of neurons, and are likely to show up in all kinds of large-scale simulations and theoretical studies. Nonetheless, if one ever wishes to account for the behavior of large numbers of neurons, at some point it is necessary to go down to the cellular level of analysis to see which biological mechanisms are consistent with conclusions proposed at higher levels of analysis.

Jack Gallant (University of California, Berkeley): Which Presents the Biggest Obstacle to Advances in Cognitive Neuroscience Today: Lack of Theory or Lack of Data?

Science is a collection of methods and processes for constructing elegant theories that can explain and predict high-dimensional data. It is obvious that both theory and data are required. But at any point in time, progress is likely to be limited more by either a lack of theory or a lack of data. It is my contention that at the current time, progress in human cognitive neuroscience — our ability to construct powerful explanatory, predictive models — is limited more by a lack of data than by a lack of theory. This is because the human brain data currently available offer such a coarse view of brain function that they do not provide sufficient information to develop and test rich cognitive theories. Thus, most current cognitive theories predict neither human brain data nor complex behavior under naturalistic conditions well. New devices, new methods of measurement, and new experimental paradigms are required in order to support cognitive models that respect the complexity of brain structure and function.

Alona Fyshe (University of Victoria, British Columbia): Data Driven Everything

The structure of every organism, including humans, is the product of adaptation and evolution in the face of data. Clearly data is a powerful force, but in practice we will not have eons of data at our disposal. Does that necessarily mean we will need strong model priors? How far can we get with big-but-finite data?

Gary Marcus (NYU): Neuroscience, Deep Learning, and the Urgent Need for an Enriched Set of Computational Primitives

Large strands of AI and contemporary neuroscience are dominated by a quest to find a single computational primitive (or canonical cortical circuit) to rule them all, typically some version of hierarchical feature detection, first made popular by Hubel and Wiesel, and more recently by deep learning. At first glance, the success of deep learning seems to be an argument in favor of a homogeneous computational system. I argue, however, that deep learning is far more superficial than widely believed, and that both deep learning and models of neuroscience must be supplemented by a broad range of elementary computational devices.
