Understanding What Shapes Our Visual Reality: Q&A with William Newsome

Visual cortex (image license: http://commons.wikimedia.org/wiki/GNU_Free_Documentation_License)

“Anyone who works with monkeys on a day-in-day-out basis eventually asks him or herself a startling question: Exactly who is training whom here?”

 

Our brains, not our eyes, are largely responsible for our visual reality. Although the eyes capture and lightly process images, it is our brains that reconstruct what we have seen from electrical signals. And how our brains process those signals affects not only how we view the world but also how we make decisions.

Over the past 30 years, William Newsome of Stanford University has pioneered work to understand how electrical signals in the brain translate to what we visually perceive and how that impacts our decision-making processes. His work with colleagues has identified, among other discoveries, neurons in a specific brain region responsible for perceptual judgments about direction, and has shown that stimulating different brain areas with gentle currents can change our perception of direction – up, down, left, and right.

Newsome is now working on the connection between vision and decision-making. His lab conducts behavioral and physiological experiments in animals that are trained to perform selected perceptual or eye movement tasks. The researchers use tiny electrodes to record the activity of cortical neurons during the tasks, to better understand how visual cues affect the animals’ behavior. The work has extended into neuroeconomics, looking at how the brain judges return on an investment. In one experiment, monkeys are more likely to receive a reward when they move their eyes in a particular direction, which allows the researchers to train them to favor that direction.

Newsome will be delivering a keynote address at the CNS 20th anniversary meeting in San Francisco this April and shared with CNS a preview of what he will be discussing, as well as thoughts on working with monkeys and his perspectives on changes in the field over the past few decades.

William Newsome, Stanford

CNS: How did you become involved in studying the visual systems of monkeys?

Newsome: During my Ph.D. work at Caltech in the 1970s, the new excitement in cortical physiology was the discovery of multiple cortical areas in every major sensory system, as well as in the motor system. I was studying extrastriate [region of the cortex in the brain located next to the primary visual cortex] visual areas with John Allman and David Van Essen using classical techniques in anesthetized animals. I became convinced that we were never going to understand how these areas were actually working unless we could study them in awake, behaving animals who could indicate to us what they were seeing. The behaving monkey was a new neuroscience prep at the time, but I had read the early papers of Bob Wurtz and Mickey Goldberg on visuo-motor integration and attention. After visiting Wurtz’s lab, seeing a behaving monkey in action, and hearing the activity of superior colliculus neurons during eye movements, I was totally sold. That was to be my future.

CNS: Do you have a story or two you can share about some of the unique challenges in working with monkeys?

Newsome: Anyone who works with monkeys on a day-in-day-out basis eventually asks him or herself a startling question: Exactly who is training whom here? I once had a monkey train me to stop the experiment every 45 minutes or so and feed him a biscuit before we could continue. The training process is always a trade-off, of course. Each party gets something it wants.

Anyone who works with monkeys also becomes aware at some point that some of their cognitive processes work exceedingly differently from ours. The lack of generalization is a major point of difference. We can train a monkey to discriminate different directions of motion along one axis, and then present exactly the same task along a different axis, and the animal is completely flummoxed – a small leap that would be trivial for a human child is extremely challenging for a monkey. The ability to think abstractly and extract even the simplest principles appears very challenging to monkeys.

CNS: How does your work on visual systems relate to decision-making in monkeys and people?

Newsome: When Mike Shadlen and I started the decision-making work in the mid-1990s, it was a very straightforward extension of our vision work. We, together with Ken Britten and Tony Movshon, had spent several years relating visual signals recorded from the middle temporal visual area to the psychophysical behavior of the animal in a motion discrimination task.

To make a quantitative model of this relationship, it is essential to postulate a decision process – the precise mechanism by which sensory evidence is transformed into some sort of decision variable that guides behavioral choices. We made up a simple, reasonable model of the decision process, but we did just make it up. It was a small step, then, to wonder about the real decision process in the brain and whether we could study it physiologically. Since that time, the field has gone on to study many sorts of decision processes beyond the simple sensory case that Mike and I started with, but much of the reasoning and many of the experimental approaches are very similar to the simple case we began with.
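The kind of decision process Newsome describes – noisy sensory evidence accumulating into a decision variable until a choice is triggered – is often formalized as a drift-diffusion model. The sketch below is a minimal illustration of that general idea, not the specific model from the Shadlen–Newsome work; all parameter values are made up for demonstration.

```python
import random

def drift_diffusion_trial(drift, noise_sd=1.0, threshold=30.0,
                          max_steps=10_000, rng=None):
    """Accumulate noisy evidence until a decision bound is crossed.

    drift: mean evidence per time step (its sign encodes the true
    motion direction; its magnitude plays the role of coherence).
    Returns (choice, steps): choice is +1 or -1, steps is the
    number of accumulation steps taken (a proxy for reaction time).
    """
    rng = rng or random.Random()
    dv = 0.0  # the decision variable
    for step in range(1, max_steps + 1):
        dv += drift + rng.gauss(0.0, noise_sd)
        if dv >= threshold:
            return +1, step
        if dv <= -threshold:
            return -1, step
    # Timeout: force a choice from the sign of the accumulated evidence.
    return (1 if dv >= 0 else -1), max_steps

# Stronger evidence (larger drift) yields faster, more accurate choices.
rng = random.Random(0)
results = [drift_diffusion_trial(0.5, rng=rng) for _ in range(200)]
choices = [c for c, _ in results]
accuracy = choices.count(+1) / len(choices)
```

With a reasonably strong drift relative to the noise, nearly every trial terminates at the correct bound, mirroring the psychometric improvement seen at high motion coherence.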

CNS: What will you be talking about in your keynote address at the CNS meeting this April?

Newsome: I will present new work that emerged from a great collaboration between Valerio Mante, a physiology/behavior postdoc in my lab, and David Sussillo, a theory/computation postdoc in Krishna Shenoy’s lab. When Valerio and I conceived this project several years ago, our focus was on “gating” – the dynamical routing of information within the cortex based on contextual demands. We trained animals on a contextual decision-making task that leverages the accumulated base of knowledge that we have built over the years concerning sensory and decision-making circuitry in the cerebral cortex.

Our original goal was to study the attentional “gating” or “filtering” mechanisms that weed out irrelevant sensory information in each behavioral context, thus permitting only the relevant evidence to reach the frontal lobe decision circuits. To our great disappointment, all types of information – both relevant and irrelevant – hit the decision circuits, leaving us with a mish-mash of signals that appeared to defy rational understanding.

The first key to solving this puzzle was Valerio’s insight that these signals are nicely separable at the population level, even though they appear hopelessly mixed at the single-unit level. The second key was David’s insight that a recurrent neural network model could in fact solve the behavioral task with no prior filtering that we normally associate with attentional mechanisms. In doing so, the model nicely reproduces several aspects of the physiological data, and suggests an entirely novel mechanism for selection and gating within cortical circuits. My hunch is that this mechanism is going to be broadly important for understanding cortical information processing in the coming decades.
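The insight that signals hopelessly mixed at the single-unit level can be separable at the population level can be illustrated with a toy simulation (everything here is hypothetical and illustrative, not the analysis from the actual study): each model neuron carries a random mixture of a relevant and an irrelevant task variable, yet a simple least-squares readout of the whole population recovers each variable cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# Two task variables per trial: relevant evidence and an irrelevant distractor.
relevant = rng.standard_normal(n_trials)
irrelevant = rng.standard_normal(n_trials)
variables = np.vstack([relevant, irrelevant])        # shape (2, n_trials)

# Each neuron mixes both variables with random weights plus private noise,
# so no single unit reflects either variable in isolation.
W = rng.standard_normal((n_neurons, 2))
rates = W @ variables + 0.1 * rng.standard_normal((n_neurons, n_trials))

# A linear readout fit by least squares separates the mixed signals
# at the population level.
readout, *_ = np.linalg.lstsq(rates.T, variables.T, rcond=None)
decoded = rates.T @ readout                          # shape (n_trials, 2)

r_relevant = np.corrcoef(decoded[:, 0], relevant)[0, 1]
r_irrelevant = np.corrcoef(decoded[:, 1], irrelevant)[0, 1]
```

Despite every simulated neuron being a scrambled mixture, both decoded variables correlate almost perfectly with the true ones – a toy version of why population-level analyses can succeed where single-unit analyses defy interpretation.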

CNS: What are some of the most exciting lines of research now (and into the future) on visual perception systems?

Newsome: Pattern recognition – both in space and in time – is key in any sensory system. Understanding how patterns are recognized, remembered, and recalled will be an enduring line of research in the future.

Decision-making will certainly continue to be a vigorous field as our conceptual and empirical insights extend into multiple academic fields, as well as into the practice of medicine, business, law, education, and public policy. We have only begun to scratch at the problem of how information flow in the brain is rapidly and flexibly adjusted (gating) to meet changing behavioral demands. The key insights to many of these questions will lie at the circuit/systems level, not at the molecular or genetic levels.

I am told that roughly 2 minutes is required from the time a signal at the cell surface (e.g. a ligand binding to a receptor) can be converted into gene expression and the production of new protein within the cell. Within that 2-minute interval, animals can engage in several social situations that require very different translations of sensory and internal inputs into behavior!

Genetics and molecular biology are providing powerful new tools for dissecting neural circuits. But flexible, on-the-fly behavior arises from the dynamical activity of neural circuits as the animal interacts with a changing environment. The actual answers to our most interesting questions must be sought at that level.

CNS: How do those research areas differ from where the field was 20 years ago?

Newsome: Two things: new tools and new concepts. The tools are fabulous. When I was in graduate school, we could not imagine being able to peer inside the functioning human brain with MR [magnetic resonance] tools that are routine today, nor could we imagine being able to control the activity of genetically targeted neural circuits during real-time behavior in an experimental animal. And it was barely conceivable that we would one day be able to record from hundreds or thousands of neurons simultaneously. But the new tools are of little value in the absence of major new conceptual strides. This is where theory and computation have contributed critically to the current revolution in neuroscience.

The development of computational neuroscience as a field began seriously in the 1980s, but solid, “on the ground” contributions to our understanding of the nervous system were thin in the beginning. That has changed dramatically during the ensuing decades. Theory now exerts a strong impact at all levels, from interpreting action potential generation by single cells, to investigating codes and computation at the population level, to incisive modeling of behavior. This trend will certainly continue in the future.

The best systems neuroscientists today are familiar with the tools of linear algebra, signal processing, advanced statistical analysis, and the theory of stochastic processes – all in addition to the nuts and bolts of microelectrodes, microscopes, and anatomical tracers. In the old days, we ended our physiology papers with qualitative discussion sections about how the signals we described might contribute to behavior. Today, quantitative models that link physiological data to behavior are the norm, and the best models make predictions that can then be tested in new experiments.

The experiments that are being performed today by young scientists – junior faculty, postdocs, and grad students – are breathtaking in their ambition and technical sophistication. I, like many other neuroscientists of my generation, am not sure I could compete today! We must become even more sophisticated in the future, but the strides of the last decade have been enormous. Not to be trite, but it really is an exciting time to be a neuroscientist.

Media contact: Lisa M.P. Munoz, CNS Public Information Officer, cns.publicaffairs@gmail.com
