Last month at dinner with friends, I heard about a training program some employers are using to reduce implicit bias in hiring – involving, among other things, acknowledging before interviews any potential sources of bias the interviewer may have through past experiences with particular individuals or groups of people. The goal, to increase diversity in the workplace, is important, but the science of implicit bias has yet to make clear recommendations on how best to address it.
Many of the efforts described by my friends seem to miss the point that implicit bias forms largely through associations learned while we are young and operates outside our conscious control. This connection, between implicit bias and learned associations, is key to understanding not only how implicit bias arises in the brain but also how we can best reduce this unconscious bias as a society.
Recognizing that implicit bias occurs is an important first step, says Elizabeth Phelps of NYU, who is delivering a public address on racial bias and decision-making at the annual CNS meeting this April in New York. A growing body of research by Phelps and others is showing that the neural circuits involved in implicit bias overlap with those in emotion learning and lead to decision-making consequences – giving a possible route to intervention through emotion regulation techniques.
While the expression of explicit negative racial attitudes has dropped dramatically among Americans over the last 50 years, implicit bias persists, as seen in courts, the healthcare system, hiring, and elsewhere. For example, Jennifer Eberhardt of Stanford University has shown that the more stereotypically black a convicted person's facial features appear, the harsher the sentence that person is likely to receive. Phelps has come at the challenge from a neuroscience perspective, starting in a seemingly unlikely place.
She first began thinking about implicit bias while studying fear learning in the amygdala early in her career. Neural processing in the amygdala triggers chemical changes in brain cells that produce our natural defenses in the face of a threat; the amygdala is also involved in emotion’s influence on cognition. At the time of those early studies, researchers had modeled fear learning in animals but not in people; this was before the widespread use of fMRI, which made studying it in humans quite challenging.
In patients with amygdala damage, Phelps noticed an interesting pattern: When a stimulus was paired with a shock (as in classic Pavlovian fear conditioning), the patients did not show increased arousal at the sight of the stimulus, yet they knew that the stimulus meant they would be shocked. “So they had perfect knowledge of what would happen but they didn’t show me in this implicit expression that they have that association,” she says. The observation paralleled work on implicit, compared to explicit, bias and led her to collaborate with social psychologist Mahzarin Banaji of Harvard University.
Banaji, with Anthony Greenwald, had pioneered work with the Implicit Association Test (IAT), showing that we have unconscious biases that cause slower reaction times when trying to pair together, for example, a black face with a good word and a white face with a bad word. Phelps wanted to understand what was happening at the neural level, so she and Banaji analyzed activity in the amygdala while study participants viewed black and white faces; they also collected physiological measures and IAT data from the participants. They found that people’s implicit, but not explicit, negative attitudes toward members of another racial group correlated with greater activity in the amygdala.
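The IAT’s core logic is a reaction-time comparison: responses are slower in blocks where the required pairings run against a learned association. The article does not detail Greenwald and Banaji’s exact scoring algorithm, but a simplified effect score can be sketched as the mean reaction-time difference between pairing conditions, scaled by variability (all reaction times below are hypothetical):

```python
import statistics

def iat_effect(compatible_rts, incompatible_rts):
    """Simplified IAT-style effect score: the difference in mean reaction
    time (ms) between the 'incompatible' and 'compatible' pairing blocks,
    divided by the pooled standard deviation of all trials. A positive
    score means the incompatible pairings were slower. This is an
    illustrative measure, not the published IAT D-score procedure."""
    diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return diff / pooled_sd

# Hypothetical trial data: slower responses when the pairing
# conflicts with a learned association.
compatible = [650, 700, 680, 720, 690]
incompatible = [820, 870, 840, 880, 850]
score = iat_effect(compatible, incompatible)  # positive -> slower incompatible block
```

The scaling by pooled standard deviation keeps the score comparable across people who respond faster or slower overall.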
“That linked up the emotion learning world and the implicit bias world,” Phelps says. Since those early studies, she has been working to use data about emotional learning in the brain to help learn more about racial bias and how and when it is expressed.
In one study using neuroeconomic models, for example, Phelps and colleagues sought to test the effects of implicit bias on decision-making. While implicit bias had been documented in some situations, such as treatment decisions by doctors, research had not yet established the overlap between implicit bias and emotion in economic decision-making. The goal was to “show reliable, quantifiable changes in decisions and that implicit bias engages the same neural systems that you would find when just emotion influences decision-making,” Phelps says.
Neuroeconomic models are a framework for understanding decision-making that combines classic behavioral economic tasks with quantitative tools and neuroscience techniques. The study used the standard behavioral economic trust game, in which researchers give participants money to invest with one individual or another based on the individuals’ faces. If the participant invests with someone, the money gets multiplied, but the recipient can choose whether to give back some of the money to the investor or keep it all.
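The payoff structure described above can be sketched in a few lines. The article does not give the specific amounts or multiplier Phelps’ study used; the endowment, threefold multiplier, and return amounts below are hypothetical but follow the standard trust-game form:

```python
def trust_game_payoffs(endowment, invested, multiplier, returned):
    """Payoffs in a standard trust game: the investor sends part of an
    endowment; the amount is multiplied before reaching the recipient,
    who then decides how much to send back."""
    assert 0 <= invested <= endowment, "cannot invest more than the endowment"
    pot = invested * multiplier          # invested money grows in transit
    assert 0 <= returned <= pot, "cannot return more than was received"
    investor_payoff = endowment - invested + returned
    recipient_payoff = pot - returned
    return investor_payoff, recipient_payoff

# Example: $10 endowment, $5 invested and tripled to $15;
# the recipient returns $7, keeping $8.
print(trust_game_payoffs(10, 5, 3, 7))  # (12, 8)
```

The structure makes the stakes of trust explicit: investing more creates a larger shared pot, but only the recipient’s choice determines whether the investor comes out ahead.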
In classic trust game studies, people with amygdala damage rate faces as generally trustworthy and invest money readily, whereas participants without amygdala damage tend to converge on which faces look more trustworthy. In the study on implicit bias, the researchers varied the race of the potential investment recipients.
“We were able to show that whether or not individuals are likely to trust or not trust people of different races does correlate with their implicit bias,” Phelps says. “So someone who feels a stronger pro-white bias is going to invest more money with anonymous white participants than anonymous black partners, whereas somebody who shows a sort of roughly equal bias basically shows no trust bias.”
The study showed that implicit bias engages the same brain systems involved in trust decisions and emotions – the amygdala and the striatum. The findings are consistent with other research suggesting that the amygdala codes evaluative information and values through connectivity with the striatum.
Making these neuroscience links to what has traditionally been in the realm of social psychology has been an important step in debunking the notion, Phelps says, that the brain divides up how it processes social versus cognitive information. “The brain does not divide that way,” she says. “The fact that we can make these links means that we can take advantage of our knowledge of emotional learning to actually enhance our knowledge of social learning and implicit bias.”
The work is leading the way toward possible interventions to curb implicit bias. Phelps thinks that emotion regulation strategies, such as those used in cognitive behavioral therapy, may one day be able to help. She has done one study in which she and colleagues taught people a story that counters stereotypes to see if it changed their behavior. Results from that first study suggest the method has promise, but more research is needed.
While scientists work to harness this research to develop such techniques, the best thing people can do to reduce implicit bias, Phelps says, is to recognize that none of us is immune. “We all walk around with this idea that we’re perfectly in control of everything we do and especially where we’re making judgments about other people,” Phelps says. But that is not the case. She points out that women, for example, are as likely to be gender biased as men in their judgments.
“We all grew up in the same environment with the same learning associations growing up,” she says. “Let’s acknowledge that is the case and then try to fix it.”
Learn more about this work in a Nature Neuroscience review paper co-authored by Phelps. And join us April 2 at 1pm for Phelps’ keynote address for the CNS annual meeting at the Hilton Midtown Hotel in New York City.
-Lisa M.P. Munoz