When we are around different people, our behavior changes. Some of it is intentional – like talking about things we have in common – but much of it happens spontaneously, without our even realizing it, such as folding our arms or scratching our noses when others do the same. But what about robotic or computer-generated behavior? New research has found that imitating others depends both on someone looking human and on us believing the behavior to be human.
“Our research seeks to understand if there are dedicated neural circuits for controlling social interactions with other humans compared to non-human agents,” says Richard Ramsey of Bangor University in North Wales. While scientists have previously determined the brain networks responsible for “automatic imitation,” they have yet to understand all the factors that determine when it occurs.
Ramsey’s interest in the topic is rooted in his distinctive accent from growing up in the northern English city of Leeds. “On arriving at university, I noticed that the way I spoke was being copied by other students that I met,” he explains. “This struck me as odd since many students and friends were making fun of the distinctive nature of my accent at the same time as spontaneously copying the way I made vowel sounds.”
With the ever-increasing presence of robotic and computer-generated behaviors in our lives (think Siri), Ramsey and his colleagues decided to test how interacting with such non-human agents affects our social imitation. “An interesting twist we added to our design was not only whether another agent looked like a human or a robot but also whether participants believed the agent they were seeing had human or non-human origins,” Ramsey says.
In the experiment, just published in the Journal of Cognitive Neuroscience, the researchers showed participants videos of a moving hand to induce automatic imitation while they lay in the fMRI scanner. Participants had two cues as to whether the hand was human. First, the hand looked either human or robotic. Second, although the movements were identical across the videos, the researchers told participants that some movements were generated by a computer algorithm, while others were copied from people using motion-capture suits.
Before watching the videos, the participants watched a 10-minute custom-made documentary that described the two techniques for producing digital movements. “The video incorporated a lot of video footage from Hollywood movies, professional motion capture labs, and even some stimuli and footage from our own lab to make it all more believable,” Ramsey says. “Participants believed that movements they would see later in the experiment could be made in fundamentally different ways.”
As predicted, both the human appearance and the belief that the movements were human were important in triggering automatic imitation. “For the first time, we show that one node in the imitation control circuit – the right temporoparietal junction – is tuned to regulate interactions with human more than non-human interaction partners,” Ramsey says. While imitation was the same when one or both cues to humanity were present, it dropped off when both were absent.
“In the future, as it is very likely that robots will play a larger part in everyone’s lives, these data provide evidence for the neurobiological mechanism that may control how we respond and interact differently with humans compared to robots,” Ramsey says.
-Lisa M.P. Munoz
The paper, “The Control of Automatic Imitation Based on Bottom–Up and Top–Down Cues to Animacy: Insights from Brain and Behavior” by André Klapper, Richard Ramsey, Daniël Wigboldus, and Emily S. Cross, was published online on April 17, 2014, in the Journal of Cognitive Neuroscience.