As the summer lawn mowers roar outside my window, I cannot help but think about how that sound is being processed in my brain. Like many people, I think of the process as quite hierarchical – the auditory nerve in my ear registers the sound, and my brain then deciphers the volume (it’s loud), the pitch, and where it’s coming from. Information like that can then combine with experience to help me understand and predict other sounds, whether another lawn mower or a similar leaf blower, fireworks for the upcoming July 4th holiday in the U.S., or all the variety of music I listen to daily. But this process is not as simple as it may seem.
“This hierarchy of music perception is quite neat, but not exactly true,” says Zofia Hołubowska of the Max Planck Institute for Human Cognitive and Brain Sciences. “The auditory system is extremely well interconnected, meaning that in between these steps that process different qualities of sounds, there are many loops, which allow the information to travel back and forth, modifying the way sounds are perceived.”
In a new study in the Journal of Cognitive Neuroscience led by Hołubowska, cognitive neuroscientists wanted to better understand that interconnectivity, specifically looking at how changing the location of sounds affected people’s perceptions of musical phrases. Participants in the study listened to excerpts of Johann Sebastian Bach’s compositions while, at random points in time, the location of the loudspeaker playing the sound changed. The researchers asked participants to press a button whenever they noticed a change and also recorded the participants’ brain activity to see how their brains registered these changes.
They found that the musical structure affected how people perceived changes in the location of the sounds. “Depending on musical context, or more specifically, if the change happened somewhere within a phrase – a small coherent melody within a piece of music – or at the boundary between two phrases, we found differing perceptions of an identical change in the sound source location,” Hołubowska explains. “Because we used EEG brain measures, we can also confirm that this difference arises at an early, perceptual level, before people are able to direct their attention to the task.”
I spoke with Hołubowska about these findings, her own interests in music, and what’s next for this line of work.

CNS: How did you become personally interested in studying music?
Hołubowska: I think I have two answers to that question – one professional, one personal.
If I were to answer professionally: When exploring the perception of sound, we quite often use very simple sounds, such as artificially created noise or very simple sound waves. However, that’s not what people listen to for most of their lives. Rather, we are constantly surrounded by very complex sounds: our environment, people talking, music playing. In research, we should take this complexity into account. At the same time, we face methodological concerns. When we add complexity to the sounds we use in our studies, we also scale up the number of potential reasons why we observe certain effects, so we have to control for all the potential explanations. Is our effect due to some acoustic parameter? Or is it because the participant has a particular emotional connection to the music they were presented with? Music, in particular, allows for experimental control while still capturing this richness. You can manipulate precise acoustic features while also introducing high-level structure, emotional content, or context. That’s why I find music such a powerful tool in auditory research.
On a personal level, the story is the reverse: I started studying music when I was six, and I’ve always loved it. I play and sing to this day, but more as a hobby than anything else. At the same time, I was drawn to science – first chemistry, which led me to neuroscience. Pursuing research in auditory neuroscience and music perception lets me bring together my two main interests: music and science. In a way, it feels like a natural convergence of passions I’ve had since childhood.
CNS: What is your favorite way to describe how sound processing works to nonscientists?
Hołubowska: I always like to think of sound processing as a very creative process. When you start thinking about the different sounds you can hear in your surroundings, you can probably identify multiple sources: the clock ticking, birds singing outside the window, and maybe a washing machine in the next room. However, this is not how your brain receives this information. What reaches your ears is a single, complex sound wave containing all of it. Your brain disentangles this big mixture, deciding what you hear and where each sound comes from. The separate sounds, in other words, are created in your brain: by the time they reach your ear, they are all mixed together.
CNS: What were you most excited to find in the new study?
Hołubowska: I am quite excited about the possibilities stemming from these initial results. This was a proof of concept that you can manipulate the perception of very simple acoustic qualities with musical context. It opens a whole new avenue for testing how different levels of musical complexity are stored in the brain, and how they can dynamically influence not only how we interpret what we hear but also actively change perception.
CNS: Were there any novel approaches in the study that you would like to highlight?
Hołubowska: What I am quite proud of is that, together with my supervisor, Prof. Marc Schönwiesner, we worked out a pretty clever paradigm for this experiment. A general problem in such studies is that when you introduce a manipulation in one place, you have to be careful not to change the perception of something else. In our case, we wanted to make some changes in the acoustics without disrupting the music, which sounds like a challenging task – music is based on acoustics, after all. The clever idea was to play with the spatial location of the sound. When the loudspeaker playing the sound changes slightly, it doesn’t really interfere with the musical content, yet it is a noticeable acoustic change. With this design, we were able to observe how a response to something unrelated to music (the spatial location of the sound) can be modified by the music itself.
Also, by combining EEG and behavioral tasks, we obtained two separate responses to the change: a conscious decision about the change, measured behaviorally, and a very fast, pre-attentive response, measured from brain activity.
CNS: What do you most want people to understand about this work?
Hołubowska: I think it is important to know that the brain doesn’t work in a simple, linear way, with one thing processed after another without interference. Rather, our perception is full of feedforward and feedback loops, in which one process can change how the same thing is perceived. This makes our brain work faster and more efficiently but, at the same time, it makes us prone to errors and misunderstandings.
Our results also shed some light on how the auditory pathway is constructed. For cognitive neuroscience, it is important to know that we still need to better understand which information is stored, and how and where predictions and biases are propagated.
CNS: What’s next for this line of work?
Hołubowska: Our findings help us understand the dynamics of information processing along the auditory pathway at different levels of complexity: from simple acoustic information to meanings, expectations, and emotions. Hopefully, with some follow-up studies, we will be able to better describe the neural mechanism and tell more about what happens where. Moreover, we are also interested in how dynamic the change is: does it happen at the level of coarser conditions within the music, or does our sensitivity to acoustic deviants change note by note?
In general, I would love to fully understand the process of unravelling incoming information, and how this information can be processed differently depending on our priors and predictions.
-Lisa M.P. Munoz