Musicians’ virtuosity shows cues for hard of hearing

Scientists are looking at how musicians process speech in noisy settings to help people with hearing trouble. Image credit: Flickr/Anaïs

At a lively cocktail party, in a noisy bar or a busy restaurant, musicians are better than most at following conversations, and brain scans are revealing how this ability could be used to help the hard of hearing.

Music and speech share a rich harmonic complexity: both are built from notes or syllables, interleaved with bursts of silence and noise. Training teaches musicians’ brains to hear small differences in sounds and melodies.

A new method shows that individuals listen differently, and pinpoints the snippets within syllables that matter most for each person. It sketches an image of a sound – in time and frequency – so that scientists can visualise the cognitive cogs engaged during speech perception.

It also reveals that musicians tend to use the same cues in speech as everyone else, but much more consistently and with a higher degree of precision.

Looking at those who are best at something helps everyone improve, whether that be in sport, technology or craftwork. The same is true of listening – understanding why musicians can pick words out better in noisy places will show how to assist people with hearing difficulties. 

‘When you have speech and noise, the noise can hide part of the signal, but still you usually manage to understand enough,’ said Dr Fanny Meunier, who led the research project called SPIN, funded by the EU’s European Research Council.

Key to her discoveries is a method to pinpoint exactly where in the sound spectrum cues hide. She used statistical techniques borrowed from brain imaging to track the hearing system in musicians and others as they processed speech with noise in the background, and was able to show that, for a word or syllable lasting 200 milliseconds, most people rely on cues just 10 to 50 milliseconds long to identify it.
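The article does not detail the analysis itself, but the idea of cutting a 200-millisecond syllable into cue-sized windows can be illustrated with a minimal sketch. All values here – the sample rate, the pure-tone stand-in for a syllable, the 20 ms window length – are hypothetical choices for illustration, not parameters from the SPIN project:

```python
import numpy as np

fs = 16000                      # sample rate in Hz (hypothetical)
duration = 0.2                  # a 200 ms, syllable-length signal
t = np.arange(int(fs * duration)) / fs
signal = np.sin(2 * np.pi * 220 * t)   # pure tone standing in for a recorded syllable

# Slice the 200 ms signal into short analysis windows on the
# 10-50 ms scale at which listeners' cues were reportedly found.
win_ms = 20
win = int(fs * win_ms / 1000)   # 320 samples per 20 ms window
windows = signal[: len(signal) // win * win].reshape(-1, win)

# Energy per window: a crude map of where acoustic information
# sits in time across the syllable.
energy = (windows ** 2).mean(axis=1)
print(windows.shape)            # (10, 320): ten 20 ms slices
```

In a real experiment, one would compare listeners’ responses when individual windows are masked or kept, to infer which slices carry the identifying cues.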

‘This is so short, tiny really, that it was difficult to isolate,’ explained Dr Meunier, who leads the auditory language processing research group at the University of Lyon, which is moving to the University of Nice, France. Her work shows that the beginnings and ends of syllables are the most critical.

‘Our results strongly suggest that increased selective auditory attention abilities, overtrained in musicians, can benefit speech perception in noise,’ she said.

It means that, with the right training, people who have trouble hearing could learn to zoom in on one particular frequency nugget in a complex harmonic, or improve their ability to discriminate timing. 

‘People who have trouble understanding speech don’t know what they are missing, which has made training hard,’ she said. ‘We didn’t know what cues they were missing.’

Dr Meunier hopes her research on the listening brain could also help improve cochlear implants and hearing aids, so that they better preserve the auditory cues that matter most.

For example, she has shown that transitions between sounds are key to categorising words in noisy places, so a hearing aid that accentuated them could make a big difference.


Developmental dyslexia is linked to trouble with writing, spelling and reading. However, dyslexic people also struggle to process speech in noisy places.

Usually researchers interested in language and speech processing run experiments in silence. Dyslexic people do not have any trouble in such ideal conditions. Throw noise into the room, however, and their performance goes down.  

Some experts thought they were not using the right cues, but Dr Meunier’s research challenges this. ‘It seems that most of the dyslexic people used the same cues as typical listeners, but with more variation, so it is possible we could train people to listen to speech and pick out the best cues each time,’ she said.

In fact, Dr Meunier’s laboratory showed that dyslexic people require a signal-to-noise ratio around three decibels higher than normal-reading individuals. The extra training of musicians pushes their abilities forward by roughly the same margin.
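To put that figure in perspective: a three-decibel change in signal-to-noise ratio corresponds to roughly a doubling of signal power relative to the noise. A minimal sketch of the arithmetic, using made-up power values purely for illustration:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels (power ratio)."""
    return 10 * math.log10(signal_power / noise_power)

# Doubling the signal power relative to the noise raises
# the SNR by about 3 dB.
base = snr_db(2.0, 1.0)          # ~3.01 dB
doubled = snr_db(4.0, 1.0)       # ~6.02 dB
print(round(doubled - base, 2))  # 3.01
```

So the dyslexic listeners in the study needed the speech to be roughly twice as powerful, relative to the background noise, to perform like normal-reading listeners.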

Watching the brain

Watching the brain as it works has become easier as technology to see what’s going on inside our heads has improved. Professor William Marslen-Wilson at the University of Cambridge, UK, is investigating how the brain processes speech; he focuses on the differences between native speakers who are listening to Italian, Polish, Russian, English or Arabic.

He has led a project called NEUROLEX, funded by the EU's European Research Council, to probe this inner mystery – how the brain deciphers language and which parts it uses. He has shown that our brains work in the same way across different languages.

So fast is the brain at interpreting languages that the professor has to slow down the ‘brain movies’ that show its electrical activity.
