Rewriting Life
Why Aren’t Hearing Aids Better?
Researchers believe hearing aids are ripe for improvement in the coming years.
Hearing aids can’t restore someone’s auditory abilities the way glasses can restore 20/20 vision. Glasses work by refocusing the light hitting your retina; they help because the rest of your eye’s sensory apparatus (the parts that sense the light and convert it into signals for the brain) is still working. But people who need hearing aids typically have deeper problems with their ears. They may have lost some of the thousands of hair cells in the inner ear that translate sounds into electrical signals, or the nerve cells that ferry those signals to the brain may be damaged.
A standard hearing aid works like this: a microphone picks up sound and converts it into an electrical signal. The signal is amplified and sent to a tiny speaker, which turns it back into sound and delivers it to the ear. Without at least some surviving hair cells in the inner ear to pick up that sound, the device won’t work.
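To make that chain concrete, here is a minimal sketch in Python; the sample rate, test tone, and gain figures are illustrative, not taken from any real device (real hearing aids also amplify different frequency bands by different amounts):

```python
# The basic hearing-aid chain: microphone signal in, amplified signal out.
import numpy as np

def amplify(samples: np.ndarray, gain_db: float = 20.0) -> np.ndarray:
    """Boost a mono signal by a fixed gain, clipping to the speaker's range."""
    gain = 10 ** (gain_db / 20.0)          # convert decibels to a linear factor
    return np.clip(samples * gain, -1.0, 1.0)

# A stand-in for the microphone: one second of a quiet 440 Hz tone.
rate = 16_000
t = np.arange(rate) / rate
mic = 0.01 * np.sin(2 * np.pi * 440 * t)

speaker = amplify(mic, gain_db=30.0)       # roughly 32x louder, sent to the ear
```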
But while there are fundamental limits to how well a hearing aid can re-create sounds, improvements over the current technology are still possible.
Fixing feedback
Until recently, hearing aids had to block the ear canal; it was the only way to separate the microphone and the speaker enough to avoid terrible squeaky feedback. If you’ve ever plugged your ears, you know how uncomfortably loud your own voice can sound. About 10 years ago, manufacturers largely solved the feedback problem by developing technology that could detect feedback and create an opposite sound signal to cancel it out. This opened the door to more comfortable open-fit hearing aids that don’t completely block the ear canal.
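The cancellation idea itself is easy to sketch. In the toy Python loop below, the path by which speaker output leaks back into the microphone is assumed to be known in advance; a real device has to learn that path adaptively, but the fix is the same subtraction:

```python
# Toy feedback cancellation: predict the sound leaking from speaker to mic
# and add its opposite. The leak path (gain and delay) is assumed known.
import numpy as np

rate = 16_000
n = rate  # one second of audio
speech = 0.1 * np.sin(2 * np.pi * 200 * np.arange(n) / rate)  # outside sound

gain = 8.0                   # amplifier gain
leak_gain, delay = 0.2, 40   # assumed speaker-to-mic acoustic leak
# Loop gain is 8.0 * 0.2 = 1.6: without cancellation, any sound would
# circulate mic -> speaker -> mic and grow into a squeal.

speaker = np.zeros(n)
for i in range(n):
    leaked = leak_gain * speaker[i - delay] if i >= delay else 0.0
    mic = speech[i] + leaked               # the mic hears speech plus feedback
    clean = mic - leaked                   # cancel feedback with its opposite
    speaker[i] = np.clip(gain * clean, -1.0, 1.0)
```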
Sorting signal from noise
Another thorny problem for hearing aids is distinguishing important sounds, like speech, from background noise, like traffic or the murmur of other conversations. With the advent of digital hearing aids in the late 1990s came the ability to more easily program a device for different auditory environments: speech or music, for example. A speech program might maximize intelligibility at the cost of a less enjoyable sound, whereas a music program might sacrifice some of the crispness of speech, most of which sits in the higher frequencies, in exchange for a more pleasant sound across a wider range of frequencies.
Modern digital hearing aids take this several steps further: they have speech-recognition algorithms and can alter their settings automatically. Without any input from the user, they detect the auditory environment and switch to the most appropriate program, or even blend programs (half speech and half music, for example).
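Here is a rough illustration of what that blending could look like. The “speechiness” detector and the two gain programs below are invented for the example; real detectors are far more sophisticated:

```python
# Blend a "speech" program and a "music" program based on a crude estimate
# of how speech-like the current sound is. All numbers are illustrative.
import numpy as np

def speechiness(frame: np.ndarray, rate: int) -> float:
    """Score in [0, 1]: fraction of energy in the 300-3000 Hz speech band."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    band = (freqs >= 300) & (freqs <= 3000)
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-12))

# Hypothetical per-band gains (low, mid, high bands), in decibels.
SPEECH_PROGRAM = np.array([0.0, 12.0, 18.0])  # boost bands that carry speech
MUSIC_PROGRAM = np.array([6.0, 6.0, 6.0])     # gentler, flatter boost

def choose_gains(frame: np.ndarray, rate: int) -> np.ndarray:
    """Weight the two programs by the environment (0.5 gives half and half)."""
    w = speechiness(frame, rate)
    return w * SPEECH_PROGRAM + (1 - w) * MUSIC_PROGRAM
```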
Ongoing research into designing better directional microphones may also help sort signal from noise. This is tricky because hearing aids are so tiny, but groups led by Ronald Miles at Binghamton University, Neal Hall at the University of Texas, Miao Yu at the University of Maryland, and Scotland’s University of Strathclyde are designing directional microphones inspired by the ears of a parasitic fly.
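The fly’s trick is mechanical (its two eardrums are physically coupled), but the general software principle behind a two-microphone directional pickup can be sketched with a standard delay-and-sum beamformer. To be clear, this is the textbook baseline, not the researchers’ fly-inspired design:

```python
# Delay-and-sum beamforming: delay one mic's signal so that sound arriving
# from the chosen direction lines up, then add. Aligned sound reinforces;
# sound from other directions partially cancels. Illustrative sketch only.
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second

def delay_and_sum(mic_a: np.ndarray, mic_b: np.ndarray,
                  spacing_m: float, rate: int) -> np.ndarray:
    """Steer a two-mic array toward sound arriving along the mic axis."""
    delay_samples = int(round(spacing_m / SPEED_OF_SOUND * rate))
    delayed_b = np.roll(mic_b, delay_samples)  # line up the on-axis wavefront
    delayed_b[:delay_samples] = 0.0            # discard wrapped-around samples
    return 0.5 * (mic_a + delayed_b)
```

The arithmetic shows why size is the enemy: with mics a centimeter apart, the needed delay is about 29 microseconds, less than half a sample at a 16 kHz rate, so it rounds away to nothing. The fly-inspired designs aim to wring strong directionality out of exactly such tiny spacings.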
Slow it down
Age-related hearing loss affects high frequencies more than low ones. So newer hearing aids can perform frequency transposition: they take high-frequency sounds and re-encode them at lower frequencies, where the listener still hears reasonably well. It’s not unlike what happens when you slow down a recording and the voices drop in pitch.
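A bare-bones version of that recoding can be written with a Fourier transform: take the spectrum of a chunk of audio, move everything above a cutoff down by a fixed amount, and convert back. The cutoff and shift below are invented for illustration; unlike slowing a tape, this lowers frequencies without stretching time:

```python
# Crude frequency lowering: shift spectral content from above `cutoff_hz`
# down by `shift_hz`, into a band the listener can still hear.
import numpy as np

def lower_frequencies(frame: np.ndarray, rate: int,
                      cutoff_hz: float = 4000.0,
                      shift_hz: float = 2000.0) -> np.ndarray:
    spectrum = np.fft.rfft(frame)
    bin_hz = rate / len(frame)                 # frequency width of one FFT bin
    cutoff = int(cutoff_hz / bin_hz)
    shift = int(shift_hz / bin_hz)
    lowered = spectrum.copy()
    # Mix the high-frequency content into bins `shift` lower, then silence
    # the original high band.
    lowered[cutoff - shift : len(spectrum) - shift] += spectrum[cutoff:]
    lowered[cutoff:] = 0.0
    return np.fft.irfft(lowered, n=len(frame))
```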
Personalization
Jeffrey DiGiovanni, head of the auditory psychophysics and signal processing laboratory at Ohio University, thinks hearing aids will eventually be tailored to each person’s ability to process sound and even to their short-term memory abilities, which can influence how much auditory information someone can process at once.
The Takeaway
Even if hearing aids can’t ever restore perfect hearing to people with damaged sensory mechanisms in their ears, there are promising avenues of research for improving the devices.
Thanks to Janet Lowenthal for this question. If you have one, send it to bigquestions@technologyreview.com.