Helping the Deaf Hear Music
A new test measures music perception in cochlear-implant users.
John Redden is a deaf professional musician. He can sing on key, harmonize, and hear musical intervals well enough to reproduce them. He does this with a cochlear implant, a computer chip surgically embedded in his skull. The chip drives 16 tiny electrodes threaded into his inner ear that stimulate his auditory nerve. It gets its auditory data from an external computer that sits on his ear and looks like a hearing aid. Instead of amplifying sound, though, the external unit digitizes it and sends it to the implant by radio through the skin.
The technology is a marvel, but people like Redden are a mystery. The software is designed for speech, so it only “listens” to the speech frequencies rather than the much wider range occupied by music. The device delivers the overall shape of sound rather than the detailed frequency information that is crucial to distinguishing one pitch from another.
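To make that concrete, here is a crude Python sketch of envelope extraction, the kind of "overall shape" a speech processor keeps. It is a generic illustration of the principle only, not the actual algorithm running in my implant or anyone else's; the window size, sample rate, and test tones are arbitrary choices of mine.

```python
import math

# Generic illustration: keep a sound's slowly varying envelope (rectify, then
# smooth with a moving average) and discard the rapid fluctuations that carry
# fine pitch information. Window size and sample rate are arbitrary choices.

def envelope(samples, window=32):
    """Rectify the signal and smooth it with a moving average."""
    rectified = [abs(s) for s in samples]
    smoothed = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

rate = 16000  # samples per second
# Two pure tones of equal loudness but different pitch (440 Hz and 494 Hz):
tone_a = [math.sin(2 * math.pi * 440 * n / rate) for n in range(1600)]
tone_b = [math.sin(2 * math.pi * 494 * n / rate) for n in range(1600)]

# Their envelopes come out looking much the same, which is roughly what the
# implant delivers; the pitch difference lives in the detail that is discarded.
print(round(max(envelope(tone_a)), 2), round(max(envelope(tone_b)), 2))
```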
Most people with normal hearing can tell the difference between pitches that are 1.1 semitones apart. (A semitone is the smallest pitch interval in Western music.) But a 2002 study at the University of Iowa found that most implant users can only distinguish pitches when they are at least 7.6 semitones apart.
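To put those numbers in frequency terms: in equal temperament, each semitone multiplies a frequency by the twelfth root of two. A short Python calculation shows the gap; the 440 Hz reference tone is just for illustration.

```python
# Frequency separation implied by a given number of semitones, assuming the
# standard equal-tempered ratio of 2**(1/12) per semitone.

def frequency_above(base_hz, semitones):
    """Return the frequency lying `semitones` above `base_hz`."""
    return base_hz * 2 ** (semitones / 12)

base = 440.0  # concert A, used only as an illustrative reference
for gap in (1.1, 7.6):
    print(f"{gap} semitones above {base} Hz is about "
          f"{frequency_above(base, gap):.1f} Hz")

# Roughly: 1.1 semitones above 440 Hz is about 468.9 Hz;
# 7.6 semitones is about 682.5 Hz.
```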
Some progress has been made in writing better software for music. I’m a cochlear-implant user myself. In 2005, I tried new software, called Fidelity 120, that simulated seven virtual electrodes between each pair of physical electrodes, not unlike the way that an audio engineer can make a sound seem to come from between two speakers. By targeting nerve populations between each pair of electrodes, the software gave me better frequency resolution. For me, it made a big difference. When I play a simulated piano with my old software, called Hi-Res, I can’t tell any three adjacent keys apart. But with Fidelity 120, I can. Music sounds fuller, richer, and more detailed.
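The general technique behind those virtual channels is usually called current steering: splitting a pulse of current between two adjacent electrodes in different proportions shifts the place of stimulation, and hence the perceived pitch, to points in between. The Python sketch below illustrates the idea only; it is not Advanced Bionics’ actual Fidelity 120 implementation, and the weighting scheme is an assumption of mine.

```python
# Illustrative sketch of current steering: one pulse of current is split
# between two neighboring electrodes; the split ratio determines which nerve
# population between them is targeted. Not a real device's implementation.

def steering_weights(alpha):
    """Split a pulse between electrode A and electrode B.

    alpha = 0.0 puts all current on A, alpha = 1.0 puts it all on B;
    values in between aim at nerve fibers lying between the two contacts.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie between 0 and 1")
    return 1.0 - alpha, alpha

# Seven evenly spaced virtual channels between a pair of physical electrodes:
for step in range(1, 8):
    weight_a, weight_b = steering_weights(step / 8)
    print(f"virtual channel {step}: {weight_a:.3f} on A, {weight_b:.3f} on B")
```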
But not everyone gets the same result. Redden, who does far better musically than I do, tried Fidelity 120 but still prefers Hi-Res. Such variation from one user to the next poses a real problem for researchers who want to develop better software. The experience of music is inevitably subjective. A Sex Pistols fan might tell you that a given piece of software lets her hear “Anarchy in the U.K.” better, while a Mozart fan might tell you that the same software does nothing for “Eine kleine Nachtmusik.” Subjective reports don’t give developers enough information to know whether they’re making progress.
I asked Jay Rubinstein, an otolaryngologist and cochlear-implant researcher at the University of Washington, to explain the problem. “Music is not just one entity,” he told me. “It consists of combinations of rhythm, melody, harmony, dissonance, and lyrics. One needs to break it down into its component parts in order to determine how well or how poorly someone can hear it.”
Rubinstein and his team of researchers at the University of Iowa and the University of Washington are doing just that. At the Association for Research in Otolaryngology’s annual meeting in Phoenix on February 17, they unveiled a computerized test called the Clinical Assessment of Music Perception (CAMP). A paper outlining their work has just been published in the February issue of Otology & Neurotology.
CAMP sidesteps differences in taste by stripping music down to three basic components: pitch, timbre, and melody. It then systematically assesses how well users perceive each.
Pitch perception is measured by playing two tones a short interval apart and asking the user to decide which is higher in pitch. When the user is correct, the program gives her tones that are closer together. When she is wrong, it gives her tones that are farther apart. Over a number of trials, the program works out the closest interval that she can reliably differentiate. It’s a relatively easy test, because all the listener has to do is determine which of two tones is the higher one.
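For the curious, here is a minimal Python sketch of the kind of adaptive “staircase” procedure that paragraph describes: the interval shrinks after a correct answer, grows after a miss, and the reversal points are averaged to estimate a threshold. The step rule, stopping criterion, and simulated listener are my own illustrative assumptions, not CAMP’s actual parameters.

```python
import random

# Illustrative adaptive staircase for a "which tone is higher?" task.
# A simulated listener stands in for a real subject: it answers correctly
# whenever the interval is at or above its threshold, and guesses otherwise.

def run_staircase(listener_threshold, max_trials=60):
    """Estimate the smallest interval (in semitones) the listener can judge."""
    interval = 12.0          # start a full octave apart
    reversals = []           # intervals at which the direction flipped
    last_direction = None
    trials = 0
    while len(reversals) < 8 and trials < max_trials:
        trials += 1
        correct = interval >= listener_threshold or random.random() < 0.5
        direction = "down" if correct else "up"
        if last_direction is not None and direction != last_direction:
            reversals.append(interval)
        last_direction = direction
        # Harder after a correct answer, easier after a miss.
        interval = max(0.1, interval * 0.8) if correct else min(12.0, interval * 1.25)
    return sum(reversals) / len(reversals) if reversals else interval

print(f"estimated threshold: {run_staircase(listener_threshold=2.0):.2f} semitones")
```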
Timbre perception is measured by playing the same note on eight different instruments. The user is asked to identify which instrument she hears. For example, the subject might be asked whether a particular note was played on a piano, a flute, or a saxophone. Timbre is perhaps the hardest aspect of music to define, but it provides a sensitive measure of a user’s ability to hear distinct but subtle differences.
Melody perception is measured in a highly unusual way. The test uses familiar tunes such as “Frère Jacques” and “Three Blind Mice.” But anyone would recognize “Frère Jacques” from the lyrics, so the lyrics are taken out. So is the rhythm: the timing and duration of the notes. What is left is a string of equally spaced notes of equal duration: the melody and nothing but the melody.
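In code, “the melody and nothing but the melody” looks something like the Python sketch below: the pitch sequence is kept and every note gets the same duration, so rhythm carries no clue. The note list is written out from the familiar tune, and the rendering details are my own illustrative choices, not CAMP’s actual stimuli.

```python
# Illustrative "isochronous melody": keep the pitches, drop the lyrics, and
# give every note the same duration so that rhythm provides no cue.

FRERE_JACQUES = [
    "C4", "D4", "E4", "C4",   # Fre-re Jac-ques
    "C4", "D4", "E4", "C4",
    "E4", "F4", "G4",         # Dor-mez-vous
    "E4", "F4", "G4",
]

NOTE_DURATION = 0.5  # seconds; identical for every note

def isochronous(notes, duration=NOTE_DURATION):
    """Yield (note, start_time, duration) with all durations equal."""
    for i, note in enumerate(notes):
        yield note, i * duration, duration

for note, start, dur in isochronous(FRERE_JACQUES):
    print(f"{start:4.1f}s  {note}  ({dur}s)")
```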
I was one of the test subjects in early trials of CAMP. The first time I took the test, I used my old software, Hi-Res. The pitch testing was fairly easy: I picked the higher of the two tones correctly about 75 percent of the time. I didn’t do as well on the timbre test, getting about 40 percent right.
But the melody test bewildered me. The very first tune sounded like beep boop beep beep boop beep bip beep boop. I stared at the computer. What the hell was that?
I “identified” it by choosing a song title at random, since I had no clue what it was, and waited for the next tune. Beep boop beep beep boop beep bip beep boop.
And the next. Beep boop beep beep boop beep bip beep boop.
Were they even different?
My score was less than 10 percent. I talked to Chad Ruffin, one of the designers of the test, who had a cochlear implant himself. How well, I wanted to know, would a person who hears normally do on the melody test? About 100 percent, he told me.
We did the test again with Fidelity 120. I did better on the melody test this time, scoring about 20 percent. That was closer to the mean score, which, Rubinstein told me, was 25 percent.
But John Redden had done far better. He gave me his score on the melody test: 100 percent. For a cochlear-implant user, that was an extraordinary score; his professional musical training probably helped. Richard Reed, a musician who had lost his hearing at 37 and gotten an implant at 46, had scored 86 percent. Only a handful of the subjects had gotten scores in that range.
Rubinstein says that people like Redden and Reed are proof of what’s possible. He told me, “I don’t want to lead people to unrealistic expectations of the ability to hear music with a cochlear implant, but in fact, the results are better than we expect.” The high scorers were rare, of course, but even many of the low scorers on the melody test had done well on the pitch-perception test, as I had.
The scores suggested that the nascent capacity to perceive pitch was there, waiting to be exploited with better software and better training. For example, Rubinstein’s lab has been experimenting with an algorithm using a phenomenon called stochastic resonance to improve music perception.
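Stochastic resonance is the counterintuitive effect in which adding a small amount of noise makes a weak, otherwise sub-threshold signal detectable. The toy Python sketch below shows the basic effect only; it is not Rubinstein’s algorithm, and the threshold, amplitude, and noise level are arbitrary choices of mine.

```python
import math
import random

# Toy demonstration of stochastic resonance: a weak sine wave that never
# crosses the detection threshold on its own does cross it once a little
# noise is added. Threshold, amplitude, and noise level are arbitrary.

THRESHOLD = 1.0
SIGNAL_AMPLITUDE = 0.8   # sub-threshold on its own
SAMPLES = 1000

def threshold_crossings(noise_level):
    """Count how many samples of the weak signal exceed the threshold."""
    count = 0
    for n in range(SAMPLES):
        signal = SIGNAL_AMPLITUDE * math.sin(2 * math.pi * n / 50)
        noise = random.gauss(0.0, noise_level)
        if signal + noise > THRESHOLD:
            count += 1
    return count

print("without noise:", threshold_crossings(0.0), "crossings")
print("with noise:   ", threshold_crossings(0.3), "crossings")
```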
So there was a good reason for the melody test to be hard, I realized. It was not impossible for implant users, merely very difficult. And it gave researchers a simple, easy-to-use, reliable way to measure the performance of new algorithms objectively. (A paper giving data on a larger group of subjects and demonstrating test-retest reliability is currently in review, Rubinstein says.)
The test also makes it possible to measure progress over time. If scores have doubled in 10 years, that will mean that implant users really are hearing the basic elements of music better.
The test would also make it easier for researchers to analyze the performance of “superlisteners” like John Redden so that, ultimately, they can develop new software to let other deaf people hear music better.
Michael Chorost covers implanted technologies for Technology Review. His book, Rebuilt: How Becoming Part Computer Made Me More Human, came out in 2005.