The Auditory Brainstem Implant: Restoration of Speech Understanding from Electric Stimulation of the Human Cochlear Nucleus

Abstract and Keywords

The auditory brainstem implant (ABI) is a surgically implanted device that electrically stimulates auditory neurons in the cochlear nucleus complex of the human brainstem to restore hearing sensations. The ABI is similar in function to a cochlear implant, but overall outcomes are poorer. However, recent applications of the ABI to new patient populations and improvements in surgical technique have led to significant improvements in outcomes. While the ABI provides hearing benefits to patients, the outcomes challenge our understanding of how the brain processes neural patterns of auditory information. The neural pattern of activation produced by an ABI is highly unnatural, yet some patients achieve high levels of speech understanding. Based on a meta-analysis of ABI surgeries and outcomes, a theory is proposed: a specialized sub-system of the cochlear nucleus is critical for speech understanding.

Keywords: cochlear nucleus, auditory brainstem implant, auditory prosthesis, speech understanding, low spontaneous neurons, small cell cap

Introduction: Short History of the Auditory Brainstem Implant

Back in the late 1970s it was already becoming clear that cochlear implants (CIs), that is, electric stimulation of the cochlea, could produce functionally useful hearing and even some open-set speech recognition without lipreading. There were clear cases of CI patients conversing over the telephone using only the CI (Bilger & Hopkinson, 1977; Eddington, 1983; Spahr & Dorman, 2004). This result seemed impossible at the time and was met with much disbelief and skepticism. Auditory scientists were studying the complexities of basic cochlear biophysics and physiology. Phase locking in auditory nerve fibers, for example, was all the rage, and researchers were trying to understand its role in auditory perception (Young & Sachs, 1979; Young, 2008). Basic auditory researchers thought that phase locking must be important for conveying fine timing information for speech, for pitch, and for localizing sounds in space. Yet for more than 15 years, patients with cochlear implants have mostly been able to understand speech well enough to converse on the telephone, and children implanted at an early age are able to learn to speak and to understand speech at high levels. The research community was skeptical of, and puzzled by, the fact that some CI patients could understand speech with the crude pattern of nerve activation created by a cochlear implant, because CIs provide neither fine temporal structure nor fine spectral structure.

One of the clinics that had pioneered cochlear implants, the House Ear Institute in Los Angeles, was also a large clinic that specialized in the removal of vestibular schwannomas (VS) from patients with neurofibromatosis type 2 (NF2). Although NF2 is a rare genetic disorder, Dr. Bill House, a neurotologist, and Dr. Bill Hitselberger, a neurosurgeon, performed this tumor-removal surgery in about 300 patients per year. It was traumatic to see NF2 patients lose all hearing in the tumor ear: the VIIIth nerve was removed along with the tumor, whose excision was necessary to save the patient's life. NF2 usually causes bilateral VS, so patients usually lose all hearing in both ears, as both tumors require surgery.

House and Hitselberger decided to try something akin to a CI in these patients in an attempt to preserve some degree of hearing following VS removal. Since there was no remaining auditory nerve, a CI was not an option. They placed a ball electrode into the cochlear nucleus, near the root entry zone of the cochlear nerve (Hitselberger et al., 1984). This first auditory brainstem implant (ABI) was placed in a volunteer in 1979, revised in 1981, and has functioned stably to the present day, 2018 (House & Hitselberger, 2001). Although this innovation was met with skepticism and criticism, it did provide auditory perception at the level of a single-channel CI: sound awareness, an aid to lipreading, and identification of environmental sounds (Brackmann et al., 1993; Otto et al., 2002).

Later, around 2000, a significant proportion (about 30%) of ABI patients in some clinics were achieving high-level open-set speech recognition with only the ABI. The best ABI patients were performing at levels comparable to the best CI patients (Colletti & Shannon, 2005).

How is this possible? The ABI stimulates the rostral surface of the cochlear nucleus with multiple electrodes. But the surface of the cochlear nucleus (CN) does not have a simple tonotopic structure. There are many anatomically and physiologically different sub-structures within the CN, and each has its own tonotopic organization (Moore & Osen, 1979). They are not all near the CN surface, and the tonotopic axes don’t all line up in the same orientation. While the ABI provides multiple points of stimulation, it is not clear how this stimulation maps onto the multiple tonotopic structures and multiple functional divisions within the CN. In spite of this, many ABI patients can understand speech well enough to have a relatively normal conversation by telephone. How can such a chaotic connection between the ABI electrode and the complex CN structure result in good speech recognition? In this chapter we discuss in more detail the factors that may have led to this good performance and propose a hypothesis to explain how it could happen.

The ABI Device

First, let us consider the design of a cochlear implant device. The design of a cochlear implant electrode is straightforward. The cochlea is organized tonotopically, so the CI electrode array consists of a linear array of 16–24 electrical contacts inserted into the scala tympani along the tonotopic axis. Electrodes near the round window activate neurons that normally respond to high-frequency sound, and those deeper in the cochlea activate neurons that normally respond to lower-frequency sound (Loizou, 1999).
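
This place-to-frequency relation is commonly approximated with Greenwood's place-frequency function, and a short sketch makes the electrode-to-frequency mapping concrete. The array geometry below (22 contacts at 0.75-mm spacing, deepest contact 25 mm into a 35-mm cochlea) is a hypothetical illustration, not the specification of any actual device.

```python
import numpy as np

def greenwood_hz(x):
    """Greenwood (1990) place-frequency map for the human cochlea.
    x is distance from the apex as a fraction of cochlear length (0..1)."""
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

# Hypothetical array geometry (not any real device): 22 contacts spaced
# 0.75 mm apart, deepest contact 25 mm from the round window (base).
cochlea_mm = 35.0
depth_mm = 25.0 - 0.75 * np.arange(22)      # insertion depth of each contact
x_from_apex = 1.0 - depth_mm / cochlea_mm   # deeper contact = closer to apex
for i, f in enumerate(greenwood_hz(x_from_apex), start=1):
    print(f"electrode {i:2d}: ~{f:6.0f} Hz")
```

With these assumed dimensions, the deepest (most apical) contact sits at a place tuned to roughly 500 Hz and the shallowest at roughly 5–6 kHz, illustrating how a linear array inherits the cochlea's tonotopic order.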

However, it is not clear how to design an electrode to connect to the multiple tonotopic dimensions of the CN. Following removal of the VS, the surgeon cannot see any anatomical landmarks for the CN. If there is a remaining 8th cranial nerve (VIIIn) stump, the surgeon can follow it to the CN, but that is not often the case. Anatomical studies in human cadaver specimens have shown that the CN lies immediately along the dorsal side of the lateral recess of the IVth ventricle. So an electrode array was designed as a mesh pad to be inserted into the lateral recess (LR) with the contacts facing dorsally and inferiorly. Over time additional electrodes were added, going from 1 to 3 to 8, finally evolving into the 21-electrode system used in the Cochlear Corp. ABI, which has electrodes arrayed on a 3 × 8 mm mesh substrate. Med-El produces an ABI with 12 stimulating contacts on a slightly smaller pad. The mesh substrate encourages fibrous in-growth that fixes the array in position, which occurs within 1 to 2 weeks. At one time an ABI with penetrating microelectrodes was designed and used in 10 patients, but the results were no better than with the surface electrode system (Otto et al., 2008).

In a CI there are very few non-auditory side effects from stimulation. Occasionally CI patients report a tingling or motor activation of the facial nerve. However, the CN is located in the brainstem, where many other neural structures could be activated by the electrode. Anatomy and experience show that the primary sites of non-auditory activation are the flocculus of the cerebellum (which is located on the opposite side of the LR from the CN) and the cerebellar peduncle, an uncrossed, ascending fiber tract in this location (Brackmann et al., 1993; Otto et al., 2002). Activation of the flocculus causes a sensation of the room jumping when stimulation occurs, presumably because it interferes with fine ocular motor control. Activation of the peduncle produces a tickle or tingling sensation along the side of the body ipsilateral to the ABI. Activation of peduncle fibers can produce action potentials in both directions, causing sensations in the brain as well as motor activation at the peripheral terminus. Neither site of non-auditory activation is harmful or especially unpleasant, and the sensations are eliminated if those electrodes are turned off. In a few instances more serious activation of the IXth nerve (IXn) was observed, causing unpleasant coughing and tightness in the throat. In those few cases it was found that the entire electrode array had moved post-surgically and was located outside the LR. The IXn passes near the mouth of the lateral recess of the IVth ventricle, so it can be activated if the electrode array is extruded from the LR. Even in some of these cases, the ABI can still be useful if the electrodes that activate the IXn are turned off and only the electrodes that remain inside the LR and produce auditory sensations are used.

ABI Outcomes

So what does an ABI sound like? Patients typically describe the sound of the ABI as bizarre and robotic. One patient said, “It’s like you’re trying to communicate with me by rattling a large sheet of plastic.” At first, most patients are not able to make any use of the sound at all. It may take many months, or even years, to adapt to the sound patterns produced by the ABI, which are probably quite different from the normal tonotopic patterns of neural activation coming from a normal cochlea. Presumably the plasticity of the brain must map the strange new ABI pattern of information onto the stored patterns learned from a lifetime of sound experience. Most ABI patients eventually learn to use the device, but the individual differences in outcome are large. Some patients can only use the ABI at a level similar to a single-channel CI, that is, awareness of sound, an aid to lipreading, and some identification of environmental sounds. A few patients achieve the ability to recognize speech without lipreading and can converse on the telephone without difficulty; one patient I know is a tradesman who relies on his cell phone for jobs. It is not clear what underlies the large differences in outcome across patients. Is it the electrode design and placement? Is it differences in the damage to the anatomy? Are there large individual differences in the ability to adapt to new patterns of information? Answers to some of these questions have developed over the last 20 years as ABI applications spread to more clinics and countries.

In the early 2000s Vittorio Colletti of Verona, Italy, reported CI-like speech recognition in ABI patients (Colletti & Shannon, 2005). These were not NF2 patients, but people who had lost their auditory nerves from non-tumor causes, such as trauma, infections, and so on. His initial reports were met with skepticism. However, further independent testing showed several non-tumor ABI patients with outcomes comparable to those of good CI users. This observation suggested that the cause of poor ABI outcomes might be related to NF2: it was thought that the disease itself, or the damage from tumor removal, accounted for the difference between good and poor performance with an ABI.

Soon, good outcomes were observed even in NF2 ABI patients. Two clinics in Germany, headed by the neurosurgeons Robert Behr and Cordula Matthies, showed about 30% of their NF2 ABI patients able to recognize open-set speech at better than 30% correct (Behr et al., 2007, 2014; Matthies et al., 2013, 2014), a result similar to that obtained by Colletti with non-NF2 ABI patients. These results suggest that NF2 is not the limiting factor. This new pattern of results may help answer the question of how the ABI can work so well in some patients.

Colletti also pioneered the application of the ABI to children (Colletti et al., 2014). Some children were born with no cochleas or auditory nerves, and others lost the nerve to a disease process or trauma. He observed a wide range of outcomes, with the poorest performance in children with genetic syndromic disorders, such as CHARGE. However, he also observed speech development and speech understanding in some of the children, predominantly in children with damage only to the cochlea and auditory nerve and without other disabilities. This finding shows that the information provided by an ABI is sufficient even for children with no prior auditory experience to learn auditory communication from the sound of the ABI alone.

A meeting was held in Munich in 2012 to pool results across ABI clinics and see what factors might be related to ABI outcomes. Patient demographic information was collated, as well as signal-processing parameters and surgical procedure information. There was little correlation of demographics or signal processing with outcomes, but a strong relation between outcomes and surgical procedure. The patients with the best outcomes were from two clinics that used a semi-sitting neurosurgical position during surgery. There was consensus that it was not the patient position per se that made the difference, but something related to the position (Behr et al., 2014). The best guess is that the semi-sitting position produces lower vascular pressure in the surgical area, so the surgeon needs little or no electrocautery because intra-operative bleeding is minimal. Cautery can have two detrimental effects: excitotoxicity and direct tissue damage. If the VIIIn is interspersed in the tumor (which is common in NF2 tumors), then cautery applied to the tumor may cause high levels of activation in the neurons, possibly resulting in excitotoxicity in the VIIIn as well as trans-synaptic injury in the CN regions receiving its projections. It is also common that anatomical landmarks are unclear in the case of large tumors, and application of cautery may impact the surface of the brainstem and CN directly. We will speculate in the next section as to how such excitotoxicity or surface damage to the CN may alter the outcome.

Hypothesis

At almost every step, performance of auditory prostheses has surpassed expectations. Early single-channel cochlear implants provided no frequency-selective, tonotopic stimulation, yet most people found the stimulation useful and even indispensable in everyday life, and a few patients even achieved some open-set speech recognition (Berliner et al., 1989). With multichannel CIs, patients received separate channels of tonotopic information, and most were able to achieve open-set speech recognition (Spahr & Dorman, 2004). We now think we understand how such performance is possible (Shannon et al., 1995, 2004). Pattern recognition by the brain is a powerful system, and speech patterns can be recognized using only the temporal envelope fluctuations in four or more tonotopic channels (Shannon et al., 1995). The brain's powerful neural net must develop the ability to extract meaning from rapidly changing patterns of neural activity across any listening situation and across different talkers, speaking styles, and accents. The ear and neural hearing system can process a wider range and complexity of information, like fine spectral resolution and fine temporal resolution, but not all of that complexity is necessary for speech. Musical melody recognition and localization of sounds in space require fine spectral and fine temporal resolution (Smith et al., 2002; Shannon et al., 2004; Oxenham, 2012), but apparently speech recognition does not. Some aspects of speech, like voice pitch, prosody, and talker identification, do require fine spectral and temporal resolution; while this information is not necessary to recognize the identity of the words, it is necessary to recognize the emotional content and emphasis of the spoken message (Jiam et al., 2017).
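
The experimental manipulation behind that claim, the noise vocoder, is simple enough to sketch. The Python code below follows the spirit of Shannon et al. (1995): split speech into a few analysis bands, extract each band's slow temporal envelope, and use the envelopes to modulate band-limited noise. The band edges, filter orders, and 50-Hz envelope cutoff are illustrative choices rather than the published parameters.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=6000.0):
    """Replace the fine structure of speech with noise, keeping only the
    slow temporal envelope in each of n_bands tonotopic channels."""
    speech = np.asarray(speech, dtype=float)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(speech.size)
    env_lp = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
    out = np.zeros_like(speech)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_bp = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfilt(band_bp, speech)))  # band envelope
        envelope = sosfilt(env_lp, envelope)         # keep slow fluctuations only
        out += envelope * sosfilt(band_bp, noise)    # modulate band-limited noise
    return out / (np.max(np.abs(out)) + 1e-12)       # normalize to avoid clipping
```

Played back, four such channels of envelope-modulated noise typically still support sentence recognition in quiet, which is the core result described above.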

Certainly, the brain’s pattern recognition is part of the answer, but not the complete picture. No matter how impressive it is, even the brain cannot recognize sensory patterns based on no information—the key elements of the pattern must be transmitted to the brain. But what are the “key” elements? Are all portions of the auditory nerve involved or only a specialized subset? An important clue comes from the ABI results.

Most ABI patients hear sound from activation of the ABI. They hear temporal patterns of sound and some pitch changes, so even when they are not able to understand speech, the ABI is activating auditory neurons and is connecting to the tonotopic dimension of the CN. But some ABI patients are able to understand speech well enough to converse on the telephone. This level of performance does not occur immediately but takes months to years of practice and adaptation. If the brain is slowly learning to use the new pattern of neural activity generated by an ABI, why don't all ABI patients learn to do it?

At that 2012 meeting in Munich it appeared that some aspect of the surgery was associated with the difference in outcomes across ABI patients. The ABI patients who did best were mostly those who had surgery in the neurosurgical semi-sitting position. The participants thought that the differences in outcomes were due to the minimal use of cautery in these surgeries compared to other surgical approaches or patient positions. What might cautery be damaging that could cause such a significant difference in ABI performance?

VSs originate on the vestibular portion of the VIIIn, near the glial–Schwann cell junction. As the tumor grows, it balloons out into the cerebellopontine angle and eventually fills the space between the medial opening of the internal auditory meatus and the surface of the brainstem. Large tumors contact the surface of the brainstem and draw a blood supply from it (angiogenesis). As the tumor is removed, its surface must be teased away from the surface of the brainstem. Since the tumor draws its blood supply from the brainstem surface, there is often considerable bleeding from the shared vascularization. In most surgical positions this bleeding is halted using bipolar electrocautery. However, such cautery may damage the surface of the brainstem through excitotoxicity or direct electrical tissue damage. In the region of the tumor, the CN lies on the surface of the brainstem beneath a membrane. Thus, the surface of the CN may be in harm's way from the damaging effects of cautery. In the semi-sitting position there is less venous pressure and thus less need for cautery, so this damage may be reduced.

Let’s review the anatomy of the human CN to see what portion might be affected by this potential cautery damage. Figure 1 shows a schematic representation of the human CN adapted from Moore and Osen (1979). The yellow shaded area, added by hand by Jean Moore, indicates the small cell cap (SCC). The SCC is a large structure in the human CN, significantly larger than in almost all other mammals except porpoises; the CN and SCC of the cat are shown for comparison. Little is known about the function of the cells in the SCC, but it is known that most of its innervation comes from the low-spontaneous rate (LSR) primary auditory afferents (Liberman, 1991). We know that the cells in the SCC are excellent at preserving modulation responses in firing rate (Ghoshal & Kim, 1996, 1997), probably because of the sloping saturation of the LSR fibers (see the toy model sketched after Figure 1). We also know that modulation sensitivity is one of the only psychophysical measures that correlates with speech perception (Fu, 2002; Colletti & Shannon, 2005). The LSR fibers arise from the non-pillar side of the inner hair cells (IHCs) and have a different synaptic connection to the IHC. Recent work by Kujawa and Liberman (2015) shows that the LSR fibers are more susceptible to loud sounds and show significant mortality, even at sound levels that do not damage the IHCs. Such damage to LSR fibers seems to cause a loss of speech understanding even when there is no loss of threshold sensitivity, and has been called “hidden hearing loss.” In CIs, and ABIs as well, the variability in performance is not accompanied by an elevation in threshold or a degradation in any other psychophysical ability except modulation detection. So if the LSR fibers play a special role in speech understanding and project primarily to the SCC, and if the SCC region is damaged by cautery during tumor removal, then I suggest that the loss of this function may be a major factor in the individual differences in ABI speech understanding. Other regions can still carry general auditory information, including tonotopic information, but the LSR/SCC system may be essential for speech pattern perception.


Figure 1. Comparative diagram of the cochlear nucleus of the cat (from Bahmer, 2007, based on Osen, 1969) and of the human (after Moore & Osen, 1979; copyright © 1979 Wiley‐Liss, Inc.).

The SCC is indicated in yellow as added by Jean Moore. Cap—dorsolateral cap area; AVCN—anteroventral cochlear nucleus; coch.n.—cochlear nerve; DCN—dorsal cochlear nucleus; PVCN—posteroventral cochlear nucleus; cent.—central region of ventral cochlear nucleus; oct.—octopus cell area; sph—spherical cell area; vest.n.—vestibular division of VIII nerve.
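
The role of “sloping saturation” in the argument above can be made concrete with a toy rate-level model. In the sketch below, a hypothetical high-spontaneous rate (HSR) fiber with a narrow dynamic range saturates below conversational speech levels, while a hypothetical LSR fiber with a wide, sloping-saturation dynamic range still encodes the envelope swings of running speech; every parameter value here is an assumption for illustration, not a physiological measurement.

```python
import numpy as np

def firing_rate(level_db, spont, threshold_db, dyn_range_db, max_rate=250.0):
    """Toy sigmoidal rate-level function: spontaneous rate below threshold,
    rising toward max_rate across the fiber's dynamic range."""
    x = (level_db - threshold_db) / dyn_range_db
    drive = 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5)))   # 0..1 activation
    return spont + (max_rate - spont) * drive

# Illustrative parameters: HSR = low threshold, narrow dynamic range;
# LSR = higher threshold, wide ("sloping saturation") dynamic range.
speech_level = 65.0                                  # dB SPL, conversational
for name, spont, thr, dr in [("HSR", 60.0, 0.0, 25.0), ("LSR", 0.5, 20.0, 60.0)]:
    dip = firing_rate(speech_level - 6.0, spont, thr, dr)    # envelope dip
    peak = firing_rate(speech_level + 6.0, spont, thr, dr)   # envelope peak
    print(f"{name}: {dip:5.1f} -> {peak:5.1f} spikes/s "
          f"(modulation depth {peak - dip:5.1f})")
```

With these numbers the HSR rate barely changes across a ±6 dB envelope swing because the fiber is saturated, while the LSR rate still modulates by tens of spikes per second, consistent with the idea that LSR input lets SCC cells preserve envelope modulations at conversational levels.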

This might seem like a dubious string of suppositions, but let us consider vision as an analogous system. The eye contains two receptor systems: rods and cones. Rods compose about 95% of the receptors and cones only 5%, similar to the relative proportions of HSR and LSR fibers in the ear. Rods and HSR fibers are highly sensitive to low energy levels but saturate at light and sound levels that occur in most daytime situations. The fovea, where most fine pattern recognition (like reading and face recognition) takes place, contains mostly cones, even though cones are only 5% of the total receptor population. I suggest that the LSR fibers and SCC function similarly to the fovea of the eye: a separate and distinct subsystem with an important function for pattern recognition. I suggest that both eye and ear have evolved separate subsystems for different sensory functions. HSR fibers in the ear and rods in the eye have extremely low thresholds, showing sensitivity that pushes the limits of physics. While such neurons/receptors have a clear evolutionary benefit, they have the disadvantage of saturating at low stimulus levels. The LSR neurons in the ear and cones in the eye have a larger dynamic range and may have preferential projections to higher neural regions that specialize in pattern recognition. In the eye there are distinct populations of primary receptors that feed the distinct subsystems; in the ear the segregation is accomplished by differences in the synaptic ribbons on the IHC membrane.

Summary

It is surprising that working with deaf people has helped us achieve new insights into auditory neuroscience. At first glance, it appears that the crude pattern of activation achieved by a CI or ABI won't be very helpful for hearing. As a field we were obsessed with the complexity and intricacies of the cochlea and auditory nerve firing patterns. Most people assumed that all of the detail was important; we were all acting like the proverbial shoemaker who sees the world through shoes. When phase locking was discovered in the auditory nerve, the zeitgeist turned to temporal coding, assuming that this temporal code was a critical part of pitch and vowel coding. Like the shoemaker, we placed too much importance on what we were studying, and we forgot that there is a brain attached to the ear. The brain's pattern recognition is a powerful system that, once trained by millions of repetitions in childhood, can recognize speech patterns in our native language even in the presence of massive distortion and degradation. Vision scientists learned this lesson long ago in the study of face recognition (Harmon & Julesz, 1973). They degraded a famous picture of Abraham Lincoln by pixelating it, and people could recognize the degraded picture from surprisingly few pixels. Interestingly, the surrealist artist Salvador Dali immediately recognized the perceptual significance of this finding and incorporated his own version of the pixelated Lincoln into many of his later paintings.

A second surprise from the work on auditory prostheses is that it provides us with a hypothesis about the biological substrate of such pattern recognition. At present there is no hard evidence that the LSR/SCC system acts as an acoustic fovea, but the idea is at least broadly consistent with the outcomes of CIs and ABIs, as well as with a broad range of other auditory observations. How many times does science teach us that new insights can come from unexpected directions? Auditory research and auditory prostheses are intimately interconnected: as we discover more basic information about the normal auditory system, we may be able to apply this new knowledge to the design of better prosthetic devices. But I suggest that the arrow points both ways; insights from auditory prosthesis research may inform us about aspects of the normal system we could not have realized if not for the study of the disordered system. It is important to study both the forest and the trees.

References

Bahmer, A. (2007). Computer simulation of chopper neurons: Intrinsic oscillations and temporal processing in the auditory system. PhD dissertation, Technical University Darmstadt.

Behr, R., Colletti, V., Matthies, C., Morita, A., Nakatomi, H., Dominique, L., . . . Skarzynski, H. (2014). New outcomes with auditory brainstem implants in NF2 patients. Otology & Neurotology 35(10):1844–1851.

Behr, R., Müller, J., Shehata-Dieler, W., Schlake, H. P., Helms, J., Roosen, K. K., et al. (2007). The high rate CIS auditory brainstem implant for restoration of hearing in NF-2 patients. Skull Base 17:91–107.

Berliner, K. I., Tonokawa, L. L., Dye, L. M., & House, W. F. (1989). Open-set speech recognition in children with a single-channel cochlear implant. Ear and Hearing 10(4):237–242.

Bilger, R. C., & Hopkinson, N. T. (1977). Hearing performance with the auditory prosthesis. Annals of Otology, Rhinology & Laryngology, Supplement 86(3 Pt 3 Suppl 38):76–91.

Brackmann, D. E., Hitselberger, W. E., Nelson, R. A., Moore, J., Waring, M. D., Portillo, F., … Telischi, F. F. (1993). Auditory brainstem implant: I. Issues in surgical implantation. Otolaryngology–Head and Neck Surgery 108(6):624–633.

Colletti, V., & Shannon, R. V. (2005). Open set speech perception with auditory brainstem implant? Laryngoscope 115:1974–1978.

Colletti, L., Shannon, R. V., & Colletti, V. (2014). The development of auditory perception in children after auditory brainstem implantation. Audiology & Neurotology 19(6):386–394.

Eddington, D. K. (1983). Speech recognition in deaf subjects with multichannel intracochlear electrodes. Annals of the New York Academy of Sciences 405:241–258.

Fu, Q.-J. (2002). Temporal processing and speech recognition in cochlear implant users. NeuroReport 13:1635–1639.

Ghoshal, S., & Kim, D. O. (1996). Marginal shell of the anteroventral cochlear nucleus: Intensity coding in single units of the unanesthetized, decerebrate cat. Neuroscience Letters 205:71–74.

Ghoshal, S., & Kim, D. O. (1997). Marginal shell of the anteroventral cochlear nucleus: Single-unit response properties in the unanesthetized decerebrate cat. Journal of Neurophysiology 77(4):2083–2097.

Harmon, L. D., & Julesz, B. (1973). Masking in visual recognition: Effects of two-dimensional filtered noise. Science 180(4091):1194–1197.

Hitselberger, W. E., House, W. F., Edgerton, B. J., & Whitaker, S. (1984). Cochlear nucleus implants. Otolaryngology–Head and Neck Surgery 92(1):52–54.

House, W. F., & Hitselberger, W. E. (2001). Twenty-year report of the first auditory brain stem nucleus implant. Annals of Otology, Rhinology & Laryngology 110(2):103–104.

Jiam, N. T., Caldwell, M., Deroche, M. L., Chatterjee, M., & Limb, C. J. (2017). Voice emotion perception and production in cochlear implant users. Hearing Research 352:30–39.

Kujawa, S. G., & Liberman, M. C. (2015). Synaptopathy in the noise-exposed and aging cochlea: Primary neural degeneration in acquired sensorineural hearing loss. Hearing Research 330(Pt B):191–199.

Liberman, M. C. (1991). Central projections of auditory nerve fibers of differing spontaneous rate: I. Anteroventral cochlear nucleus. Journal of Comparative Neurology 313:240–258.

Loizou, P. C. (1999). Introduction to cochlear implants. IEEE Engineering in Medicine and Biology Magazine 18(1):32–42.

Matthies, C., Brill, S., Kaga, K., Morita, A., Kumakawa, K., Skarzynski, H., … Behr, R. (2013). Auditory brainstem implantation improves speech recognition in neurofibromatosis type II patients. ORL Journal of Otorhinolaryngology and Its Related Specialties 75(5):282–295.

Matthies, C., Brill, S., Varallyay, C., Solymosi, L., Gelbrich, G., Roosen, K., … Müller, J. (2014). Auditory brainstem implants in neurofibromatosis Type 2: Is open speech perception feasible? Journal of Neurosurgery 120(2):546–558.

Moore, J. K., & Osen, K. K. (1979). The cochlear nuclei in man. American Journal of Anatomy 154(3):393–418.

Osen, K. K. (1969). Cytoarchitecture of the cochlear nuclei in the cat. Journal of Comparative Neurology 136(4):453–483.

Otto, S. R., Brackmann, D. E., Hitselberger, W. E., Shannon, R. V., & Kuchta, J. (2002). Multichannel auditory brainstem implant: Update on performance in 61 patients. Journal of Neurosurgery 96(6):1063–1071.

Otto, S. R., Shannon, R. V., Wilkinson, E. P., Hitselberger, W. E., McCreery, D. B., Moore, J. K., & Brackmann, D. E. (2008). Audiologic outcomes with the penetrating electrode auditory brainstem implant. Otology & Neurotology 8:1147–1154.

Oxenham, A. J. (2012). Pitch perception. Journal of Neuroscience 32(39):13335–13338.

Shannon, R. V., Fu, Q-J., & Galvin, J. (2004). The number of spectral channels required for speech recognition depends on the difficulty of the listening situation. Acta Oto-Laryngologica, Supplementum 552:50–54.

Shannon, R. V., Zeng, F-G., Wygonski, J., Kamath, V., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science 270:303–304.

Smith, Z. M., Delgutte, B., & Oxenham, A. J. (2002). Chimaeric sounds reveal dichotomies in auditory perception. Nature 416(6876):87–90.

Spahr, A. J., & Dorman, M. F. (2004). Performance of subjects fit with the Advanced Bionics CII and Nucleus 3G cochlear implant devices. Archives of Otolaryngology—Head and Neck Surgery 130(5):624–628.

Young, E. D. (2008). Neural representation of spectral and temporal information in speech. Philosophical Transactions of the Royal Society of London Series B, Biological Sciences 363(1493):923–945.

Young, E. D., & Sachs, M. B. (1979). Representation of steady-state vowels in the temporal aspects of the discharge patterns of populations of auditory-nerve fibers. Journal of the Acoustical Society of America 66(5):1381–1403.