The challenges of restoring binaural hearing: A study of binaural fusion, listening effort, and music perception in children using bilateral cochlear implants

Written by

Morrison Mansour Steel

A thesis submitted in conformity with the requirements for the degree of Master of Science

Institute of Medical Science

University of Toronto

© Copyright by Morrison M Steel, 2014

The challenges of restoring binaural hearing: A study of binaural fusion, listening effort, and music perception in children using bilateral cochlear implants

Morrison M Steel

Master of Science

Institute of Medical Science

The University of Toronto

2014

Benefits from bilateral implantation vary and are limited by mismatched devices, stimulation schemes, and abnormal development. We asked whether bilateral implant use promotes binaural fusion, reduces listening effort, or enhances music perception in children.

Binaural fusion (perception of 1 vs. 2 sounds) was assessed behaviourally, and reaction times and pupillary changes were recorded simultaneously to measure listening effort. The Child’s Montreal Battery of Evaluation of Amusia was modified and administered to evaluate music perception.

Bilaterally implanted children heard a fused percept less frequently than normal hearing peers, possibly due to impaired cortical integration and device limitations, but fusion improved in older children, in those with less asymmetry, and when a level difference was present. Poorer fusion was associated with increased listening effort, which may have resulted from diminished bottom-up processing. Children with bilateral implants perceived music as accurately as unilaterally implanted peers, and performance improved with greater bilateral implant use.


Acknowledgments

I would not have been able to persevere through this process without continual support and encouragement from many important people in my life. I am especially thankful for the time and dedication of the children who participated in this study and their families (Figure 4.1). These incredible individuals have been a constant source of inspiration and have motivated me to pursue a career as a clinician-scientist.

My Family – I am grateful to my parents, Sam and Shoshanah, for never doubting me and always giving me the courage and confidence to face any challenge. I would definitely not be where I am, or who I am, today if it were not for their immeasurable guidance, support, faith, and love. I am grateful to my sister, Dr. Mirisse Foroughe, for igniting my interest in psychology from a young age, never ceasing to stimulate my imagination, and for supporting me throughout this project.

Contributions

I am very grateful for the opportunity to be part of such a cohesive and talented team at the Cochlear Implant Laboratory at the Hospital for Sick Children.

My Research Supervisor – I am grateful to Dr. Karen Gordon for introducing me to the fascinating world of cochlear implant research, teaching me to think critically, and showing me that I can accomplish more than I ever thought was possible with the right amount of motivation and effort. I would like to thank Dr. Gordon for allowing me to be a member of her incredible team, always believing in me, and being such a great role model and mentor.

My Advisory Committee – I am grateful to Dr. Karen Gordon, Dr. Blake Papsin, and Dr. Sandra Trehub for providing me with invaluable guidance, thought-provoking questions, and expert advice throughout this project. I would like to thank Dr. Papsin for challenging me to think outside the box and fostering my confidence and creativity. I am grateful to Dr. Trehub for her attention to detail and for suggesting that I incorporate pupillometry and higher frequency stimuli into this project, as these modifications enhanced its quality; I appreciate that Michael Weiss offered his time to create the stimuli.

The Cochlear Implant Lab and Communications Department – I am grateful to Dr. Sharon Cushing for sharing her pupillometry equipment with me, despite having such a busy clinic schedule. I want to thank Dr. Nikolaus Wolter for being such a great mentor to me and taking the time to ensure that I am on the right track, every step of the way. I am grateful to Alexander Andrews for going out of his way to teach me programming and music, in addition to working so hard to create exceptional delivery programs and spending hours troubleshooting them with me. I would like to thank Stephanie Jewell for teaching me to be resourceful and always being so helpful in booking patients and coordinating appointments with other team members. I want to thank Michael Deighton for providing me with meaningful reassurance and camaraderie since day one, whether in the lab or on the squash courts. I am grateful to Melissa Polonenko for sharing the equipment and for spending a significant amount of time teaching me how to calibrate stimuli and perform statistical analyses. I want to thank Salima Jiwani for helping me considerably with my writing and analyses. I appreciate the time Vickram Rao and Rouya Botlani spent analyzing so many brainstem recordings. I am also grateful to Jerome Valero, Patrick Yoo, Daniel Wong, Desiree de Vreede, Caroline Dooley, and Sara Giannantonio for their friendship and enthusiasm. Finally, I would like to thank Patricia Fuller and Dianne Struk for the laughs and for helping me involve so many families in this study and Gina Goulding, Mary Lynn Fenness, Laurie MacDonald, Valerie Simard, and Pat Di Santo for their patience and for sharing their audiological expertise with me.

Presentations

Results from this study were presented at the following conferences:

1) 4th Annual Midwest Miniconference on Cochlear Implants, Madison, Wisconsin, October 2013 – Podium Presentation

2) Association for Research in Otolaryngology 37th Midwinter Meeting, San Diego, California, February 2014 – Poster Presentation – Graduate Student Travel Award and School of Graduate Studies Conference Grant

3) University of Toronto Collaborative Program in Neuroscience Research Day, Toronto, Ontario, April 2014 – Best Overall Poster Presentation Award

In addition, results from this study are being prepared for submission to PLOS ONE as two separate papers (Chapters 2 and 3). The unpublished data cited will be submitted for publication at the same time as the fusion data (Chapter 2).


Table of Contents

Chapter 1: Background and Research Aims______1

1.1 Normal Hearing______3

1.1.1 Physiology of the Ear______3

1.1.2 Binaural Hearing______8

1.2 Electrical Hearing______17

1.2.1 Sensorineural Hearing Loss______17

1.2.2 Unilateral Cochlear Implantation______18

1.2.3 Bilateral Cochlear Implantation______27

Chapter 2: Binaural Fusion and Listening Effort______31

2.1 Background______32

2.1.1 Binaural Integration with Normal Hearing______32

2.1.2 Binaural Fusion with Cochlear Implants______34

2.1.3 Mental Effort with Cochlear Implants______36

2.1.4 Pupillometry______38

2.1.5 Summary______40

2.2 Methods______41

2.2.1 Participants______41

2.2.2 Stimuli and Equipment______44

2.2.3 Binaural Fusion Task______45

2.2.4 Pupillometry______50

2.2.5 Electrically Evoked Auditory Brainstem Responses______51

2.2.6 Data Analysis______52


2.3 Results______53

2.3.1 Fusion with Level Differences______54

2.3.2 Fusion with Timing Differences______57

2.3.3 Fusion with Place of Stimulation Differences______60

2.3.4 Predicting Binaural Fusion______62

2.3.5 Reaction Time______64

2.3.6 Pupillary Responses______67

2.4 Discussion______70

2.4.1 Children with Bilateral CIs Perceive Distinct Auditory Images______71

2.4.2 The Auditory System Most Effectively Fuses Input with Level Differences______72

2.4.3 Multiple Psychophysical Ranges May Exist______74

2.4.4 Bilateral CIs Promote Integration in the Auditory Brainstem______75

2.4.5 Binaural Fusion May Depend on Higher-Level Processing______77

2.4.6 Binaural Fusion May Improve with Later Ages at Implantation______78

2.4.7 Poorer Binaural Fusion Translates into Greater Listening Effort______80

Chapter 3: Music Perception with Cochlear Implants______85

3.1 Background______85

3.1.1 Limitations of Music Perception with Cochlear Implants______86

3.1.2 Outcomes of Music Perception with Cochlear Implants______87

3.2 Methods______88

3.2.1 Participants______88

3.2.2 Modified Child’s Montreal Battery of Evaluation of Amusia______90

3.2.3 Data Analysis______91


3.3 Results______92

3.3.1 Performance on Subtests______92

3.3.2 Performance on High and Low Frequency Trials______93

3.3.3 Predicting Music Perception______94

3.4 Discussion______95

3.4.1 Music Perception is Similar in Children Using Bilateral and Unilateral CIs_____96

3.4.2 CI Users Perceive Musical Rhythm Better than Pitch______97

3.4.3 The Addition of Low Frequency Trials Diminishes Music Perception______98

3.4.4 Bilateral CI Experience May Improve Music Perception______99

Chapter 4: Conclusions and Future Directions______100

4.1 Conclusions______100

4.2 Future Directions______101

Chapter 5: References______103


List of Tables

Table 1. CI Participant Demographic Information for the Binaural Fusion Task. Data is provided for each CI user in the present study who participated in the binaural fusion task (n=25), including etiology, age at implantation, interimplant delay, age at test, and bilateral CI experience______42

Table 2. ILDs Presented to CI Users in CU and dB. For all CI children who participated in the fusion task, thresholds are shown in CU, as well as the ILDs presented in various conditions in terms of both CU and dB. {T+10,T}, for example, represents the condition in which levels were presented at 10 CU above CI-2 threshold and at CI-1 threshold______49

Table 3. CI Participant Demographic Information for the mcMBEA. Data is provided for each CI user in the present study who participated in the mcMBEA (n=14), including etiology, age at implantation, interimplant delay, age at test, and bilateral CI experience______89


List of Figures

Figure 1.1. Cross-section of the cochlea, showing outer and inner hair cells between the basilar and tectorial membranes. Figure courtesy of Moore (2003)______5

Figure 1.2. Depiction of the relative importance of fine structure and temporal envelope cues for localization (Figure 1.2a) and speech recognition (Figure 1.2b) from Smith and colleagues (2002). Information carried in the fine structure is more relevant for localization. Solid lines indicate dichotic chimaeras synthesized with the same sentence; dashed lines indicate those constructed using different sentences______9

Figure 1.3. Binaural cues used for sound localization. NH listeners can use phase differences (∆t) in the fine structure of low frequency sounds (< 2000 Hz) for localization in the horizontal plane, whereas ILD cues (∆I) play a larger role at higher frequencies. Figure adapted from Grothe and colleagues (2010)______11

Figure 1.4. Synaptic connections and auditory brainstem structures involved in ILD (left) and ITD (right) processing. Relatively simple subtraction of inputs to the lateral superior olive (LSO) underlies ILD coding, while ITD coding requires very precisely timed bilateral excitation and inhibition (Grothe et al., 2010)______13

Figure 1.5. A typical auditory brainstem response for a normal hearing adult. Peaks are marked by roman numerals and indicate the time at which neural conduction reaches specific generators in the ascending auditory pathway: distal and proximal auditory nerve (I and II), cochlear nucleus (III), and superior olivary complex/lateral lemniscus (IV and V). Differences in wave V latency were used as a measure of auditory brainstem asymmetry in the present study. Figure adapted from Salamy & McKean (1976)______15

Figure 1.6. Elimination of the β component of the binaural difference waveform at large ITDs. This reduction in amplitude correlates with behavioural fusion and lateralization and thus reflects binaural processing. Figure adapted from Furst and colleagues (1985)______16

Figure 1.7. Internal and external components of the CI device. The external piece captures acoustic information and attempts to recreate the most perceptually relevant features using electrical pulses, which are then delivered to the auditory nerve by the internal piece of the device. Figure adapted from Papsin and Gordon (2007)______20

Figure 1.8. Image from Loizou (1998) shows how CIs convert acoustic input into meaningful electrical signals. The process is complex and involves bandpass filtering with subsequent rectification and low-pass filtering (rect/LPF)______22

Figure 1.9. Average loudness judgments are shown as a function of stimulus intensity as a percentage of dynamic range. Unilateral CI use promotes loudness perception in adolescents that is not statistically different from NH peers. Loudness growth was assessed behaviourally on a continuous visual scale and pulses were delivered from an apical electrode to unilateral users. Figure courtesy of Steel and colleagues (in press)______26

Figure 1.10. X-ray image of a bilaterally implanted child using different CI device generations (straight array and contour array). All children in the CI group tested in the present study were implanted with contour arrays. The positions of the external and internal components of the CI device are shown relative to the child’s head (Gordon et al., 2012).

Figure 1.11. Mean responses from children to ITDs on a four-alternative forced-choice lateralization task (Salloum et al., 2010). Sequentially implanted children with a long delay between implantation dates were not able to detect changes in ITDs and perceived sound as coming from both devices more frequently than from the middle of their heads. NH children changed their responses from left to middle to right as ITDs changed from left leading to no ITD to right leading______31

Figure 2.1. Children who were implanted at later ages (n=15) had better residual hearing (i.e., lower aided thresholds) when assessed with standard audiometric testing prior to implantation (R = -0.88, p < 0.0001)______44

Figure 2.2. Schematic (Figure 2.2a) and labelled diagram (Figure 2.2b) of the experimental set-up for the binaural fusion task. The testing computer, 2 L34 speech processors, and 2 pods are highlighted in red. Children were asked to indicate whether they heard one or two sounds by clicking on the one or two blue circles shown on the computer monitor as quickly as possible______45

Figure 2.3. a) Illustration of the range over which levels (CU) were typically balanced. Schematic diagrams of all conditions presented: b) binaural input with ILDs, c) binaural input with ITDs, d) binaural input with IPDs. Levels presented to CI-1 or the right ear are shown in red, whereas levels presented to CI-2 or the left ear are indicated in blue. For ILD-containing conditions, higher levels are represented by larger circles______47

Figure 2.4. Complete experimental set-up for the binaural fusion task, including VNG pupillometry equipment. Video goggles were used to measure centisecond changes in pupil diameter while children indicated whether they heard one or two sounds______50

Figure 2.5. Mean overall proportions of 1 response are shown, as well as proportions for each subset of conditions tested in each group. Standard error was used for error bars. CI listeners (n=25) perceived 1 sound less frequently than NH peers (n=24; p < 0.0001)______53

Figure 2.6. Group performance for conditions containing ILDs (ITD = 0 ms). The NH group (n=24) is indicated in purple (Figure 2.6a), while the CI group (n=25) is highlighted in green. Biphasic pulses were delivered from electrode 20 in the CI group. Mean ILDs (CI-2 – CI-1) presented are shown in Figure 2.6b in dB along with corresponding schematic illustrations. Mean ILDs (dB) as level was increased on the right side or from CI-1 were: 13.1±1.7, 0.6±0.2, -0.2±1.2, and -1.1±1.2. As level presented to the left ear or from CI-2 was increased, ILDs (dB) were: -12.6±1.6, -1.0±1.2, -0.2±1.2, and 0.6±1.2. CI listeners consistently perceived one image when there were level differences, albeit less frequently than NH peers (p < 0.0001). Negative ILDs indicate levels leading to the right side or CI-1, while positive ILDs indicate levels leading to the left side or CI-2______55

Figure 2.7. Binaural fusion as a function of ILD for individual NH children. Negative values in Figure 2.7a denote ILDs that became increasingly weighted to the right ear (n=6), whereas positive ILDs (Figure 2.7b) signify increasing levels presented to the left ear (n=7). All individual regression functions are shown as broken grey lines, as none of the slopes were significant (p > 0.05). Mean responses are shown in thicker black lines for each side______56

Figure 2.8. Individual regression functions for CI users for ILD-varying conditions. Curves in the left panel (Figure 2.8a) were derived for conditions in which CI-1 level was increasing (n=22), whereas curves in the right panel (Figure 2.8b) are shown for increasing levels from CI-2 (n=21). Negative ILDs denote CI-1 leading stimuli. The majority of curves can be seen to be decreasing as a function of increasing ILD. Significant slopes (n=3) are represented by dark grey solid lines and non-significant slopes (p > 0.05) are shown as light grey broken lines______57

Figure 2.9. Mean responses in NH (n=24) and CI groups (n=25) for conditions with varying ITDs. Balanced stimuli presented for ITD-varying trials in the CI group contained a small mean ILD of -6.48 ± 12.86 CU = -0.34 ± 0.90 dB. NH children were more likely to hear two separate sounds as the ITD increased to ±2 ms (p < 0.0001), while CI users were not (p > 0.05). As shown in the figure, there was a larger difference between groups at 0 ITD than when level was varied between sides (Figure 2.6), because in the former case, level was balanced perceptually and thus CI users did not have access to the information which they rely on most heavily. In both groups, ITDs interfered more with fusion than ILDs (p < 0.0001). Negative values denote R/CI-1 leading ITDs______58

Figure 2.10. Individual regression functions are plotted for the NH group across ITDs ranging from -2 to 2 ms. 10/24 NH listeners were more likely to hear 2 sounds as the ITD increased (p < 0.05). Curves in the left portion of the graph correspond to conditions in which ITDs were negative, i.e., leading to the right (n=20). Positive ITDs were left leading (n=19)______59

Figure 2.11. Individual regression functions for the CI group for ITDs varying from -2 to 2 ms. ITDs did not affect fusion in 24/25 CI users (p > 0.05). Curves in the left portion of the graph correspond to conditions containing negative ITDs, i.e., leading to CI-1 (n=23). Positive ITDs were CI-2 leading (n=22)______60

Figure 2.12. Binaural fusion in the CI group (n=25) is displayed as a function of increasing difference in the place of stimulation between sides. Mean responses for conditions in which CI-1 place of stimulation was held constant at electrode 20 while CI-2 delivered pulses from more basal electrodes (electrode 16 for IPD = 4 and electrode 9 for IPD = 11) are indicated in blue. Mean performance is highlighted in red for conditions in which CI-2 place of stimulation was held constant at electrode 20 while CI-1 delivered pulses from increasingly basal electrodes (Figure 2.3d). Changes in IPD did not consistently affect binaural fusion (p > 0.05)______61

Figure 2.13. Logistic regression curves for bilateral CI users for IPD-varying conditions. IPDs did not affect fusion in 16/25 children with CIs (p > 0.05). Figure 2.13a shows responses from CI users when the CI-1 electrode was moved more basally (n=24). For conditions shown in Figure 2.13b, the CI-2 electrode was moved to more basal positions (n=23). Negative values denote place of stimulation differences leading to CI-1______62

Figure 2.14. Multiple regression analysis revealed that mean proportion of 1 response for individual CI users (n=24) when level and place differences were absent can be predicted (p < 0.05) by absolute wave eV latency difference (Figure 2.14a; β = -0.42) and the age at CI-1 (Figure 2.14b; β = 0.41)______63

Figure 2.15. Mean overall RTs and RTs for each subset of conditions are displayed for both groups. CI users (n=24) had longer RTs than their peers with NH (n=24; p < 0.0001)______65

Figure 2.16. Mean RTs are shown for individual participants (nNH = 24; nCI = 25) for conditions in which level differences were present (Figure 2.16a) and absent (Figure 2.16b). Poorer fusion predicts longer RTs (R = -0.52, p < 0.001; R = -0.32, p < 0.05). Within-group regressions were not computed due to limited spread of data points in either group alone_____66

Figure 2.17. In the NH group (n=24), children achieved faster RTs at older ages (Figure 2.17a; R = -0.77, p < 0.0001), while there was no relationship between RT and chronological age or duration of time-in-sound in the CI group (Figure 2.17b; n=24; p > 0.05)______66

Figure 2.18. Mean PCPD is shown for each group. Children with bilateral CIs (n=21) had greater changes in pupillary diameter than their NH peers (n=19; p < 0.0001)______67

Figure 2.19. Greater changes in pupil diameter were associated with poorer binaural fusion (nNH = 19; nCI = 21). The association is shown for both level (Figure 2.19a; R = -0.59, p < 0.0001) and timing differences (Figure 2.19b; R = -0.38, p < 0.05). Within-group regressions were not computed due to limited spread of data points in either group alone______68

Figure 2.20. Older NH children had smaller changes in pupil diameter, reflecting less listening effort (Figure 2.20a; R = -0.66, p < 0.005; n=19), while there was no relationship between pupil diameter and time-in-sound or chronological age in the CI group (Figure 2.20b; p > 0.05)____69

Figure 2.21. Mean overall pupillary responses were positively correlated with RTs (R = 0.69, p < 0.0001; n=40). There was also a strong relationship between RT and PCPD within each group (NH: R = 0.78, n=19; CI: R = 0.34, n=21)______69

Figure 2.22. The binaural difference response (at ~4 ms latency) was present in all bilaterally implanted children tested (12/12) with matched apical stimulation and was eliminated by large mismatches in the place of stimulation (Gordon et al., 2012). This constitutes evidence that children with bilateral CIs are able to integrate binaural input in the auditory brainstem. The binaural difference response is obtained by computing the difference between bilaterally evoked EABRs and the sum of unilaterally evoked potentials and is thought to reflect inhibitory processing in the SOC______76


Figure 2.23. Grand-average mismatch negativity waveforms were obtained from NH listeners by using electroencephalography to calculate the difference between event-related cortical responses evoked by deviant and standard stimuli. 300 Hz diotic stimuli were used for Frequency-Oddball conditions, while dichotic stimuli (300 Hz and 321/831 Hz) were used for Location-Oddball conditions. The non-significant mismatch negativity is highlighted in red and suggests that binaural input was integrated on a cortical level. Figure adapted from Fiedler and colleagues (2011)______77

Figure 2.24. Grand mean cortical waveforms of unilateral CI users shown for different durations of time-in-sound. Early implantation leads to cortical response peaks that develop in a normal-like fashion; however, the P2 peak remains abnormally large, relative to NH peers. Larger differences are highlighted in dark red (Jiwani et al., 2013)______85

Figure 3.1. Example of musical stimuli used. The original melody is shown in A, its scale- violated version in B, contour-violated version in C, interval alternate in D, and rhythm alternate in E. Asterisks denote the changed notes (Lebrun et al., 2012)______91

Figure 3.2. Mean performances across NH (n=23), bilateral CI (n=14), and unilateral CI (n=23) groups are indicated for each mcMBEA subtest. Overall scores are also shown. NH children achieved the highest scores (p < 0.0001) and there was no difference between CI groups (p = 1.0). Data from unilateral users was previously collected and published (Hopyan et al., 2012)_93

Figure 3.3. Performance for the NH (n=23) and CI (n=14) groups is shown for standard frequencies (i.e., excluding high/low frequency trials) and when either high or low frequency trials were added. Adding low frequency stimuli reduced accuracy in both groups (p < 0.005)______94

Figure 3.4. NH children (n=23) achieved greater accuracy on the mcMBEA when they were tested at older ages (Figure 3.4a; R = 0.58, p < 0.005). CI children (n=14) improved music perception with greater durations of bilateral CI use (Figure 3.4b; R = 0.44, p = 0.11). Outlier CI6 is indicated by the unfilled point______95

Figure 4.1. Some of the inspiring children with CIs standing with their families. These children participated in studies conducted by the Cochlear Implant Laboratory at the Hospital for Sick Children and without their help, none of the work reviewed in this thesis would have been possible. Thank you. Picture is courtesy of the Cochlear Implant Laboratory______102


List of Abbreviations

ABR – auditory brainstem response
ACE – advanced combination encoder
ANOVA – analysis of variance
BFT – Binaural Fusion Test
BPF – bandpass filter
CI – cochlear implant
CI-1 – first implanted cochlear implant
CI-2 – second implanted cochlear implant
CIS – continuous interleaved sampling
CU – clinical unit
EABR – electrically evoked auditory brainstem response
ILD – interaural/implant level difference
IPD – interaural/implant place difference
ITD – interaural/implant time difference
LPF – low-pass filtering
LSO – lateral superior olive
MBEA – Montreal Battery of Evaluation of Amusia
mcMBEA – Modified Child’s Montreal Battery of Evaluation of Amusia
MSO – medial superior olive
NH – normal hearing
PCPD – percent of change in pupil diameter
pps – pulses per second
rect – rectification
RT – reaction time
SOC – superior olivary complex
VNG – videonystagmography


Chapter 1

Background and Research Aims

The objective of the present study was to define the challenges associated with using bilateral cochlear implants (CIs) to restore binaural hearing in deaf children. We aimed to provide answers to the following research questions:

1) Can children with bilateral CIs achieve binaural fusion, i.e., perceive one fused auditory image when presented with bilateral stimulation?

2) Does poorer binaural fusion translate into increased listening effort?

3) Does bilateral implant use promote music perception?

Bilateral CIs have been provided to children to promote binaural hearing (Litovsky et al., 2004, 2006, 2012; Van Deun et al., 2009a, 2010a, 2010b; Grieco-Calub & Litovsky, 2010; Salloum et al., 2010; Chadha et al., 2011) and to ease the increased listening effort demonstrated by unilaterally implanted children (Burkholder & Pisoni, 2003; Hughes & Galvin, 2013). Unilateral implant use causes abnormal reorganization of the auditory pathway at the level of the brainstem (Gordon et al., 2007b, 2008, 2011a, 2012) and the cortex (Gordon et al., 2010b, 2013b; Jiwani et al., 2013). While unilateral CI use promotes near-normal rhythm perception, it remains unclear whether bilateral implantation benefits music perception (Gfeller & Lansing, 1991; McDermott, 2004; Hopyan et al., 2012; Veekmans et al., 2009). Unfortunately, many differences between children using bilateral CIs and their normal hearing (NH) peers remain, such as greater variability in behavioural responses and increased reliance on CI-1 (the first implanted CI) and interaural level cues for speech perception and localization.


Asymmetric development, poor neural survival, electrical stimulation, and mismatched places of stimulation across the cochleae could affect the ability of children who are deaf to perceptually integrate, or fuse, input delivered by bilateral implants and thereby impair binaural hearing (Gordon et al., 2008, 2010a, 2013b; Nadol, 1997; D’Elia et al., 2012; Rubinstein & Hong, 2003; Rubinstein, 2004; Poon et al., 2009; Kan et al., 2013). Delays in time between receiving the first and second implants, and the resulting abnormal cortical lateralization, are associated with poorer speech perception (Gordon et al., 2013b; Gordon et al., 2009; Chadha et al., 2011), whereas interaural mismatches degrade binaural sensitivity (Goupell et al., 2013; Kan et al., 2013). Children with bilateral CIs may thus have limited success with their devices if binaural fusion does not occur. The recent finding that children using bilateral CIs reported hearing from both devices simultaneously rather than one fused image (Salloum et al., 2010) most clearly suggests that binaural fusion is absent. Therefore, it remains unclear whether bilaterally implanted children have access to accurate binaural cues or integrate these cues similarly to children with intact NH.


1.1. Normal Hearing

1.1.1. Physiology of the Ear

Sound conduction

Acoustic waveforms are the stimuli transduced by the normal human auditory system. By 22 weeks gestational age, auditory sensory cells, or (inner) hair cells, are capable of transducing vibrations as small as the diameter of an atom (Lavigne-Rebillard & Pujol, 1988; Purves et al., 2008, Chapter 13). Auditory mechanoreceptors can respond several orders of magnitude faster than visual photoreceptors. In listeners with NH, acoustic input first reaches the outer ear. The pinna acts as a resonator to amplify sound level by up to approximately 20 dB, in a frequency-dependent manner (Shaw, 1974). Vertical sound localization is facilitated, in part, by the acoustic properties of the concha. Regarding perception in the horizontal plane, which was the focus of the present study, the travel of sound between the ears produces interaural/implant time differences (ITDs) within a physiological range of approximately ±0.7 ms for anechoic free sound field localization and ±1.0 ms for earphone-mediated click lateralization (Furst et al., 1985).

From this point forward, positive numerical values will be used to denote binaural input leading in the left ear for NH listeners (or CI-2, the second implanted CI, for implant users) and negative values will denote binaural input leading in the right ear or CI-1 for CI users.

While ITDs arise from differences in the fine structure phases of low frequency acoustic waveforms (with long wavelengths), sounds above roughly 1500 Hz have wavelengths too short to diffract around the head; the resulting head shadow attenuates envelope intensities at the far ear, producing interaural/implant level differences (ILDs) that can be perceived when as small as 1 dB (Colburn et al., 2006). The ossicular chain of the middle ear (malleus, incus, and stapes) connects the tympanic membrane to the oval window and acts as an impedance transformer to further increase the gain.
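The geometric origin of ITDs described above can be sketched with the classic Woodworth spherical-head approximation. This is an illustrative back-of-envelope model, not part of this study's methods; the head radius and speed of sound below are nominal adult values.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD (in seconds) for a rigid spherical head
    (Woodworth model). azimuth_deg is the source angle from the
    median plane: 0 = straight ahead, 90 = directly opposite one ear.
    head_radius_m (~8.75 cm) and c (speed of sound, m/s) are nominal."""
    theta = math.radians(azimuth_deg)
    # Path-length difference around the sphere: r*(theta + sin(theta))
    return head_radius_m * (theta + math.sin(theta)) / c

# ITD grows from zero at the midline toward its maximum at 90 degrees,
# landing near the ~0.7 ms physiological range quoted in the text.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e3:.2f} ms")
```

At 90° azimuth the model yields roughly 0.66 ms, consistent with the ±0.7 ms free-field range cited from Furst and colleagues (1985).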


Conductive hearing losses result from damage to the external or middle ear and may compromise binaural integration (Polley et al., 2013). These hearing losses are not treatable with cochlear implantation and thus will not be discussed any further. Surgical intervention to treat pre/peri-lingual severe-to-profound sensorineural hearing loss in children is the focus of the present study.

Cochlear anatomy

The cochlea, situated within the temporal bone (Figure 1.1), compresses and further amplifies sound, decomposes complex waveforms, and transduces mechanical energy into neural impulses (Hudspeth, 1989). This organ is the target of cochlear implantation. In humans, the cochlea is ~35 mm in length and has little more than 2.5 turns, coiling around a central axis, or modiolus. Three fluid-filled compartments are separated by two membranes: Reissner’s membrane separates scala vestibuli from scala media, while the basilar membrane separates scala media from scala tympani. A third membrane, the tectorial membrane, lies above the basilar membrane.

The endolymph of the scala media has a similar composition to intracellular fluid, with a high concentration of potassium ions and a low sodium concentration, while the perilymphatic fluid within the two bordering scalae resembles extracellular fluid.

The organ of Corti, which is comprised of ~12,000 outer hair cells and ~3,500 inner hair cells, is located on the basilar membrane, within the scala media (Moore, 2008, Chapter 1).

Stereocilia, or hair-like bundles, lie on top of each hair cell. Of the ~30,000 afferent spiral ganglion fibers, 95% are type I, ~20 of which innervate each inner hair cell, whereas each type II fiber terminates on many outer hair cells. There is a gradient of decreasing stiffness and increasing width of the basilar membrane from the base of the cochlea at the oval window to the apex. As discussed in the following section, basilar membrane mechanics underlie, in part, the frequency tuning of auditory nerve fibers, which receive direct electrical stimulation from the CI array in deaf listeners.

Figure 1.1. Cross-section of the cochlea, showing outer and inner hair cells between the basilar and tectorial membranes. Figure courtesy of Moore (2003).

Sound transduction

Vibrations of the oval window are transmitted through scala vestibuli to scala media and across the basilar membrane to scala tympani. Travelling waves along the basilar membrane cause deflections in hair cell stereocilia that, in turn, open mechanically-gated K+ channels at or near the tip of each stereocilium, causing rapid membrane depolarization (Hudspeth, 1989). This change in membrane potential results in the opening of voltage-gated Ca2+ channels and subsequent neurotransmitter release onto afferent auditory nerve endings, thereby initiating action potentials. This mechanotransduction can occur in as little as 10 µs (Purves et al., 2008, Chapter 13), a temporal precision which is necessary for accurate sound localization in normal listeners (Klumpp & Eady, 1956; Akeroyd, 2006). However, absolute and relative refractory periods lasting 0.5-0.75 ms and 2-3 ms, respectively, limit the rate of spike potential generation (Moore, 2008, Chapter 1).

Coding of level

Before describing how the central auditory system integrates binaural information, it is helpful to first review the intricate peripheral mechanisms underlying the perception of changes in monaural level and frequency.

Increases in sound intensity, and perceived loudness, are coded in terms of neural firing rate, spread of excitation along the basilar membrane, and phase locking pattern. According to Stevens’s Power Law (1957), perceived loudness doubles for each 10-fold increase in sound intensity (i.e., 10 dB increase in sound level). When the sound level is increased from threshold to the upper limit of an auditory neuron’s dynamic range, the neuron’s firing rate increases rapidly, but the rate of increase in firing subsequently decreases until saturation (Sachs & Abbas, 1974). This dynamic range is determined by outer hair cell function and Poisson-like spontaneous neural activity in the absence of sound. More sensitive neurons have higher spontaneous rates (18-250 spikes/second; Liberman, 1978), lower thresholds (~0 dB SPL), and narrower dynamic ranges (15-30 dB). Higher rates of spontaneous firing are associated with larger inner hair cell synapses and nerve fiber diameters (Liberman, 1982).
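Stevens’s relation makes the doubling explicit. Assuming the standard loudness exponent of approximately 0.3 for a 1 kHz tone:

```latex
L = k\,I^{0.3}
\qquad\Rightarrow\qquad
\frac{L(10I)}{L(I)} = \frac{k\,(10I)^{0.3}}{k\,I^{0.3}} = 10^{0.3} \approx 2
```

so a 10-fold increase in intensity I (a 10 dB increase in level) roughly doubles perceived loudness L; the constant k depends on the units and the listener.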


Although individual nerve fibers have dynamic ranges of only 20-40 dB, normal listeners can distinguish intensity differences over ranges as large as 120 dB, because the total number of active fibers codes changes in stimulus intensity through spread of excitation (Delgutte, 1996).

Phase locking plays a more crucial role when the auditory system is tasked with discerning changes in relative levels of complex waveforms, such as musical melodies (Moore, 2003). In these cases, response patterns may be detected across different frequency channels by coincidence detectors.

Coding of frequency

The cochlea exhibits tonotopic, or place, coding: high frequency sounds (> 15,000 Hz) displace the basilar membrane maximally near the oval window, while low frequency sounds (< 100 Hz) produce maxima at the cochlear apex (Moore, 2008, Chapter 1). Each point on the basilar membrane acts as a bandpass filter (BPF) with a center frequency, or characteristic frequency, which evokes a maximum displacement at that point. Characteristic frequencies decrease from base to apex due to basilar membrane mechanics and outer hair cell motor function (Moore, 2003). As a result of overlapping bandwidths, there are effectively ~40 independent frequency channels, about 75% of which fall in the range of frequencies relevant for speech perception (~1000-4000 Hz). Each frequency band is relatively narrow: one sixth to one third of an octave (Colburn et al., 2006). This tonotopic organization is maintained throughout the ascending auditory pathway.

While place coding may occur in response to high stimulus frequencies, phase locking only occurs for frequencies up to 4000-5000 Hz and is limited by the precision with which neurons can fire in synchrony with a particular phase of the stimulus (Moore, 2008, Chapter 1).

Higher perceived pitches are typically associated with higher pure tone frequencies. In the case of complex musical tones, the perceived pitch tends to resemble that of the fundamental frequency. Lower multiples/harmonics of the fundamental frequency are the most important for pitch perception, because they are more effectively resolved on the basilar membrane than higher harmonics. Place and temporal mechanisms allow the human auditory system to discern frequency differences between pure tones as small as 0.3% (Moller, 2006, Chapter 6). Across-channel coincidence detection may also be involved in coding changes in frequency (Loeb et al., 1983). Frequency coding is relevant for binaural processing, as interaural frequency mismatches degrade binaural fusion and sensitivity (Zhou & Durrant, 2003; Goupell et al., 2013).

1.1.2. Binaural Hearing

Binaural hearing allows human beings to determine the direction of sound sources and perceive speech in noisy environments, through spatial unmasking: head shadow and squelch effects as large as 11 dB and 5 dB in adults, respectively (Bronkhorst & Plomp, 1988). Hearing with a second ear also enhances the perception of loudness by 2-3 dB, through binaural summation (Blegvad, 1975; Bess & Tharpe, 1986). The binaural ability to localize sound sources in the environment is more relevant for the present study and has been essential to the survival of many mammalian species over the past 200 million years (Grothe et al., 2010). Tympanic ears sensitive enough for binaural processing have evolved independently in the ancestors of modern amphibians, reptiles, birds, and mammals (Schnupp & Carr, 2009). In both birds and mammals, complex synaptic pathways function to preserve the fine structure information central to binaural processing. ILD localization ability develops before ITD localization (Kaga, 1992) and by the age of 5-6, humans achieve adult-like binaural hearing (Van Deun et al., 2009b), as evidenced by the ability to distinguish differences in the angular location of sound as small as 1-2° (Litovsky, 1997). The main goals of bilateral cochlear implantation are to restore binaural hearing and promote symmetric auditory development.

Binaural psychoacoustics

Acoustic information concerning the location and pitch of sound is carried in the temporal fine structure, while the sound envelope (amplitude peaks) is more relevant for speech and interaural level perception (Smith et al., 2002; Hartmann & Constan, 2002). In their landmark study, Smith and colleagues (2002) synthesized auditory chimaeras using a Hilbert transform with speech information in either the envelope or fine structure and ITDs leading to the left or right. As shown in Figure 1.2, lateralization was determined by the direction to which the ITD was leading in the fine structure, while speech recognition improved as the number of frequency bands increased, when information was represented by the envelope.

Figure 1.2. Depiction of the relative importance of fine structure and temporal envelope cues for sound localization (Figure 1.2a) and speech recognition (Figure 1.2b) from Smith and colleagues (2002). Information carried in the fine structure is more relevant for sound localization. Solid lines indicate dichotic chimaeras synthesized with the same sentence; dashed lines indicate those constructed using different sentences.


As mentioned previously, we use differences in sound intensity and arrival time between the ears to localize sound sources in the horizontal plane (Figure 1.3). When no ILD or ITD is present, binaural sounds are perceived as coming from the center of the head (midline) in listeners with intact NH. Pure tones with frequencies greater than about 1500 Hz cast an acoustic shadow resulting in ILDs (Akeroyd, 2006). Higher frequencies typically result in larger ILDs, which are associated with more pronounced lateralization up to approximately 10 dB. When the ILD exceeds 15-20 dB, binaural input is perceived as a single fused monaural image on an extreme side of the head. Large ILDs do not interfere with fusion, or result in the perception of two distinct auditory images, even as dichotic and monotic stimuli become indistinguishable (Furst et al., 1985).

ITDs are processed with greater precision than any other temporal process within the mammalian brain (Grothe et al., 2010) and are generally the dominant cue for localization in normal listeners (Wightman & Kistler, 1992; Macpherson & Middlebrooks, 2002; Seeber & Fastl, 2008; Van Deun et al., 2009b). Larger fine structure ITDs typically increase the likelihood of lateralization for frequencies lower than 1500 Hz, because wavelengths must be longer than the size of the head in order for diffraction to occur and phase locking deteriorates at higher frequencies. However, we can also detect ITDs in the envelope of amplitude modulated high frequency sounds (Nuetzel & Hafter, 1976) and at similar sensitivities, when the amplitude modulation is carefully constructed (Bernstein et al., 2001). Normal sensitivity to ITDs is so remarkable that differences as small as 10 µs in binaural clicks are detectable (Klumpp & Eady, 1956). As click ITDs increase beyond the physiological range of the binaural system, the position of the auditory image remains on an extreme side of the head, but two distinct sounds can be heard. This phenomenon has been referred to as the end point of lateralization (Babkoff & Sutton, 1966).
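The physiological range of ITDs follows from head geometry. As a minimal sketch, the Woodworth spherical-head approximation relates azimuth to ITD; the head radius and speed of sound below are illustrative assumptions, not values from the studies cited:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate ITD (seconds) for a source at a given azimuth, using the
    Woodworth spherical-head model: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# Maximum ITD for a source at 90 degrees (directly to one side of the head):
max_itd_us = woodworth_itd(90) * 1e6
print(round(max_itd_us))  # roughly 650-700 microseconds for an adult head
```

Detectable ITDs of ~10 µs are thus two orders of magnitude smaller than the largest ITD the head can physically produce.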

Figure 1.3. Binaural cues used for sound localization. NH listeners can use phase differences (∆t) in the fine structure of low frequency sounds (< 2000 Hz) for localization in the horizontal plane, whereas ILD cues (∆I) play a larger role at higher frequencies. Figure adapted from Grothe and colleagues (2010).

An analog to the end point of lateralization exists in the visual system. Perception of a single fused visual image begins to deteriorate when two images are presented beyond Panum’s Fusional Area: 6 minutes of arc (6/60°, or one tenth of a degree) in the center to 20 minutes of arc at a peripheral angle of 6° (Fender & Julesz, 1967). Stereopsis is lost at disparities that are approximately four times larger than this physiological range.

Coding of binaural cues in the brainstem

The lateral and medial nuclei of the mammalian superior olivary complex (LSO and MSO) are specialized for ILD and ITD processing, respectively (Figure 1.4). A larger proportion of neurons in the LSO respond to high frequencies, whereas there is a relative over-representation of neurons with lower characteristic frequencies situated in the MSO (McAlpine, 2005). It is worthwhile mentioning that some LSO neurons also display sensitivity to envelope and high frequency ITDs (Joris, 1996; Tsuchitani, 1988). Neural impulses from auditory nerve fibers are first transmitted to the tonotopically arranged bushy cells of the cochlear nucleus in the rostral auditory brainstem. Synaptic terminals, such as the endbulbs of Held, mediate this transmission with high fidelity. Bushy cells in the cochlear nucleus can phase lock to the fine structure of low frequency sounds (< 5000 Hz) and the envelope of high frequency sounds with even greater precision than auditory neurons (Joris et al., 1994).

LSO principal neurons receive direct excitatory glutamatergic inputs from spherical bushy cells of the ipsilateral anteroventral cochlear nucleus and indirect inhibitory glycinergic inputs from globular bushy cells of the contralateral anteroventral cochlear nucleus, via the medial nucleus of the trapezoid body (Moore & Caspary, 1983; Grothe et al., 2010). By contrast, while MSO principal cells receive direct excitatory inputs from the ipsilateral anteroventral cochlear nucleus on the lateral dendrites, they also receive excitatory inputs from the contralateral anteroventral cochlear nucleus on the medial dendrites and contralateral and ipsilateral inhibitory inputs on the somata via the medial and lateral nuclei of the trapezoid body, respectively. As the GABAergic and glycinergic synapses in the medial nucleus of the trapezoid body and LSO develop post-natally, they switch from depolarizing to hyperpolarizing (Kandler & Gillespie, 2005). Glycinergic inhibition plays a key role in determining a neuron’s specific ITD tuning (Brand et al., 2002).


Figure 1.4. Synaptic connections and auditory brainstem structures involved in ILD (left) and ITD (right) processing. Relatively simple subtraction of inputs to the superior olivary complex underlies ILD coding, while ITD coding requires very precisely timed bilateral excitation and inhibition (Grothe et al., 2010).

The Jeffress Model

The mechanism of ITD coding in the barn owl nucleus laminaris (avian homolog of the MSO; Carr & Konishi, 1990) is largely consistent with the Jeffress labelled-line model (1948). Jeffress proposed that coincidence detector neurons first receive input from the side to which sound is lateralized, while longer axonal distances and/or neural inhibition proportionate to the size of the ITD delay the signal received from the contralateral side (Colburn et al., 2006). There should therefore be an array of coincidence detectors in the auditory brainstem, each tuned to a physiologically relevant ITD. Evidence in support of the Jeffress model suggests that the bipolar dendrites of MSO neurons facilitate coincidence detection in mammals (Agmon-Snir et al., 1998; Beckius et al., 1999). However, more recent evidence indicates that ITD coding in mammals is more likely mediated by broadly tuned hemispheric spatial channels (Breebaart et al., 2001; McAlpine, 2005). McAlpine and colleagues (2001) argue that the relative activity of these channels can code changes in the azimuthal position of sounds much like the visual system codes changes in colour. The brain may infer changes in sound source location from differences in activity between the hemispheric channels. Differences in head size and the frequencies over which ITDs are perceived determine the neurophysiological code utilized by a particular species.
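The Jeffress labelled-line idea can be sketched computationally as a bank of cross-correlators, each acting as a coincidence detector tuned to one candidate internal delay; the sampling rate, tone frequency, and imposed delay below are illustrative assumptions, not parameters from the studies cited:

```python
import numpy as np

def jeffress_best_itd(left, right, fs, max_itd_s=0.0007):
    """Estimate ITD by finding the internal delay (in samples) that maximizes
    coincidence (cross-correlation) between the two ear signals."""
    max_lag = int(max_itd_s * fs)
    lags = list(range(-max_lag, max_lag + 1))
    # Each "coincidence detector" sums the product of the signals at one delay.
    scores = [np.sum(left * np.roll(right, lag)) for lag in lags]
    return lags[int(np.argmax(scores))] / fs

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)       # 500 Hz tone (25 full periods)
delay = int(0.0005 * fs)                 # right ear lags by ~500 microseconds
left, right = tone, np.roll(tone, delay)
estimated = abs(jeffress_best_itd(left, right, fs))
print(estimated)  # recovers an ITD close to 0.0005 s
```

The hemispheric-channel alternative would instead compare the pooled output of two broadly tuned detectors rather than reading out a labelled array of delays.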

Coding of binaural cues beyond the brainstem

Further ITD processing occurs in the lateral lemniscus and inferior colliculus (Fitzpatrick et al., 2002; Aharonson & Furst, 2001). Virtually all ascending auditory pathways, especially from the contralateral cochlear nucleus, as well as pathways from non-auditory areas and the cortex, converge on the inferior colliculus. The lateral lemniscus receives projections from both the cochlear nucleus and superior olivary complex (SOC) and sends inhibitory signals to the inferior colliculus. The inferior colliculus integrates information about the vertical location of sound from the dorsal cochlear nucleus with information provided by the binaural circuits of the SOC to generate a three-dimensional map of auditory space. Central nucleus neurons of the cat inferior colliculus demonstrate ITD tuning comparable to the MSO (Rose et al., 1966). Human imaging data have more recently shown that peak inferior colliculus activity changes with increases in ITD (Thompson et al., 2006). Connections between the auditory system and the retinotopically organized superior colliculus of the visual system facilitate eye movements which are crucial for sound localization.

The medial geniculate body relays input, mainly from the inferior colliculus, to the primary auditory cortex, located in Heschl’s gyrus in the Sylvian fissure in humans. Human studies suggest that the right auditory cortex is more critical for spatial localization (Zatorre & Penhune, 2001). Specifically, ILDs and ITDs appear to be processed in distinct channels, as ITDs have a greater capacity for engaging the left auditory cortex (Johnson & Hautus, 2010); however, both cues evoke larger amplitude responses in the right hemisphere.

Auditory brainstem development

Activity can be measured in humans from the aforementioned auditory brainstem structures using electroencephalography to record the voltage (generated by synchronous neural activity) at the vertex of the head, relative to the earlobe. The presence of a magnet in the receiver-stimulator component of the CI interferes with alternative neuromagnetic imaging techniques. As illustrated in Figure 1.5, about 5 small (< 1 µV) positive waves can be observed within 10 ms after stimulus onset (Jewett et al., 1970). It is believed that these waves reflect activity in specific brainstem nuclei, such as the lateral lemniscus and inferior colliculus (wave V). The appearance of myelin at 26-28 gestational weeks coincides with the onset of the human ABR (auditory brainstem response; Moore et al., 1995). Due to increases in myelination and synaptic efficiency, ABR latencies decrease exponentially, reaching adult values at approximately 1-3 years of age (Salamy & McKean, 1976; Eggermont & Salamy, 1988).

Figure 1.5. A typical auditory brainstem response for a normal hearing adult. Peaks are marked by roman numerals and indicate the time at which neural conduction reaches specific generators in the ascending auditory pathway: proximal and distal cochlear nerve (I and II), cochlear nucleus (III), and lateral lemniscus/inferior colliculus (IV and V). Differences in wave V latency were used as a measure of auditory brainstem asymmetry in the present study. Figure adapted from Salamy & McKean (1976).


The binaural difference response

The binaural difference response is an electrophysiological measure that has been recognized in animals (Smith & Delgutte, 2007b), normal listeners (McPherson & Starr, 1995; Riedel & Kollmeier, 2006), and CI users (He et al., 2012; Gordon et al., 2012). This difference measure is calculated by subtracting the amplitude of binaurally evoked potentials from the sum of those evoked monaurally (Dobie & Norton, 1980). The binaural difference waveform is thought to reflect inhibition in the SOC, because the amplitude of the binaural response is less than the sum of the monaural responses (Wada & Starr, 1989; Levine & Davis, 1991; Krumbholz et al., 2005). This waveform has been used as an objective measure of integration of binaural input, because the majority of inhibition in the brainstem is derived from contralateral sources. The β component (baseline-to-peak amplitude) of the difference waveform is undetectable for ITDs larger than ~1 ms (Figure 1.6; Furst et al., 1985), which is consistent with the Jeffress model and data from behavioural tasks assessing binaural fusion.
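The subtraction itself is straightforward; a minimal sketch on synthetic waveforms (the arrays here are illustrative, not recorded responses):

```python
import numpy as np

def binaural_difference(left_evoked, right_evoked, binaural_evoked):
    """Binaural difference waveform: the sum of the monaural responses minus
    the binaurally evoked response (Dobie & Norton, 1980 convention)."""
    return (np.asarray(left_evoked) + np.asarray(right_evoked)
            - np.asarray(binaural_evoked))

# Synthetic example: the binaural response is smaller than the monaural sum,
# so the difference waveform is non-zero (interpreted as brainstem inhibition).
left = np.array([0.0, 0.4, 0.9, 0.4, 0.0])
right = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
binaural = np.array([0.0, 0.7, 1.4, 0.7, 0.0])   # less than left + right
bd = binaural_difference(left, right, binaural)
print(bd)  # peak of this waveform corresponds to the beta-like component
```

If binaural and monaural responses summed linearly, the difference waveform would be flat; its non-zero amplitude is what indexes binaural interaction.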

Figure 1.6. Elimination of the β component of the binaural difference waveform at large ITDs. This reduction in amplitude correlates with behavioural fusion and lateralization and thus reflects binaural processing. Figure adapted from Furst and colleagues (1985).


1.2. Electrical Hearing

1.2.1. Sensorineural Hearing Loss

The CI users who participated in this study were children with bilateral severe-to-profound (≥ 85 dB HL) sensorineural hearing impairment that occurred in childhood prior to (pre-lingual) or around the time of (peri-lingual) speech and language development. Pre-lingual severe-to-profound sensorineural hearing loss occurs in 0.5-3/1000 live births in developed countries (Niparko & Agrawal, 2009).

Early auditory experience is critical for normal language acquisition, speech perception, sound localization, and central auditory development (Rauschecker, 1999), which can be compromised by severe-to-profound deafness in childhood (Eisenberg, 2007; Gordon et al., 2006). Unfortunately, the exact onset of deafness may not always be evident, because deafness does not overtly disrupt infant motor and social development (Papsin & Gordon, 2007). Longer periods of deafness leave auditory association cortices vulnerable to becoming decoupled from primary auditory cortex (Kral, 2007) and taken over by visual (Lomber et al., 2010; Lee et al., 2001) and somatosensory areas (Meredith & Lomber, 2011), which may limit the potential for later reactivation with CI stimulation. Consequently, visual and somatosensory functions improve in deaf individuals (Lomber et al., 2010; Meredith & Lomber, 2011).

Anatomical abnormalities of the deaf auditory system

Sensorineural hearing loss induces pathological changes at various levels throughout the ascending auditory pathway (Shepherd & Hardie, 2001) that underlie observed functional deficits and limit the potential benefit provided by a CI. Longer durations of profound deafness are associated with smaller cell size in the cochlear nucleus and fewer surviving auditory neurons in human listeners (Moore et al., 1994). However, while the vast majority of hair cells may be absent, more than one third (> 10,000) of the normal complement of auditory neurons typically remains (Hinojosa & Marion, 1983), and CIs may require less than 10% of these neurons to facilitate speech recognition (Fayad & Linthicum, 1991). Spiral ganglion degeneration tends to be more severe in the basal portion of the cochlea, where the majority of CI electrodes are placed (Nadol, 1997).

Several animal models have been used to document abnormal changes in the auditory structures responsible for transmitting and coding information relevant for binaural processing.

Spiral ganglion neurons normally transmit auditory information to the auditory brainstem with high fidelity, but gradually become demyelinated as a consequence of inactivity (Terayama et al., 1977). Early abnormalities resulting from congenital deafness are most prominent in the endbulbs of Held and manifest as fewer vesicles and larger post-synaptic densities (Ryugo et al., 1997; Lee et al., 2003). Further studies on congenitally deaf white cats revealed decreases in cell size and the number of synaptic terminals in the MSO (Schwartz & Higa, 1982; Tirko et al., 2012) and decreases in ITD tuning and the number of ITD-sensitive neurons in the inferior colliculus and auditory cortex (Hancock et al., 2010; Tillein et al., 2010). Cochlear implantation is currently the most effective surgical intervention available to treat severe-to-profound sensorineural deafness.

1.2.2. Unilateral Cochlear Implantation

Despite Italian physicist Alessandro Volta’s remarkable discovery in the 1800s that electrical stimulation could be used to evoke auditory percepts, it was not until February 25, 1957 that French otolaryngologist Charles Eyries performed the first cochlear implant surgery (Djourno et al., 1957). His patient achieved frequency discrimination, but could not comprehend speech. Clinical testing in the ensuing decades provided the Food and Drug Administration with enough evidence to approve CIs for implantation in adults in 1984 and in children in 1990 (Papsin & Gordon, 2007). Since then, CIs have become the most successful neural prosthesis, restoring hearing to over 200,000 individuals with sensorineural deafness worldwide. Hearing aids, by contrast, are relatively less effective in treating sensorineural hearing losses in the severe-to-profound range.

Components of the cochlear implant

The CI design consists of both external and internal components, as illustrated in Figure 1.7. An external microphone detects acoustic stimuli, which are converted into electrical signals by a speech processor and transmitted across the skin via a transcutaneous link connecting an external transmitting coil to an implanted receiving coil (Loizou, 1998; Wilson et al., 2004). In turn, the internal receiver decodes the radiofrequency signal and generates electrical current that is sent through an internal cable to the electrode array within the cochlea. CIs are surgically implanted 20-26 mm into the scala tympani of the cochlea (Figure 1.7) to bypass damaged hair cells (as well as the external and middle ears) and directly stimulate surviving afferent auditory nerve fibers with customized electrical pulses.


Figure 1.7. Internal and external components of the CI device. The external piece captures acoustic information and attempts to recreate the most perceptually relevant features using electrical pulses, which are then delivered to the auditory nerve by the internal piece of the device. Figure adapted from Papsin and Gordon (2007).

Animal studies with CIs

CIs can partially reverse some of the above-mentioned anatomical abnormalities induced by deafness. In deaf white cats, for example, CIs can electrically restore endbulb synapses in the cochlear nucleus, as well as inhibitory inputs to the MSO (O’Neil et al., 2010; Tirko et al., 2012). CI stimulation also increases cell size in both the anteroventral cochlear nucleus and SOC (Hartmann & Kral, 2004) and restores ITD tuning in the inferior colliculus (Snyder et al., 1991; Smith & Delgutte, 2007a).


Limitations of electrical stimulation

Speech processing strategies are continually evolving with efforts to maximize the amount of meaningful information extracted from incoming acoustic signals. The way in which auditory neurons respond to changes in different parameters of electrical stimulation must be taken into consideration during the design of CI stimulation schemes. Continuous interleaved sampling (CIS) and advanced combination encoder (ACE) speech processors first bandpass filter (BPF) auditory input (~200-8000 Hz) into 22 separate channels (Cochlear Corporation), each assigned to one electrode. The spread of electrical current and the number of surviving neurons unfortunately limit the effective number of independent frequency channels to only 6-8, in contrast to ~40 in normal listeners (Boex et al., 2003; Henry & Turner, 2003). Given that electrical current can spread from one electrode to ~7 others (van Hoesel, 2004; Hughes & Abbas, 2006), non-simultaneous stimulation of electrodes is used to maximize spatial separation. Considering that envelope cues from 4 channels are sufficient for speech perception in quiet and fine structure representation is limited by neural synchrony and refractoriness, stimulation schemes have been designed to extract the signal envelope (Shannon et al., 1995; Rubinstein & Hong, 2003). Envelope detection occurs at each channel, usually through rectification and low-pass filtering (Rect/LPF), as depicted in Figure 1.8 for a basic 4-electrode system (Loizou, 1998). Discarding fine structure information limits the ability of CI users to accurately perceive ITDs and music (Smith et al., 2002).
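The Rect/LPF stage can be sketched in a few lines. This toy version uses full-wave rectification followed by a moving-average low-pass filter in place of the proprietary filters in actual processors; all parameters (sampling rate, window length, carrier and modulation frequencies) are illustrative:

```python
import numpy as np

def extract_envelope(channel_signal, fs, window_s=0.004):
    """Toy Rect/LPF envelope detector: full-wave rectify, then smooth with a
    moving average whose length approximates a low-pass cutoff."""
    rectified = np.abs(channel_signal)        # full-wave rectification
    n = max(1, int(window_s * fs))            # smoothing window in samples
    kernel = np.ones(n) / n
    return np.convolve(rectified, kernel, mode="same")

# A 1 kHz carrier amplitude-modulated at 50 Hz: the detector should recover
# the slow 50 Hz modulation and discard the 1 kHz fine structure.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
modulator = 0.5 * (1 + np.sin(2 * np.pi * 50 * t))
signal = modulator * np.sin(2 * np.pi * 1000 * t)
envelope = extract_envelope(signal, fs)
```

The recovered envelope tracks the modulator while the carrier's fine structure is smoothed away, which is precisely why ITD and pitch cues carried in the fine structure are lost at this stage.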


Figure 1.8. Image from Loizou (1998) shows how CIs convert acoustic input into meaningful electrical signals. The process is complex and involves bandpass filtering and subsequent rect/LPF.

Coding of level with a CI

As in NH listeners, changes in loudness perceived by CI users are coded by the neural firing rate and spread of excitation. Higher current levels electrically recruit greater numbers of auditory neurons (Steel et al., in press). However, discharge firing rates of auditory neurons in response to electrical stimulation saturate more rapidly in the absence of normal basilar membrane compression, resulting in psychophysical dynamic ranges less than 20 dB and ranges as small as 2 dB for individual nerve fibers (Rubinstein, 2004). Therefore, the wide dynamic range of extracted envelope signals must undergo subsequent non-linear compression onto the considerably narrower range of electrical hearing.
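A common way to realize this compression is a logarithmic map from the acoustic envelope onto each electrode's narrow electrical range between threshold (T) and comfort (C) levels. The map below is a generic sketch with hypothetical clinical units; it is not the manufacturer's actual loudness-growth function:

```python
import math

def compress_to_electrical(envelope, t_level, c_level, acoustic_floor=1e-4):
    """Map a normalized acoustic envelope value (0..1) logarithmically onto
    the electrical dynamic range [t_level, c_level] (hypothetical units)."""
    x = max(acoustic_floor, min(1.0, envelope))
    # Fraction of the acoustic range covered, on a log (dB-like) scale.
    frac = math.log(x / acoustic_floor) / math.log(1.0 / acoustic_floor)
    return t_level + frac * (c_level - t_level)

# Envelope at the floor maps to threshold; a full-scale envelope maps to comfort.
print(compress_to_electrical(1e-4, 100, 200))  # 100.0
print(compress_to_electrical(1.0, 100, 200))   # 200.0
```

The logarithmic shape mimics the dB-like growth of loudness acoustically while respecting the roughly 10-20 dB electrical dynamic range noted above.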


Coding of frequency with a CI

To mimic cochlear tonotopy, envelopes from low frequency channels are used to modulate pulses delivered to apical electrodes, while envelope signals from high frequency channels modulate pulses sent to more basal electrodes. Although place coding with a CI is limited by several factors, such as the presence of dead regions in the cochlea (Moore et al., 2001), the absence of across-channel coincidence detection, and a limited number of electrodes, it is still the dominant frequency code utilized by CI listeners. In contrast to normal listeners, pitch perception by CI users only changes appreciably for pulse rates ranging from 50 to approximately 300 pulses per second (pps), in part due to neural adaptation (Zeng, 2002; Davids et al., 2008b). Nonetheless, stimulation rates greater than 1000 pps are implemented by current speech processing schemes in an attempt to mimic the stochastic response properties and better envelope sampling of auditory neurons in normal cochleae (Rubinstein et al., 1999b).

CI stimulation strategies

The ACE strategy is considered by many audiologists and physicians to be the preferred strategy for Nucleus 24 CIs (Cochlear Corporation; Wilson et al., 2004). The main difference between the CIS and ACE strategies is that the latter stimulates a subset of electrodes with the highest envelope peaks to further minimize current spread. Speech perception performance with the ACE stimulation scheme has been found to be superior to that with the spectral peak stimulation strategy (Pasanisi et al., 2002). All CI children in the present study used Nucleus 24 devices (ACE) with pre-curved arrays (24CS, CA, and RE) designed to limit the spread of excitation by stimulating the spiral ganglion with greater proximity to the modiolus (Cohen et al., 2003).
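The "n-of-m" peak selection that distinguishes ACE from CIS can be sketched as follows; the frame values and the choice of 8 maxima are illustrative, not clinical parameters:

```python
def select_maxima(channel_envelopes, n_maxima=8):
    """n-of-m selection as in ACE-style strategies: from the m channel
    envelopes in one analysis frame, keep only the n largest peaks."""
    ranked = sorted(range(len(channel_envelopes)),
                    key=lambda i: channel_envelopes[i], reverse=True)
    return sorted(ranked[:n_maxima])  # channel indices stimulated this frame

# 22 channels, of which only the 8 with the largest envelopes are stimulated.
frame = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6, 0.15, 0.5,
         0.12, 0.45, 0.11, 0.4, 0.09, 0.35, 0.08, 0.3, 0.07, 0.25,
         0.06, 0.2]
print(select_maxima(frame))  # -> [1, 3, 5, 7, 9, 11, 13, 15]
```

Dropping the low-envelope channels each frame reduces the number of simultaneous current sources and hence the overlap in excitation between electrodes.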


Unilateral implantation promotes auditory brainstem development

EABRs (electrically evoked auditory brainstem responses) can be recorded from children using CIs to measure auditory development and compare results to those obtained from the ABRs of NH peers. EABRs were previously recorded in 24/25 CI children who took part in the present study. Compared to ABR waves, EABRs have larger amplitudes and shorter latencies (wave eV latency ~4.0 ms; Shallop et al., 1990), due to greater neural synchrony, direct stimulation of the spiral ganglion, and potentially different neural generators.

While neural conduction along the auditory nerve increased in deaf children over their first year of life in the absence of auditory stimulation, unilateral CI use was necessary to promote activity-dependent auditory development in the rostral brainstem (Gordon et al., 2010a). Remarkably, development occurred at a similar rate to NH children (Gordon et al., 2006; Thai-Van et al., 2007; Eggermont & Salamy, 1988) and did not depend on the duration of deafness (Gordon et al., 2003). Neural conduction along the caudal auditory brainstem (eN1-eIII interwave latency differences) remained faster over the first year of use when evoked by apical versus basal stimulation, possibly due to different neural response properties or deafness etiologies (Gordon et al., 2007a; Propst et al., 2006). Unilateral CI use also promotes normal developmental changes and connections at the level of the cortex (Eggermont & Ponton, 2003; Jiwani et al., 2013), but development of the cortex is more negatively impacted by the duration of deafness (> 3.5 years; Ponton et al., 1996; Sharma et al., 2002) than the brainstem.

Functional benefits of unilateral implantation

The present study focuses on some of the limitations of CI use in children; first, however, the listening advantages are reviewed. Near-normal physiological development promoted by unilateral CI use translates into a wide variety of significant functional benefits. Deafness delays language acquisition and speech production and causes emotional problems and social maladjustment
(Pulsifer et al., 2003). In fact, most severe-to-profoundly deaf children aged 6-15 who used hearing aids had language skills below the average level of 3-year-old children with intact NH
(Geers & Moog, 1978).

In contrast, most deaf children who received CIs before the age of 5 achieved language skills (e.g., verbal reasoning and lexical diversity) that were comparable to age-matched NH peers (Geers et al., 2003). Benefits in speech recognition improve with earlier implantation (Rubinstein et al., 1999a; Cheng et al., 1999). Additionally, CI use improves speech production, speech intelligibility, and non-verbal cognitive functions (Kirk & Hill-Brown, 1985;
Miyamoto et al., 1996; Shin et al., 2007), and promotes temporal resolution (Muchnik et al.,
1994), emotion perception (Hopyan et al., 2011), and normal-like loudness growth, as depicted in Figure 1.9 (Steel et al., in press). Long-term follow-up studies indicate that many children with CIs achieve functional speech perception, speech production, and age-appropriate oral language (Uziel et al., 2007).


Figure 1.9. Average loudness judgments are shown as a function of stimulus intensity as a percentage of dynamic range. Unilateral CI use promotes loudness perception in adolescents that is not statistically different from NH peers. Loudness growth was assessed behaviourally on a continuous visual scale and pulses were delivered from an apical electrode to unilateral users. Figure courtesy of Steel and colleagues (in press).

While auditory brainstem development promoted by unilateral CI use did not depend on duration of deafness (Gordon et al., 2003), children with > 8 years of congenital deafness prior to receiving a CI performed more poorly on post-implant tests of speech perception than their peers with < 6 years of pre-implant deafness (Harrison et al., 2005). Pre-lingually deaf children implanted younger than age 3 typically attain the best speech perception skills (Kirk et al., 2002;
Manrique et al., 2004). Reorganization of the thalamocortical pathway and recruitment of the auditory cortex by non-auditory regions may limit post-implant speech perception performance
(Gordon et al., 2005a; Lee et al., 2001). Therefore, in order to maximize the functional benefits of CI use, intervention should occur as early as possible.


Limitations of monaural hearing

CI devices were traditionally provided only in one ear to minimize surgical complications and cost and leave the contralateral ear amenable to future and potentially superior interventions
(Papsin & Gordon, 2008). The numerous benefits afforded to deaf children by unilateral CI stimulation are accompanied by serious limitations (Lyxell et al., 2008), especially in the domain of binaural hearing, which is essential for accurate speech detection in noise and sound localization. Unilateral hearing delays speech and language development, resulting in educational problems and feelings of embarrassment and helplessness (Lieu, 2004; Giolas &
Wark, 1967). Individuals with monaural hearing impairments report difficulties understanding speech on their impaired side. This would be particularly deleterious for children, who regularly find themselves in difficult and complex listening environments.

1.2.3. Bilateral Cochlear Implantation

Functional benefits of bilateral implantation in adults

Bilateral implants were first provided to adults with bilateral severe-to-profound hearing loss. Over the past 10 years, bilateral implantation has improved speech detection in noise and sound localization in adult users (van Hoesel & Tyler, 2003; Litovsky et al., 2012). In the absence of the precise temporal synchronization (Joris et al., 1994) and tonotopic organization
(Goupell et al., 2013) required for normal ITD sensitivity (Klumpp & Eady, 1956), bilateral CI users are more heavily dependent on ILDs than their NH peers (Seeber & Fastl, 2008; Van Deun et al., 2009b). While adult users demonstrate sensitivity to ILDs as small as 0.1-0.2 dB (van
Hoesel & Tyler, 2003; van Hoesel, 2004) when stimulated directly through a research processor and approximately 1 dB (Laback et al., 2004; Grantham et al., 2008) when stimulated through
their own speech processors, ITD thresholds are not usually less than 100 µs (Lawson et al.,
1998; van Hoesel et al., 2002). Sensitivity to ITDs is even poorer when pulse rates exceed 300 pps (van Hoesel et al., 2002, 2009; van Hoesel, 2007), which is consistent with neural adaptation seen at the level of the auditory brainstem (Davids et al., 2008b) and midbrain (Smith &
Delgutte, 2007a). Despite the finding that auditory nerve fibers with low characteristic frequencies are better at transmitting the fine structure with electrical stimulation, these fibers are normally out of the range of CI electrode stimulation (Middlebrooks & Snyder, 2010). The deterioration of horizontal-plane localization when the stimuli were low-pass filtered, but not high-pass filtered, constitutes further evidence that ILDs are the dominant binaural cue for CI users (Grantham et al., 2007). When ILDs are held constant, sound localization is diminished
(Aronoff et al., 2010). Compared with adult users, children with CIs became deaf in childhood and therefore have considerably less, if any, pre-implant acoustic experience, which may make the perception of binaural cues more challenging following implantation.

Functional benefits of bilateral implantation in children

Whether in the classroom or on the playground, children must localize and identify multiple sound sources in challenging or noisy listening environments. Receiving a second CI device (Figure 1.10) was associated with an increase in quality of life that continued to improve with greater bilateral CI experience, in contrast to unilaterally implanted peers (Sparreboom et al., 2012). In a recent questionnaire, 86% of parents found the second implant to be beneficial for their children (Galvin et al.,
2014).


Figure 1.10. X-ray image of a bilaterally implanted child using different CI device generations (straight array and contour array). All children in the CI group tested in the present study were implanted with contour arrays. The positions of the external and internal components of the CI device are shown relative to the child’s head (Gordon et al., 2012).

Benefits of bilateral implantation in children are similar to those in adults and include improved speech detection in noise (Mok et al., 2007; Van Deun et al., 2009a), speech perception in noise (Litovsky et al., 2004; Galvin et al., 2007; Van Deun et al., 2010b) and sound localization (Litovsky et al., 2006; Grieco-Calub et al., 2008). These improvements were greater when the first CI was provided early (before the age of 2; Van Deun et al., 2010a) and when the second implant was provided less than 2 years after the first (Gordon & Papsin, 2009; Chadha et al., 2011; Galvin et al., 2014). However, even with long interimplant delays, some bilaterally implanted children can achieve sound localization (Grieco-Calub & Litovsky, 2010) and lateralization (Salloum et al., 2010).

A subset of children who participated in the present study previously completed a lateralization task (unpublished data). A large proportion (21/29) of simultaneously implanted children lateralized sounds on the basis of ITD cues, though responses were not as consistent as they were for NH counterparts. Unexpectedly, the majority (11/17) of sequentially implanted children were also able to detect changes in ITDs. Relative to those who received their devices
sequentially and could not detect ITDs, these children had shorter interimplant delays (p < 0.05;
3.33 ± 0.31 vs. 4.40 ± 1.19 years (mean ± SD)) and greater bilateral CI experience (5.66 ± 1.63 vs. 4.83 ± 0.83 years).

The finding that binaural difference amplitudes were largest in the absence of mismatched place of stimulation or perceived lateralization suggests that integration of binaural input may be occurring to some degree in children with bilateral CIs (Gordon et al., 2012). The symmetric development promoted by simultaneous implantation (Gordon et al., 2008, 2010b,
2013b) may underlie enhanced speech detection in noise and sound localization abilities, relative to sequentially implanted peers (Chadha et al., 2011). In fact, abnormal cortical lateralization driven by unilateral CI use was associated with poorer speech perception (Gordon et al., 2013b).

Functional limitations of bilateral implantation in children

While bilateral implantation in children provides numerous listening advantages, deviations from normal still remain. Binaural benefits associated with bilateral implantation are limited by many factors, such as electrical stimulation, speech processing schemes, peripheral and central neural degeneration, and abnormal development. Minimum audible angles were larger and more variable than in NH listeners (~20-40°; Litovsky et al., 2006), loudness growth and speech detection benefits were asymmetric (Chadha et al., 2011), and ITD detection was diminished (Figure 1.11; Salloum et al., 2010). Furthermore, binaural summation was larger for the second device and spatial unmasking was significantly better when noise was moved to CI-2 in sequential users (Chadha et al., 2011).

The finding that sequentially implanted children reported hearing from both devices simultaneously rather than one fused auditory image, as shown in Figure 1.11, most clearly suggests abnormal perception of bilateral CI input (Salloum et al., 2010). Thus, binaural fusion
must be assessed more systematically in order to determine whether children with bilateral CIs can achieve true binaural processing (i.e., interaural comparisons required for squelch or ITD detection) or instead shift their attention to the device with the better signal-to-noise ratio.

Figure 1.11. Mean responses from children to ITDs on a four-alternative forced-choice lateralization task (Salloum et al., 2010). Sequentially implanted children with a long delay between implantation dates were not able to detect changes in ITDs and perceived sound as coming from both devices more frequently than from the middle of their heads. NH children changed their responses from left to middle to right as ITDs changed from left leading to no ITD to right leading.

Chapter 2

Binaural Fusion and Listening Effort

This chapter builds upon the previous chapter and summarizes the literature most relevant to the central study questions concerning perceptual integration or fusion of binaural input and listening effort. The chapter begins with the notion of integration at physiological and perceptual levels in both NH listeners and CI users. The concept of listening effort is then introduced and
pupillary physiology is described briefly, along with several studies that used pupillometry to measure mental effort. Subsequently, methods used to assess binaural fusion (behavioural task) and mental effort (pupillometry) are outlined, along with results and interpretations.

2.1. Background

2.1.1. Binaural Integration with Normal Hearing

Binaural fusion was the primary outcome measure of the present study and was defined as the ability to perceptually discriminate single auditory images from paired images when presented with a sound stimulus in each ear. The integration of binaural input via coincidence counters in the SOC (Colburn, 1973, 1977) and/or interaural cross-correlation at multiple levels in the system (Sayers & Cherry, 1957; Stern et al., 1988) may underlie the perception of a fused auditory image. Regardless of the specific mechanism, large mismatches in interaural frequency reduce binaural fusion in NH listeners (Goupell et al., 2013).

Physiology of binaural integration

Integration has been studied at many of the levels in the ascending auditory pathway that have been discussed so far. In general, similar enough areas in the cochleae must first be stimulated (Zhou & Durrant, 2003; Goupell et al., 2013) and at similar times (< 1 ms; Jeffress,
1948; Colburn, 1977) in order for binaural integration to be possible at higher levels.

Physiological integration can be represented at the level of the auditory brainstem by large binaural difference amplitudes when interaural mismatches are minimal (Furst et al., 1985;
McPherson & Starr, 1995). Models of temporal integration in the medial geniculate body of the thalamus posit that two sequential stimuli are perceived as one (i.e., fused) when OFF cells are
hyperpolarized, because the onset of the second stimulus occurred within the latency period of the OFF response to the first stimulus (Robin & Royer, 1987).

Higher-level cortical activity is also believed to reflect integration (Fiedler et al., 2011).

When cortical activity was recorded with electroencephalography during a passive oddball task, the mismatch negativity waveform was only non-significant for stimuli presented dichotically and with small interaural frequency differences. The mismatch negativity is derived by calculating the difference between event-related brain potentials evoked by standard and deviant stimuli and serves as an indication of deviance detection. The finding that the mismatch negativity is non-significant for similar dichotic stimuli is consistent with single-unit recordings from the cat auditory cortex, which demonstrate that cortical activity remains constant in the presence of dichotic stimuli registered as a single fused image (Mickey & Middlebrooks, 2001).

Matched dichotic stimuli, or those with small interaural differences, thus appear to be integrated throughout the ascending auditory pathway.

Measuring temporal resolution and fusion

Behaviourally, binaural integration has been measured as a form of auditory temporal resolution or, more precisely, auditory fusion. Temporal resolution underlies many auditory and auditory-language processes (Chermak & Lee, 2005) and has been quantified with the use of various psychophysical measures, including: flutter-fusion (Miller & Taylor, 1948), binaural masking level differences (Licklider, 1948), temporal order (Hirsch, 1959), and gap detection
(Plomp, 1964). McKay and colleagues (2012) recently modeled central auditory processing of temporal information as the weighting and summation of peripheral neural impulses in a sliding temporal window.


As it stands, the clinical tests most commonly used to assess temporal resolution are the Auditory Fusion Test-Revised, the Random Gap Detection Test for Tones/Clicks, the Gaps-In-Noise test for the Right/Left ear, and the Binaural Fusion Test (BFT; Chermak & Lee, 2005). For the Auditory Fusion Test, the threshold is the average of the last ascending interpulse interval and the first two consecutive descending interpulse intervals perceived as fused; gap detection determines the smallest interval required for the detection of two separate tones/clicks; Gaps-In-Noise uses constant white noise stimuli; and the BFT utilizes dichotic pairs of noise bursts presented in a randomized fashion. Because dichotic stimuli have been used to assess binaural integration objectively (McPherson & Starr, 1995; Mickey & Middlebrooks, 2001; Fiedler et al., 2011) and require binaural versus monaural processing, psychophysical tests that use dichotic stimuli to assess fusion (e.g., BFT) were considered to be most relevant for the present study. For dichotic stimuli, interaural stimulus intervals (ITDs) greater than 1 ms have been found to decrease fusion in NH children (Chermak et al., 2005). Fusion thresholds increase under conditions of precedence (reverberant environments) and can be as large as 50 ms, but have also been measured at 1 ms (Litovsky et al., 1999). Auditory stream segregation, on the other hand, involves fusion of auditory input over even longer durations (i.e., several seconds). Binaural fusion has not yet been measured in children using bilateral CIs and is likely to be disrupted in these individuals for several reasons, which receive further elaboration in the following section.
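As an illustration of the Auditory Fusion Test's threshold rule (averaging the last ascending interval perceived as fused with the first two consecutive descending intervals perceived as fused), a minimal sketch in Python; the function name and input format are hypothetical, not part of any clinical software:

```python
def aft_fusion_threshold(ascending_fused_ms, descending_fused_ms):
    """Fusion threshold per the Auditory Fusion Test rule: the mean of the
    last ascending interpulse interval perceived as fused and the first two
    consecutive descending intervals perceived as fused.
    Inputs are interpulse intervals (ms), in presentation order, that the
    listener judged to be a single (fused) sound. Illustrative only."""
    last_ascending = ascending_fused_ms[-1]
    first_two_descending = descending_fused_ms[:2]
    return (last_ascending + sum(first_two_descending)) / 3.0

# Hypothetical run: fused up to 10 ms ascending; fused at 12 and 10 ms descending
threshold_ms = aft_fusion_threshold([2, 5, 10], [12, 10])
```

Averaging across the ascending and descending runs in this way reduces the influence of response bias at the fusion boundary.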

2.1.2. Binaural Fusion with Cochlear Implants

Binaural fusion may be compromised in children using bilateral CIs for a number of reasons, such as the use of different devices implanted in different positions on the basilar
membrane, the use of electrical stimulation to convey auditory information, and the degree and length of auditory deprivation.

Many sequentially implanted children use different device generations and those who received only one Nucleus Freedom (24RE) device had significantly larger mismatches between
EABRs evoked by their first versus second CIs (Salloum et al., 2010). Bilaterally implanted children may also be using CIs that were inserted to unequal depths and/or are stimulating different populations of surviving nerve fibers. The depth of insertion affects word recognition scores (Finley et al., 2008) and mismatches in interaural place of stimulation or interaural/implant place differences (IPDs) degrade ITD sensitivity (Long et al., 2003; Poon et al., 2009) and binaural difference amplitudes (Gordon et al., 2012). Kan and colleagues (2013) recently showed that ITD sensitivity deteriorates for IPDs larger than 4 electrodes (3 mm in the cochlea), unlike ILD sensitivity, which remains relatively independent of IPD or overlapping areas of excitation between the ears (Hartmann & Constan, 2002). Even when place mismatches are eliminated, CI users have restricted access to fine-grained temporal cues with current processing schemes (Rubinstein, 2004; van Hoesel, 2007). We thus hypothesized that binaural fusion would be best when binaural stimuli are not balanced in level.

Differences in the severity of hearing loss (or residual hearing) between the ears may further disrupt binaural processing, as better residual hearing has been associated with higher comfortably loud levels and better speech perception scores (D’Elia et al., 2012; Gordon et al.,
2001).

Evidence from Dr. Gordon’s laboratory suggests that abnormal reorganization throughout the auditory system may be a key factor contributing to shortcomings in binaural hearing and fusion with bilateral CIs (Gordon et al., 2013b). A sensitive period of about 1.5 years of
unilateral implant use exists, beyond which inhibition (e.g., glycinergic; Brand et al., 2002) from the side contralateral to CI-1 may be lost. More than 2 years of unilateral implant use causes prolonged binaural difference potential latencies and wave eV latencies evoked by CI-2 versus
CI-1 that persist for at least 1.5 years of bilateral CI use (Gordon et al., 2007b, 2008, 2012).

These longer latencies may reflect decreased myelination, slower neural conduction, weaker synapses, and/or less synchronous activity (Gordon et al., 2013a). Similarly, at the level of the cortex, long durations of unilateral stimulation (> 1.5 years) drive abnormal strengthening of pathways to the auditory cortex contralateral to the first implant that are not reversed by 3-4 years of bilateral CI use (Gordon et al., 2013b). Abnormal cortical activity evoked by long durations of unilateral CI stimulation may also result in increased mental effort (Jiwani et al.,
2013).

2.1.3. Mental Effort with Cochlear Implants

Nobel Laureate Daniel Kahneman (2003) described two modes of thought: System 1, which subserves automated and effortless cognition, such as simple addition, and System 2, which becomes activated to facilitate more deliberate and effortful decision-making, such as that which is involved in completing working memory tasks (Baddeley, 2012), comparing the pitch between two similar pure tones (Kahneman & Beatty, 1967), or attempting to decode degraded input with a CI (Stenfelt & Ronnberg, 2009; Baskent, 2012). Anecdotally, many parents comment that their children (who use CIs) return from school each day feeling frustrated and exhausted from having to expend considerable focus and concentration when listening in class, relative to NH peers. Not surprisingly, greater listening effort has been documented in school- aged children with hearing loss (Hicks & Tharpe, 2002).


Children using one CI and individuals listening to CI simulations have longer reaction times (RTs) than NH peers (Burkholder & Pisoni, 2003; Lyxell et al., 2008; Hughes & Galvin,
2013; Pals et al., 2013). While RTs decrease in NH listeners as they reach adulthood (Casey et al., 2002, 2005), it is not clear whether RTs decrease to the same extent in listeners with CIs.

Longer RTs in CI users may, in part, reflect disrupted/compensatory frontal activity caused by widespread cortical reorganization due to sensory deprivation (Shepherd & Hardie, 2001; Lee et al., 2001; Gordon et al., 2013b). In particular, children with unilateral CIs and more than 10 years of auditory experience had abnormally large P2 peaks (between 100-200 ms post-stimulus) in their cortical waveforms (Jiwani et al., 2013). Larger P2 amplitudes may indicate that listening was more cognitively demanding (Tremblay et al., 2009), required multisensory integration
(Kraus & Chandrasekaran, 2010), and/or simply involved the reticular activating system (Ponton et al., 2000; Eggermont & Ponton, 2003). Increased frontal activation reflecting greater effort has been observed in CI users when listening to speech and music (Naito et al., 2000; Limb et al.,
2010). Therefore, while children using bilateral CIs are able to achieve near-normal perception in some cases, in order to do so, they may need to recruit greater cognitive resources and rely more heavily upon System 2.

Although some evidence suggests that bilateral implantation may reduce listening effort
(Hughes & Galvin, 2013), compared with unilateral implantation, listening effort has not yet been measured objectively in a large number of children with bilateral CIs. We hypothesized that children using bilateral CIs would expend additional mental effort, relative to NH peers, in an attempt to overcome device limitations and developmental abnormalities.


2.1.4. Pupillometry

Tursky and colleagues (1969) found that changes in pupil diameter appear to be the most sensitive and reliable objective measure of mental effort. Changes in pupil diameter were more tightly associated with mental task difficulty than heart rate or skin conductance. Subjective measures of listening effort are less reliable than objective measures, because they are unavoidably affected by many factors, such as individual working memory capacity (Rudner et al., 2012; Baddeley, 2012; Picou et al., 2013), and are more prone to error in younger participants.

Anatomy and physiology of the pupil response

Darwin (1965) wrote that fear may trigger increases in pupil dilation; however, even before this suggestion, many considered the pupils to be windows to the soul. The pupil is the opening through which light enters the eye at the center of the iris (Andreassi, 2007, Chapter 12).

Two smooth muscles in the iris control the size of the pupil: the sphincter pupillae, also known as pupillary constrictor muscles or circular fibers, and the dilator pupillae, also referred to as pupillary dilator muscles or radial fibers. These muscles mainly mediate the pupillary light reflex, but their activity has also been found to reflect emotional arousal, such as in response to unexpected gun firing, and the cognitive load generated by various tasks, such as mental multiplication. The pupillary light reflex can be overshadowed by emotional and cognitive responses.

Pupillary constriction and dilation are largely due to autonomic regulation of the circular and radial fibers (Andreassi, 2007, Chapter 12). Parasympathetic fibers project from the Edinger-Westphal complex in the midbrain via the oculomotor nerve to the ciliary ganglion and subsequently to the circular fibers to cause constriction. As a complementary process, sympathetic fibers travel from posterior hypothalamic nuclei to the superior cervical ganglion via the upper spinal cord and finally to radial fibers to cause dilation. Inhibition of the Edinger-Westphal complex also causes pupil dilation through relaxation of the circular fibers (Steinhauer et al., 2004). Pupil dilation is associated with activity in the superior temporal gyrus, anterior cingulate cortex, and several frontal cortical areas (Zekveld et al., in press).

The pupil response as a measure of mental effort

The relationship between pupil diameter and mental effort has been well-documented for several decades (Darwin, 1965; Hess et al., 1964; Hamel, 1974; Privitera et al., 2010; Takeuchi et al., 2011; Papesh et al., 2012). Beatty (1982b) concluded that task-evoked pupillary responses provide a reliable and sensitive indication of mental effort within tasks, across tasks, and across individuals. Pupillary changes reflect net mental activity: the recruitment of greater mental resources translates into increases in pupil size. On cognitively demanding tasks, peak dilations as large as 20% of baseline responses typically occur 1-2 seconds after stimulus onset.

Pupillometry has also been used to evaluate listening effort (Kahneman & Beatty, 1967; Beatty,
1982a); for example, peak dilation and latency increase with decreasing speech intelligibility
(Zekveld et al., 2010). Therefore, changes in pupil diameter may be used to quantify the potentially elevated listening effort in children using bilateral CIs.
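The task-evoked pupillary response described above is typically quantified as peak dilation relative to a pre-stimulus baseline. The following is a minimal sketch of such a computation (our illustration; the function, its parameter names, and the example trace are hypothetical, not the analysis code used in this study):

```python
def peak_dilation_percent(trace_mm, fs_hz, onset_s, baseline_s=1.0, window_s=3.0):
    """Baseline-corrected peak pupil dilation as a percentage of baseline.
    trace_mm: pupil diameter samples (mm); fs_hz: sampling rate (Hz);
    onset_s: stimulus onset time (s). Baseline is averaged over baseline_s
    before onset; the peak is searched within window_s after onset."""
    onset = int(onset_s * fs_hz)
    base = trace_mm[max(0, onset - int(baseline_s * fs_hz)):onset]
    baseline = sum(base) / len(base)
    window = trace_mm[onset:onset + int(window_s * fs_hz)]
    return 100.0 * (max(window) - baseline) / baseline

# Hypothetical 10 Hz trace: baseline 4.0 mm, brief peak of 4.8 mm after onset
trace = [4.0] * 10 + [4.8] + [4.0] * 20
dilation = peak_dilation_percent(trace, fs_hz=10, onset_s=1.0)
```

With real eye-tracker data, blink interpolation and smoothing would precede this step; the sketch shows only the baseline-correction logic.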


2.1.5. Summary

Binaural hearing is fundamental to the survival of many species and requires precise neural synchronization and integration of level, timing, and frequency information throughout the auditory pathway. Through changes in relative firing rates, phase locking, across-channel coincidence detection, and contralateral inhibition, normal hearing listeners can understand speech in noisy environments, enjoy musical symphonies, and accurately localize sound sources.

While only a small subset of auditory neurons are needed for binaural processing, sensorineural hearing loss causes demyelination, decreased cell size, slower neural conduction, and reduced sensitivity to binaural cues. Cochlear implants electrically restore binaural sensitivity, speech perception, and sound localization, protect against abnormal neural reorganization, and improve quality of life. Despite these substantial benefits, cochlear implant users are limited in their ability to integrate fine timing information and discriminate frequency changes, which translates into greater listening effort. The use of electrical current that is not well-synchronized across the ears to stimulate a smaller number of surviving auditory neurons reduces listening advantages provided by bilateral cochlear implants and potentially compromises binaural fusion in bilaterally implanted children.

This review guided our choice of randomized/constant dichotic stimuli (Chermak & Lee,
2005) for the binaural fusion task and pupillometry (Zekveld et al., 2010) to assess listening effort simultaneously.


2.2. Methods

2.2.1. Participants

Forty-nine children participated in a binaural fusion task: 25 had deafness (mean age =
11.40 ± 3.49 years) and received bilateral Nucleus 24-channel CIs (Cochlear Corporation) and
24 were age-matched and had NH (mean age = 12.06 ± 3.17 years; t(47) = 0.69, p = 0.50), with pure tone audiometric thresholds confirmed to be ≤ 20 dB HL at 250, 500, 1000, 2000, and 4000
Hz. All participants reported having normal or corrected-to-normal vision, had no known visual or developmental deficits, and were screened for visual acuity sufficient to distinguish the details necessary for performing the task without wearing eyeglasses.

All child CI users were recruited from the Cochlear Implant Program at the Hospital for
Sick Children in Toronto (Table 1) and had bilateral severe-to-profound sensorineural hearing loss that occurred in childhood; hearing loss was progressive in 7 children. Eight children had a period of usable residual hearing (aided or unaided thresholds ≤ 40 dB HL at any two given frequencies) prior to implantation. Duration of time-in-sound (Jiwani et al., 2013) was calculated as the sum of the duration of CI experience and pre-implant residual hearing (time-in-sound =
8.97 ± 2.96 years; bilateral CI experience = 4.77 ± 2.56 years).

Thirteen children previously participated in a behavioural lateralization task (SPEAR processor, CRC-HEAR, Melbourne, Australia) during which they were asked to indicate whether they perceived bilateral input as coming from the left or right; 10 trials of ITDs of 0, ±0.4, and
±1 ms were presented (unpublished data).


TABLE 1. CI Participant Demographic Information for the Binaural Fusion Task.

Child | Etiology | Age at CI1 (years) | CI1 Ear | CI1 Device | Age at CI2 (years) | CI2 Device | Interimplant Delay (years) | Age at Test (years) | Bilateral CI Experience (years)
CI1 | Unknown | 5.57 | R | 24RE | 7.11 | 24RE | 1.54 | 12.14 | 4.91
CI3 | Connexin26 | 2.27 | L | 24CA | 5.58 | 24RE | 3.31 | 12.16 | 6.50
CI4 | Usher | 1.12 | L | 24CS | 4.90 | 24RE | 3.77 | 11.80 | 6.85
CI5 | Usher | 0.73 | R | 24RE | 1.62 | 24RE | 0.90 | 9.28 | 7.59
CI6 | Unknown | 4.96 | L | 24CS | 15.40 | 24RE | 10.44 | 17.95 | 2.50
CI7 | Unknown | 1.88 | R | 24CA | 4.82 | 24RE | 2.94 | 10.96 | 6.07
CI8 | Unknown | 2.92 | R | 24RE | 14.15 | 24RE | 11.23 | 17.97 | 3.76
CI9 | Unknown | 5.03 | L | 24RE | 9.83 | 24RE | 4.81 | 10.40 | 0.45
CI10 | Pendred | 6.23 | R | 24RE | 11.28 | 24RE | 5.05 | 11.87 | 0.45
CI11 | Connexin26 | 1.52 | R | 24CS | 10.88 | 24RE | 9.36 | 11.46 | 0.48
CI12 | Unknown | 6.41 | R | 24RE | 10.02 | 24RE | 3.61 | 14.61 | 0.48
CI13 | Unknown | 4.52 | R | 24CS | 17.08 | 24RE | 12.57 | 17.97 | 0.80
CI14 | Usher | 1.26 | R | 24CA | 2.22 | 24RE | 0.96 | 10.92 | 8.62
CI16 | Unknown | 0.87 | Both | 24RE | 0.87 | 24RE | 0 | 7.08 | 6.12
CI17 | Unknown | 4.05 | Both | 24RE | 4.05 | 24RE | 0 | 10.59 | 6.47
CI18 | Unknown | 1.28 | Both | 24RE | 1.28 | 24RE | 0 | 8.20 | 6.82
CI19 | Pendred | 3.08 | Both | 24RE | 3.08 | 24RE | 0 | 7.01 | 3.85
CI21 | Connexin26 | 3.36 | Both | 24RE | 3.36 | 24RE | 0 | 9.88 | 6.44
CI22 | Ototoxicity | 12.15 | Both | 24RE | 12.15 | 24RE | 0 | 16.97 | 4.77
CI23 | Connexin26 | 0.79 | Both | 24RE | 0.79 | 24RE | 0 | 5.95 | 5.08
CI25 | Connexin26 | 0.95 | Both | 24CA | 0.95 | 24CA | 0 | 9.45 | 8.40
CI26 | Unknown | 0.99 | Both | 24RE | 0.99 | 24RE | 0 | 7.21 | 6.14
CI27 | Unknown | 3.16 | Both | 24RE | 3.16 | 24RE | 0 | 9.33 | 6.09
CI28 | Ototoxicity | 5.55 | Both | 24RE | 5.55 | 24RE | 0 | 9.97 | 4.33
CI29 | Unknown | 8.44 | Both | 24RE | 8.44 | 24RE | 0 | 13.87 | 5.31

Data are provided for each CI user who participated in the binaural fusion task (n=25), including etiology, age at implantation, interimplant delay, age at test, and bilateral CI experience.

Anatomy and etiology of deafness

High resolution computed tomography scans confirmed normal cochlear anatomy in all but 3 children: child CI19 had a Mondini malformation (incomplete partition type II), child CI22 had an enlarged left vestibular aqueduct, and child CI27 presented with an enlarged vestibular aqueduct on the right side. Five children had GJB2 gene mutations causing deficiencies in Connexin 26 gap junction protein (Propst et al., 2006), while smaller subsets had Usher Syndrome (n=3), Pendred Syndrome (n=2), and received ototoxic medications at a young age (n=2). The etiology of deafness was unknown in the remaining 13 children. There were no known cognitive issues in any of the children who participated.

CI devices and age at implantation

Children CI1-14 received their first devices at 3.42 ± 2.09 years of age and were provided with second devices after 5.74 ± 4.06 years of unilateral CI stimulation, whereas children CI16-29 received their implants simultaneously at 3.72 ± 3.51 years of age. Children received different device generations (Nucleus 24CA, CS, or RE; Cochlear Corporation) with different current conversions (see below) depending on when they were implanted. Children with better low frequency residual hearing tended to be implanted at later ages, as shown in Figure 2.1 by the negative correlation between the age at which the first device was provided and pre-implant aided thresholds at 250 Hz (R = -0.88, p < 0.0001). Outlier CI22, the only child to be implanted after age 10, was excluded from this analysis. This relationship is consistent with previous findings (Hopyan et al., 2012) and was also present at 500 Hz (R = -0.65, p = 0.006), but not 1000 Hz (R = -0.37, p = 0.21), 2000 Hz (R = -0.27, p = 0.40), or 4000 Hz (R = 0.40, p = 0.26).

Residual hearing is typically best at more apical regions (Nadol, 1997).


Figure 2.1. Children who were implanted at later ages (n=15) had better residual hearing (i.e., lower aided thresholds) when assessed with standard audiometric testing prior to implantation (R = -0.88, p < 0.0001).

2.2.2. Stimuli and Equipment

For the binaural fusion task, acoustic click-trains for normal listeners were presented via Matlab Version 2007b (MathWorks, Inc., Natick, Massachusetts, USA) on a Dell Vostro 1510 laptop computer and an AKAI EIE Professional soundcard (96 kHz sampling rate). Click-trains were presented through insert earphones at 250 Hz for 36 ms, once per second. For the CI group, biphasic electrical pulses were delivered via Matlab Version 2012a on a Lenovo ThinkPad Edge E420 laptop computer and a Nucleus Implant Communicator Version 2.1 system (Figure 2.2) to electrodes 20, 16, and 9 at 250 pps for 36 ms, once per second. Acoustic click-trains have previously been considered suitable approximations to biphasic electrical pulses (e.g., Salloum et al., 2010). A 250 pps rate was used because it is the slowest speech processing rate available in Nucleus devices and because neural adaptation occurs at higher rates (van Hoesel, 2007; Davids et al., 2008b). Short durations of 36 ms were used to obtain the lowest behavioural thresholds (Davids et al., 2008a). All children were tested in a quiet satellite room near the main laboratory at Archie's CI Laboratory at the Hospital for Sick Children, and only the experimenter and parents were present during testing.


Figure 2.2 Schematic (Figure 2.2a) and labelled diagram (Figure 2.2b) of the experimental set-up for the binaural fusion task. The testing computer, 2 L34 speech processors, and 2 pods are highlighted in red. Children were asked to indicate whether they heard one or two sounds by clicking on the one or two blue circles shown on the computer monitor as quickly as possible.

2.2.3. Binaural Fusion Task

Monaural sensitivity thresholds (T) were first determined using a bracketing procedure (Levitt, 1970). Electrode 20 was used for the CI group to obtain lower hearing thresholds than those obtained with basal electrodes (Propst et al., 2006; Gordon et al., 2007a). Binaural stimuli were then presented at levels 40 dB SPL greater than threshold for normal listeners and, for CI users, at levels 10 clinical units (CU) below the maximum comfort levels which evoked the most similar wave eV amplitudes between the sides (Salloum et al., 2010). Balanced levels were obtained by determining the combination of binaural stimuli that did not result in perceived lateralization to either side (Figure 2.3a). Places of stimulation were not determined using subjective pitch matching (Litovsky et al., 2010, 2012), because this method would have been less reliable in some of the younger children tested and because spread of electrical current limits the extent to which place mismatches can disrupt binaural processing (van Hoesel, 2004; Hughes & Abbas, 2006; Gordon et al., 2012). Kan and colleagues (2013) concluded that precise pitch matching may thus be unnecessary.
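The bracketing logic can be sketched as a simple up-down staircase in the spirit of Levitt (1970); the starting level, step sizes, and stopping rule below are illustrative assumptions, not the thesis's clinical parameters, and `hears` stands in for the child's report.

```python
def bracket_threshold(hears, start=180, down=10, up=5, reversals_needed=4):
    """Bracketing sketch: lower the level while the listener reports hearing
    the stimulus, raise it while they do not, and average the levels at which
    the response reversed. `hears(level)` returns True when detected."""
    level, direction = start, -1   # assume the starting level is audible
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        heard = hears(level)
        new_dir = -1 if heard else 1
        if new_dir != direction:   # response flipped: record a reversal
            reversal_levels.append(level)
            direction = new_dir
        level += -down if heard else up
    return sum(reversal_levels) / len(reversal_levels)
```

With a simulated listener who detects levels at or above 150 CU, the averaged reversals land near that boundary.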

Levels were balanced for 5 different electrode combinations for CI users {CI-2,CI-1}: {e20,e20}, {e20,e16}, {e16,e20}, {e20,e9}, and {e9,e20}. Children were subsequently trained to associate the perception of 1 solid sound with unilaterally presented stimuli and the perception of 2 separate sounds with binaural stimuli presented at large ITDs (±24 ms). These control stimuli were used to verify the reliability of responses. Following the training session, children who responded with < 70% accuracy (Salloum et al., 2010) to either the unilateral presentations or the large ITDs were assumed not to have understood the task, and their results were therefore excluded from averaging.

ILD, ITD, and IPD test trials were interleaved and presented in a randomized and automated fashion (method of constant stimuli). Control trials were included in the randomization to gauge participant engagement throughout the task, but were excluded from data analyses. Stimuli varied in terms of interaural level and timing for all children: 7 conditions were presented at ITD = 0 ms (and electrode 20 for CI users) and varied only in terms of level differences ({T+10,0}, {T+10,T}, {T+10,T+10}, {T+10,T+20}, {0,T+10}, {T,T+10}, {T+20,T+10}; Figure 2.3b); 9 conditions were presented with balanced levels (at electrode 20 for CI users) and varied only in terms of ITD (24 ms, 2 ms, 1 ms, 0.4 ms, 0 ms, -0.4 ms, -1 ms, -2 ms, -24 ms; Figure 2.3c).


Figure 2.3. a) Illustration of the range over which levels (CU) were typically balanced. Schematic diagrams of all conditions presented: b) binaural input with ILDs, c) binaural input with ITDs, d) binaural input with IPDs. Levels presented to CI-1 or the right ear are shown in red, whereas levels presented to CI-2 or the left ear are indicated in blue. For ILD-containing conditions, higher levels are represented by larger circles.

As shown in Figure 2.3d, an additional 4 IPD conditions varying only in terms of place of stimulation were presented to CI participants ({e20,e16}, {e16,e20}, {e9,e20}, {e20,e9}), to investigate the effects of place mismatches on fusion (Gordon et al., 2012; Kan et al., 2013).

Accordingly, 8 trials comprised each condition for CI users, while 10 trials were presented per condition for their NH peers, resulting in a total of 160 automated trials per participant.
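The interleaving arithmetic above (16 conditions × 10 trials for NH listeners; 20 conditions × 8 trials for CI users; both totalling 160) can be sketched as follows. The condition labels mirror the thesis's notation, but the helper name and seeding are our own illustrative choices.

```python
import random

# Condition labels following the thesis's {CI-2,CI-1} notation.
ILD_CONDITIONS = ["{T+10,0}", "{T+10,T}", "{T+10,T+10}", "{T+10,T+20}",
                  "{0,T+10}", "{T,T+10}", "{T+20,T+10}"]             # 7 conditions
ITD_CONDITIONS = [24, 2, 1, 0.4, 0, -0.4, -1, -2, -24]                # 9 conditions (ms)
IPD_CONDITIONS = ["{e20,e16}", "{e16,e20}", "{e20,e9}", "{e9,e20}"]   # 4 conditions, CI only

def build_trial_list(group, seed=None):
    """Interleave all conditions (method of constant stimuli) into one
    randomized 160-trial list: 10 trials per condition for the NH group
    (16 conditions), 8 per condition for CI users (20 conditions)."""
    conditions = ILD_CONDITIONS + ITD_CONDITIONS
    repeats = 10
    if group == "CI":
        conditions = conditions + IPD_CONDITIONS
        repeats = 8
    trials = [c for c in conditions for _ in range(repeats)]
    random.Random(seed).shuffle(trials)
    return trials
```

Either call yields 160 trials, with IPD conditions appearing only in the CI list.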

Stimulus intensities were presented in CU during the fusion task to reflect the clinical environment, in which CU, instead of μA, is utilized for CI MAP programming. Units were subsequently converted to dB for the purpose of comparison to NH children and to adjust for differences in current across different device generations. While a given dB change spans a larger portion of the dynamic range for CI users than for NH peers, we are presently unaware of a more suitable means of comparison between units of stimulation. However, as shown later in the Results section and confirmed by previous investigators (e.g., Furst et al., 1985), fusion does not change as a function of ILD in NH listeners. Furthermore, unpublished data indicate that changes in ILD lateralization in children with bilateral CIs and NH are similar regardless of the scale of stimulation (CU in CI users and dB SPL in peers with normal hearing). Units were defined by these formulae (the factor 10 was used because current is a measure of intensity rather than pressure; Moore, 2008, Chapter 1):

dB = 10 log (current in µA / 100 µA reference), where µA = 10 × 175^(CU/255) for 24CS/CA devices and µA = 17.5 × 100^(CU/255) for 24RE devices.
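These conversions can be written as a short helper; the device formulas follow the text above, while the function names and the worked example are ours. On this scale, a 10 CU difference on a 24RE device corresponds to roughly 0.78 dB, consistent with the magnitudes of the dB values in Table 2.

```python
import math

def cu_to_microamps(cu, device):
    """Convert clinical units (CU) to stimulation current in microamps,
    using the device-generation formulas quoted in the text."""
    if device in ("24CS", "24CA"):
        return 10.0 * 175.0 ** (cu / 255.0)
    if device == "24RE":
        return 17.5 * 100.0 ** (cu / 255.0)
    raise ValueError(f"unknown device generation: {device}")

def cu_to_db(cu, device, reference_ua=100.0):
    """Express a CU level in dB re 100 uA. The factor 10 (not 20) reflects
    that current is treated as an intensity-like quantity (Moore, 2008)."""
    return 10.0 * math.log10(cu_to_microamps(cu, device) / reference_ua)

# ILD in dB between two levels 10 CU apart on the same 24RE device:
ild_db = cu_to_db(160, "24RE") - cu_to_db(150, "24RE")   # ~0.78 dB
```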

Analogous interaural conditions varying in terms of the place of stimulation were not included for the NH group, because broadband click-train stimuli were used to mimic the electrical pulses delivered to the CI listeners. For each CI participant, sensitivity thresholds and all ILDs presented are displayed in Table 2. Twelve additional randomized control trials were presented to children in the CI group, preceding the 160 trials, in order to ease children into the task and obtain more data for the control conditions.

TABLE 2. ILDs Presented to CI Users in CU and dB.

Child | Thresholds (CU) CI-1, CI-2 | {T+10,T} CU, dB | {T,T+10} CU, dB | {T+10,T+10} CU, dB | {T+20,T+10} CU, dB | {T+10,T+20} CU, dB | ITD-Varying Trials CU, dB

CI1  190 160  -20 -1.6  -40 -3.1  -30 -2.4  -20 -1.6  -40 -3.1  -28 -2.2
CI3  185 145  -30 -1.7  -50 -3.3  -40 -2.6  -30 -1.8  -50 -3.4  -20 -1.1
CI4  145 130   -5  0.7  -25 -1.0  -15 -0.2   -5  0.6  -25 -1.1  -12 -0.4
CI5  135 135   10  0.8  -10 -0.8    0  0.0   10  0.8  -10 -0.8   -5 -0.4
CI6  165 145  -10  0.1  -30 -1.6  -20 -0.8  -10  0.0  -30 -1.7    5  1.0
CI7  170 170   10  1.6  -10 -0.1    0  0.7   10  1.5  -10 -0.2   -5  0.3
CI8  140 140   10  0.8  -10 -0.8    0  0.0   10  0.8  -10 -0.8    0  0.0
CI9  120 150   40  3.1   20  1.6   30  2.4   40  3.1   20  1.6    2  0.2
CI10 150 150   10  0.8  -10 -0.8    0  0.0   10  0.8  -10 -0.8    0  0.0
CI11 165 155    0  0.9  -20 -0.8  -10  0.0    0  0.8  -20 -0.9  -40 -2.7
CI12 120 130   20  1.6    0  0.0   10  0.8   20  1.6    0  0.0    5  0.4
CI13 130 140   20  2.8    0  1.1   10  1.9   20  2.7    0  1.0  -20 -0.9
CI14 175 160   -5  0.4  -25 -1.3  -15 -0.5   -5  0.3  -25 -1.4  -15 -0.4
CI16 155 150    5  0.4  -15 -1.2   -5 -0.4    5  0.4  -15 -1.2    0  0.0
CI17 175 145  -20 -1.6  -40 -3.1  -30 -2.4  -20 -1.6  -40 -3.1  -12 -0.9
CI18 140 150   20  1.6    0  0.0   10  0.8   20  1.6    0  0.0  -10 -0.8
CI19 155 170   25  2.0    5  0.4   15  1.2   25  2.0    5  0.4  -18 -1.4
CI21 160 150    0  0.0  -20 -1.6  -10 -0.8    0  0.0  -20 -1.6  -13 -1.0
CI22 140 135    5  0.4  -15 -1.2   -5 -0.4    5  0.4  -15 -1.2    3  0.2
CI23 140 140   10  0.8  -10 -0.8    0  0.0   10  0.8  -10 -0.8   20  1.6
CI25 175 175   10  0.9  -10 -0.9    0  0.0   10  0.9  -10 -0.9    3  0.3
CI26 160 155    5  0.4  -15 -1.2   -5 -0.4    5  0.4  -15 -1.2  -10 -0.8
CI27 150 150   10  0.8  -10 -0.8    0  0.0   10  0.8  -10 -0.8    0  0.0
CI28 135 125    0  0.0  -20 -1.6  -10 -0.8    0  0.0  -20 -1.6   10  0.8
CI29 150 125  -15 -1.2  -35 -2.7  -25 -2.0  -15 -1.2  -35 -2.7   -2 -0.2

For all CI children who participated in the fusion task, thresholds are shown in CU and the ILDs presented in various conditions are shown in both CU and dB. {T+10,T}, for example, represents the condition in which levels were presented at 10 CU above CI-2 threshold and at CI-1 threshold.

In a two-alternative forced choice test, participants were instructed to click on a single circle or pair of circles on a laptop monitor (Figure 2.4), as fast as possible, to indicate whether they heard one or two sounds. RTs were recorded simultaneously from stimulus onset to response. The youngest CI child (CI23; 5.95 years of age) was excluded from RT analyses as an outlier (mean RT > 10 s). For CI users who were unable to complete the entire task due to fatigue or equipment malfunction, data were included when responses still met the ≥ 70% accuracy criterion in all 4 control conditions.

2.2.4. Pupillometry

Pupil data from each participant's better eye (i.e., the eye with the smaller number of missing data points) were measured (Papesh et al., 2012) at 105 Hz using an Interacoustics VN415/VO425 Videonystagmography (VNG) system (Figure 2.4; DK-5610, Assens, Denmark), and relative pupil size was indicated on the CCD camera chip. The percent change in pupillary diameter (PCPD) was calculated relative to baseline values, as done previously with similar measurement equipment (Yulek et al., 2008). Outliers in PCPD (greater or less than 3 SDs from the mean) or in RT from stimulus onset to response (> 3 SDs of the mean or < 250 ms; Papesh et al., 2012) were excluded from analyses.

Figure 2.4. Complete experimental set-up for the binaural fusion task, including VNG pupillometry equipment. Video goggles were used to measure centisecond changes in pupil diameter while children indicated whether they heard one or two sounds.


For each trial, the FeatureFinder Version 2.5 program was used in Matlab to determine the peak pupil diameter automatically during the first 2 seconds following stimulus onset, in order to control for large differences in RTs across participants. Baseline pupil data recorded during the 1 s preceding stimulus presentation, while participants fixated on a plain screen, were subtracted from the peak diameter for each trial to control for individual differences in pupil size and emotional arousal. Monitor brightness and room lighting were held constant throughout the task. Participants were instructed to inhibit blinks in between trials. Uninhibited blinks were linearly interpolated (when data points differed from adjacent points by > 10 units for < 200 ms), and a minority of trials (approximately 20%) were removed following trial-by-trial visual inspection of the pupil waveform for excessive blinking or error (Zekveld et al., 2010). Pupil data from 9 participants (5 NH, 4 CI) were excluded due to significant error resulting from excessive eye movement and/or poor camera focus.
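The per-trial processing described above (the thesis used FeatureFinder in Matlab) can be approximated by the following Python sketch. The blink criterion here is deliberately simplified to a one-sided jump test, and the function names, window lengths, and thresholds are our illustrative rendering of the steps in the text.

```python
import numpy as np

FS = 105  # VNG sampling rate (Hz)

def interpolate_blinks(trace, jump=10.0):
    """Linearly interpolate samples that jump more than `jump` units from the
    previous valid sample (a simplified stand-in for the blink criterion)."""
    clean = trace.astype(float).copy()
    bad = np.zeros(len(clean), dtype=bool)
    for i in range(1, len(clean)):
        ref_idx = i - 1 if not bad[i - 1] else np.flatnonzero(~bad[:i])[-1]
        bad[i] = abs(clean[i] - clean[ref_idx]) > jump
    if bad.any():
        good = np.flatnonzero(~bad)
        clean[bad] = np.interp(np.flatnonzero(bad), good, clean[good])
    return clean

def pcpd(trace, onset_idx):
    """Percent change in pupil diameter: peak in the 2 s after stimulus onset,
    relative to the mean of the 1 s baseline preceding onset."""
    clean = interpolate_blinks(trace)
    baseline = clean[onset_idx - FS:onset_idx].mean()
    peak = clean[onset_idx:onset_idx + 2 * FS].max()
    return 100.0 * (peak - baseline) / baseline
```

A trace with a 40-unit baseline that peaks at 44 units after onset, for example, yields a PCPD of 10%.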

2.2.5. Electrically Evoked Auditory Brainstem Responses

EABRs recorded previously were compared to behavioural fusion results from the present study. EABRs were previously evoked in 24 CI users by biphasic pulses delivered from electrode 20 at 11 Hz using a SPEAR processor (in collaboration with CRC-HEAR, Melbourne, Australia) and were measured at a midline cephalic location (Cz) referenced to the ipsilateral earlobe. Data were collected using a Neuroscan system (NSI, Virginia, USA, V4.3) and Synamp I (AC/DC) amplifier. At least 300 sweeps were recorded and averaged for each trial; sweeps with amplitudes exceeding ±30 μV were rejected. Recordings were filtered (10-3000 Hz), responses were averaged in a -5 to 80 ms time window, and a minimum of two visually replicable recordings were obtained at each presented intensity. This procedure has previously been described in more detail (Gordon et al., 2004). Latencies of the largest and most persistent peak (wave eV) were measured by two blinded independent markers with very good reliability (ICC = 0.86). Absolute wave eV latency differences were then calculated for inclusion in a regression analysis with binaural fusion as the outcome variable. Latency was the single metric chosen for convenience and because interaural latency differences have previously been shown to reflect asymmetries in the auditory brainstem and the precision of binaural integration (Gordon et al., 2008, 2012).

2.2.6. Data Analysis

Repeated-measures Analysis of Variance (ANOVA), Chi-Square (χ2) tests, Student's t-tests, and regression were conducted using IBM SPSS Statistics Version 22 (SPSS Inc., Chicago, Illinois, USA). Linear regression analyses were performed for all measures to determine whether demographic factors of interest (age at CI-1, interimplant delay, bilateral CI experience, time-in-sound) carried any predictive value. Binary logistic regression was also used to highlight the variability in the CI group. Pairwise post-hoc analyses were implemented for repeated contrasts, and Bonferroni adjustment of the significance level (α = 0.05) was used where necessary to correct for multiple comparisons and limit the family-wise error rate.
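The Bonferroni criterion amounts to testing each comparison against α divided by the number of comparisons in the family; a minimal illustration (the p-values below are made up):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Return a parallel list of booleans: True where a comparison survives
    Bonferroni correction (each p tested against alpha / number of tests)."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Three pairwise comparisons at a family-wise alpha of 0.05 (threshold 0.0167):
flags = bonferroni_significant([0.01, 0.02, 0.04])  # → [True, False, False]
```

Note that a comparison significant at the uncorrected α = 0.05 (e.g., p = 0.04) can fail the corrected criterion, which is exactly the pattern reported for the RT contrasts below.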


2.3. Results

Overall mean proportions of fusion responses (perception of 1 sound) from all children for the binaural fusion task are shown below (Figure 2.5). Normal listeners perceived one fused auditory image more frequently than their peers with bilateral CIs (χ2(1) = 606.20, p < 0.0001).

In both groups, ITDs interfered with fusion more than ILDs (NH group: χ2(1) = 282.42, p < 0.0001; CI group: χ2(2) = 192.31, p < 0.0001) and were associated with the most variability in responses (NH SD = 0.18; CI SD = 0.29). Mean performance for trials varying in terms of IPDs fell in between mean performance for trials containing an ITD or ILD.


Figure 2.5. Mean overall proportions of 1 response are shown, as well as proportions for each subset of conditions tested in each group. Standard error was used for error bars. CI listeners (n=25) perceived 1 sound less frequently than NH peers (n=24; p < 0.0001).


2.3.1. Fusion with Level Differences

Figures 2.6a and 2.6b show mean responses in both groups to ILD-varying conditions (ITD = 0 ms). Responses to unilateral control conditions {T+10,0},{0,T+10} are displayed on the y-axis (NH right (R) level changing = 0.98 ± 0.05; NH left (L) level changing = 0.99 ± 0.04; CI-1 level changing = 0.89 ± 0.12; CI-2 level changing = 0.91 ± 0.10). Perception of 1 sound on at least 70% of unilateral trials (Salloum et al., 2010) was taken as an indication of task comprehension, and all children included in data analyses met this criterion. In Figure 2.6a, fusion of binaural input with level differences is represented as a function of changes in the level provided from CI-1 (or the right side), relative to CI-1 threshold (T), with the level provided from CI-2 (or the left side) held constant at 10 CU or dB above CI-2 or L threshold. Similarly, in Figure 2.6b, fusion is shown as a function of changes in level provided from CI-2, with the level delivered from CI-1 held constant at 10 CU above the CI-1 threshold.

Normal listeners consistently perceived one fused auditory image (98 ± 4% of trials) when no ITD was introduced. While CI users perceived one image less frequently when ILDs were present (74 ± 19% of trials; χ2(1) = 869.81, p < 0.0001), they still met control criteria (≥ 70%) on each condition and therefore appeared to be perceiving fused auditory images, at least overall, as a group.



Figure 2.6. Group performance for conditions containing ILDs (ITD = 0 ms). The NH group (n=24) is indicated in purple (Figure 2.6a), while the CI group (n=25) is highlighted in green. Biphasic pulses were delivered from electrode 20 in the CI group. Mean ILDs (CI-2 – CI-1) presented are shown in Figure 2.6b in dB along with corresponding schematic illustrations. Mean ILDs (dB) as level was increased on the right side or from CI-1 were: 13.1±1.7, 0.6±0.2, -0.2±1.2, and -1.1±1.2. As level presented to the left ear or from CI-2 was increased, ILDs (dB) were: -12.6±1.6, -1.0±1.2, -0.2±1.2, and 0.6±1.2. CI listeners consistently perceived one image when there were level differences, albeit less frequently than NH peers (p < 0.0001). Negative ILDs indicate levels leading to the right side or CI-1, while positive ILDs indicate levels leading to the left side or CI-2.

Binary logistic regression was used to analyze changes across conditions for each participant in the NH and CI groups for the dichotomous outcome variable (Figures 2.7 and 2.8).

Responses are shown for both sides separately as a function of ILD. All regression functions had relatively horizontal slopes for the NH group (Figure 2.7; p > 0.05), with values well above 0.7 (the control criterion) when level was increased on either side; therefore, near-normal functions for CI listeners were defined as those with values equal to or exceeding 0.7 for each condition (Figure 2.8).
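In outline, each participant's fit resembles the following minimal sketch: a hand-rolled gradient-descent logistic regression in Python standing in for the SPSS binary logistic procedure. The responses below are synthetic and purely illustrative, not participant data.

```python
import numpy as np

def fit_logistic(x, y, lr=0.5, steps=20000):
    """Fit P(response = 1) = 1 / (1 + exp(-(b0 + b1 * x))) by gradient
    descent on the negative log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        err = p - y                       # gradient terms of the loss
        b0 -= lr * err.mean()
        b1 -= lr * (err * x).mean()
    return b0, b1

# Illustrative (synthetic) responses: fusion becomes less likely as the
# ILD grows, so the fitted slope should come out negative.
ild = np.repeat([0.0, 0.5, 1.0, 1.5, 2.0], 8)   # 8 trials per ILD, in dB
fused = (ild < 1.0).astype(float)               # 1 = heard one fused sound
b0, b1 = fit_logistic(ild, fused)
```

A participant whose fitted slope differs significantly from zero would count as showing a change in fusion with ILD; flat (non-significant) slopes correspond to the broken grey curves in Figures 2.7 and 2.8.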


Figure 2.7. Binaural fusion as a function of ILD for individual NH children. Negative values in Figure 2.7a denote ILDs that became increasingly weighted to the right ear (n=6), whereas positive ILDs (Figure 2.7b) signify increasing levels presented to the left ear (n=7). All individual regression functions are shown as broken grey lines, as none of the slopes were significant (p > 0.05). Mean responses are shown in thicker black lines for each side.

To allow for between-group comparisons, the independent variable (ILD) was transformed to dB for the CI group (dB = 10 log (current in µA / 100 µA); Figure 2.8).

Functions with significant slopes are shown in dark grey and were found for only three children (CI11, CI16, and CI19) when CI-2 level was changed. A subset of CI users seemed to fuse binaural input with level differences in a near-normal fashion: 5/25 children (CI3, 5, 12, 22, and 25) perceived 1 fused image consistently (≥ 70% of trials) across all conditions.


Figure 2.8. Individual regression functions for CI users for ILD-varying conditions. Curves in the left panel (Figure 2.8a) were derived for conditions in which CI-1 level was increasing (n=22), whereas curves in the right panel (Figure 2.8b) are shown for increasing levels from CI-2 (n=21). Negative ILDs denote CI-1 leading stimuli. The majority of curves can be seen to be decreasing as a function of increasing ILD. Significant slopes (n=3) are represented by dark grey solid lines and non-significant slopes (p > 0.05) are shown as light grey broken lines.

When there were no level or timing differences, children who exhibited near-normal perception of bilateral input containing level differences (n=5) reported hearing one fused image more frequently than their CI peers with abnormal responses (0.59 ± 0.46 vs. 0.42 ± 0.32).

2.3.2. Fusion with Timing Differences

Figure 2.9 shows mean responses in both groups for ITD-varying conditions. CI users were stimulated by electrode 20. Responses to the ±24 ms control conditions are indicated on the extreme ends of the x-axis (NH R-leading = 0.04 ± 0.08; NH L-leading = 0.04 ± 0.07; CI-1 leading = 0.07 ± 0.09; CI-2 leading = 0.10 ± 0.09). When presented with balanced levels, NH listeners perceived one image on 78 ± 18% of trials. Conversely, CI users reported hearing one sound almost half as frequently as their NH peers (42 ± 29% of trials) when level was balanced (χ2(1) = 308.03, p < 0.0001). Furthermore, NH children were more likely to hear two separate sounds as the ITD was increased beyond ±1 ms (L-lead: χ2(1) = 20.75, p < 0.0001; R-lead: χ2(1) = 21.01, p < 0.0001), in contrast to CI children, who did not demonstrate any significant changes in response to increases in ITD beyond the physiological range (CI-2 leading: χ2(1) = 1.94, p = 0.16; CI-1 leading: χ2(1) = 0.73, p = 0.39).


Figure 2.9. Mean responses in NH (n=24) and CI groups (n=25) for conditions with varying ITDs. Balanced stimuli presented for ITD-varying trials in the CI group contained a small mean ILD of -6.48 ± 12.86 CU = -0.34 ± 0.90 dB. NH children were more likely to hear two separate sounds as the ITD increased to ±2 ms (p < 0.0001), while CI users were not (p > 0.05). As shown in the figure, there was a larger difference between groups at 0 ITD than when level was varied between sides (Figure 2.6), because in the former case, level was balanced perceptually and thus CI users did not have access to the information which they rely on most heavily. In both groups, ITDs interfered more with fusion than ILDs (p < 0.0001). Negative values denote R/CI-1 leading ITDs.


Logistic regression functions are shown in Figures 2.10 and 2.11 for individual participants. 10/24 normal listeners exhibited significant changes in perception with increasing ITDs on at least one side. Both mean response curves (black) and significant curves (dark grey) extended from above the 70% control criterion to below 70%; therefore, regression functions that decreased from values greater than 0.7 at ITD = 0 ms to below 0.7 at ITD = ±2 ms were defined as near-normal for the CI group.


Figure 2.10. Individual regression functions are plotted for the NH group across ITDs ranging from -2 to 2 ms. 10/24 NH listeners were more likely to hear 2 sounds as the ITD increased (p < 0.05). Curves in the left portion of the graph correspond to conditions in which ITDs were negative, i.e., leading to the right (n=20). Positive ITDs were left-leading (n=19).

For the CI group, significant functions are shown in dark grey in Figure 2.11, and non-significant functions are in light grey with broken lines. In the absence of level or place differences, ITDs did not seem to systematically affect binaural fusion, as only one child using CIs exhibited significant changes on either side and 1 child with CIs had near-normal responses on both sides.


Figure 2.11. Individual regression functions for the CI group for ITDs varying from -2 to 2 ms. ITDs did not affect fusion in 24/25 CI users (p > 0.05). Curves in the left portion of the graph correspond to conditions containing negative ITDs, i.e., leading to CI-1 (n=23). Positive ITDs were CI-2 leading (n=22).

2.3.3. Fusion with Place of Stimulation Differences

As a group, CI listeners perceived one image 54 ± 19% of the time when no level or timing differences were present and only place of stimulation was changed (Figure 2.12).



Figure 2.12. Binaural fusion in the CI group (n=25) is displayed as a function of increasing difference in the place of stimulation between sides. Mean responses for conditions in which CI-1 place of stimulation was held constant at electrode 20 while CI-2 delivered pulses from more basal electrodes (electrode 16 for IPD = 4 and electrode 9 for IPD = 11) are indicated in blue. Mean performance is highlighted in red for conditions in which CI-2 place of stimulation was held constant at electrode 20 while CI-1 delivered pulses from increasingly basal electrodes (Figure 2.3d). Changes in IPD did not consistently affect binaural fusion (p > 0.05).

As shown in Figure 2.13, 9/25 CI users showed significant changes in perception when the interaural electrode position was varied on at least one side; however, as in the case of ITDs, these changes were not consistent or systematic, because only 2 children had significant changes in response to variations in both CI-1 and CI-2 place of stimulation.


Figure 2.13. Logistic regression curves for bilateral CI users for IPD-varying conditions. IPDs did not affect fusion in 16/25 children with CIs (p > 0.05). Figure 2.13a shows responses from CI users when the CI-1 electrode was moved more basally (n=24). For conditions shown in Figure 2.13b, the CI-2 electrode was moved to more basal positions (n=23). Negative values denote place of stimulation differences leading to CI-1.

2.3.4. Predicting Binaural Fusion

Multiple linear regression analysis showed that age at CI-1 and absolute EABR wave eV latency mismatch best predicted binaural fusion in the absence of level or place of stimulation differences (Figure 2.14; R = 0.60, p = 0.01; β for age at CI-1 = 0.41, p = 0.03; β for wave eV mismatch = -0.42, p = 0.03; interimplant delay: R = 0.08, p = 0.72; bilateral CI experience: R = 0.29, p = 0.18; time-in-sound: R = 0.27, p = 0.19). The associations were stronger when outlier CI22 was removed (R = 0.62, p = 0.008). Age at CI-1 was also positively correlated with fusion for place-mismatched conditions (R = 0.40, p = 0.049; interimplant delay: R = 0.13, p = 0.53; bilateral CI experience: R = 0.25, p = 0.24; time-in-sound: R = 0.15, p = 0.47), but not when stimuli contained an ILD (R = 0.10, p = 0.63; interimplant delay: R = 0.14, p = 0.51; bilateral CI experience: R = 0.14, p = 0.50; time-in-sound: R = 0.04, p = 0.89).

EABR wave eV latency mismatches were not analyzed across level and place of stimulation conditions, because electrophysiological data for these conditions were not available. ILD conditions tested in this study used levels that were presented relative to behavioural thresholds and were therefore considerably lower than the maximally comfortable levels used to evoke EABRs in these children previously. Similarly, place of stimulation conditions contained electrode combinations {e20,e16}, {e16,e20}, {e9,e20}, {e20,e9} that were not previously used to evoke EABRs from the CI children tested.


Figure 2.14. Multiple regression analysis revealed that mean proportion of 1 response for individual CI users (n=24) when level and place differences were absent can be predicted (p < 0.05) by absolute wave eV latency difference (Figure 2.14a; β = -0.42) and the age at CI-1 (Figure 2.14b; β = 0.41).


Children with exceptionally large EABR mismatches (> 0.5 ms) had longer interimplant delays and/or less or inconsistent bilateral CI use. Of the 3 CI users with significant slopes when ILDs were changed, two (CI11 and CI19) had shorter durations of bilateral CI experience (0.48 and 3.85 years, respectively), while the third child (CI16) had inconsistent bilateral CI use over the first few years of activation. Children who exhibited near-normal fusion of binaural input containing level differences were older when they received CI-1 (4.50 ± 4.85 vs. 3.33 ± 2.16 years), and had shorter interimplant delays (1.56 ± 1.78 vs. 3.13 ± 4.35 years), greater bilateral CI experience (5.55 ± 3.14 vs. 4.58 ± 2.45 years), more time-in-sound (9.62 ± 1.65 vs. 8.81 ± 3.22 years), and less EABR asymmetry (0.22 ± 0.08 vs. 0.28 ± 0.16 ms). There was no age difference between NH listeners with significant and non-significant responses (L-leading ITDs: t(22) = -0.53, p = 0.60; R-leading ITDs: t(22) = -0.43, p = 0.67). In this small cohort of CI users, lateralization of ITDs (in the absence of ILDs or place mismatches) did not seem to relate directly to fusion, because only 1/7 of the CI children who could lateralize sounds on the basis of ITDs appeared to fuse binaural input similarly to NH peers.

2.3.5. Reaction Time

CI users had longer overall RTs than their NH peers (Figure 2.15; t(38.80) = -6.45, p < 0.0001). Factorial repeated-measures ANOVA revealed main effects of group on RTs for conditions with R/CI-1 level changing (F(1,46) = 40.51, p < 0.0001), L/CI-2 level changing (F(1,46) = 45.19, p < 0.0001), and ITD changing (F(1,46) = 35.47, p < 0.0001). In the NH group, RTs were longer for some conditions with larger ITDs (F(6,138) = 2.66, p = 0.02; 0.4 ms ITD vs. -2 ms ITD: p = 0.02) and smaller ILDs (R changing: F(2,46) = 1.92, p = 0.16; L changing: F(2,46) = 3.55, p = 0.04; {T,T+10} vs. {T+20,T+10}: p = 0.04), but these differences were not statistically significant after correcting for multiple comparisons. Similarly, for the CI group, there was no effect of any subset of conditions on RT (ITD: F(6,138) = 0.72, p = 0.63; CI-1 level changing: F(2,46) = 0.51, p = 0.61; CI-2 level changing: F(2,46) = 1.11, p = 0.34; CI-1 place changing: F(2,46) = 0.44, p = 0.65; CI-2 place changing: F(2,46) = 0.90, p = 0.41). In contrast to CI users, normal listeners had significantly faster responses when level differences were present (NH: t(23) = 4.82, p < 0.0001; CI: F(2,46) = 0.61, p = 0.55).


Figure 2.15. Mean overall RTs and RTs for each subset of conditions are displayed for both groups. CI users (n=24) had longer RTs than their peers with NH (n=24; p < 0.0001).

Mean RTs were longer when each group reported hearing two distinct sounds (NH: t(20) = -2.95, p < 0.01; CI: t(23) = -2.56, p = 0.02). On an individual level, longer mean RTs were associated with poorer fusion in all children (Figure 2.16a - level differences: R = -0.52, p < 0.001; Figure 2.16b - timing differences: R = -0.32, p = 0.03) and with younger ages in the NH group (Figure 2.17a: R = -0.77, p < 0.0001), while no demographic factor predicted RTs of CI children (Figure 2.17b; age at CI-1: R = 0.18, p = 0.40; interimplant delay: R = 0.08, p = 0.73; bilateral CI experience: R = 0.01, p = 0.97; time-in-sound: R = 0.18, p = 0.40; chronological age: R = 0.19, p = 0.34).

[Figure 2.16: scatter plots of mean reaction time (s) against proportion of 1 response for level cues (panel a) and timing cues (panel b), with NH and CI participants plotted together.]

Figure 2.16. Mean RTs are shown for individual participants (nNH = 24; nCI = 25) for conditions in which level differences were present (Figure 2.16a) and absent (Figure 2.16b). Poorer fusion predicts longer RTs (R = -0.52, p < 0.001; R = -0.32, p < 0.05). Within-group regressions were not computed due to limited spread of data points in either group alone.

[Figure 2.17: scatter plots of mean RT (s) against chronological age in the NH group (panel a) and against chronological age and time-in-sound in the CI group (panel b).]

Figure 2.17. In the NH group (n=24), children achieved faster RTs at older ages (Figure 2.17a; R = -0.77, p < 0.0001), while there was no relationship between RT and chronological age or duration of time-in-sound in the CI group (Figure 2.17b; n=24; p > 0.05).


2.3.6. Pupillary Responses

While latencies were similar between groups (t(38) = -0.98, p = 0.34; NH mean = 0.73 ± 0.17 s; CI mean = 0.78 ± 0.15 s) and consistent with pupil physiology (Andreassi, 2007, Chapter 12; Zekveld et al., 2010), CI users had greater overall changes (i.e., differences between baseline and peak dilation) than their NH peers (Figure 2.18; t(38) = -4.84, p < 0.0001). Changes in pupil diameter were also greater in the CI group when each interaural difference was varied independently (R/CI-1 level changing: F(1,38) = 17.54, p < 0.0005; L/CI-2 level changing: F(1,38) = 29.94, p < 0.0001; timing: F(1,38) = 20.60, p < 0.0001). In both groups, PCPD did not differ across ITDs (NH: F(6,108) = 1.62, p = 0.15; CI: F(6,120) = 0.62, p = 0.72) or ILDs (R changing: F(2,36) = 0.01, p = 0.99; CI-1 changing: F(2,40) = 3.32, p = 0.05 (ns); L changing: F(2,36) = 0.32, p = 0.73; CI-2 changing: F(2,40) = 2.79, p = 0.07). Additionally, pupillary responses did not change significantly across IPDs (CI-1 changing: F(2,40) = 2.00, p = 0.15; CI-2 changing: F(2,40) = 0.45, p = 0.64) or subsets of interaural difference conditions (NH: t(18) = -0.16, p = 0.87; CI: F(2,40) = 0.12, p = 0.89).
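The per-trial pupillary measure described above (the difference between baseline and peak dilation, expressed as a percent change) can be sketched as follows. This is a minimal illustration only: the function name, the sample trial data, and the choice of baseline window are our assumptions, not the analysis code used in the thesis.

```python
import numpy as np

def percent_change_pupil_diameter(trace, baseline_samples):
    """Percent change in pupil diameter (PCPD) for one trial.

    Baseline is taken as the mean diameter over the pre-stimulus
    samples; PCPD is the peak post-onset dilation expressed as a
    percentage of that baseline (windowing is an assumption here).
    """
    baseline = np.mean(trace[:baseline_samples])
    peak = np.max(trace[baseline_samples:])
    return 100.0 * (peak - baseline) / baseline

# Hypothetical trial: 4.0 mm baseline rising to a 4.2 mm peak -> 5% PCPD
trace = np.array([4.0, 4.0, 4.0, 4.05, 4.1, 4.2, 4.15])
print(round(percent_change_pupil_diameter(trace, 3), 2))  # 5.0
```

Averaging this quantity across trials of a condition would yield the mean PCPD values compared between groups above.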

[Figure 2.18: bar graph of mean PCPD (%) for the NH and CI groups, shown overall and for the ILD, ITD, and IPD condition subsets.]

Figure 2.18. Mean PCPD is shown for each group. Children with bilateral CIs (n=21) had greater changes in pupillary diameter than their NH peers (n=19; p < 0.0001).


In contrast to RT analyses, there were no differences between mean pupillary responses when children reported hearing 1 versus 2 sounds (NH: t(15) = -0.56, p = 0.59; CI: t(20) = -0.26, p = 0.80). However, for all children across all conditions, mean PCPD showed a similar trend to RTs and increased with poorer fusion (Figure 2.19a, level differences: R = -0.59, p < 0.0001; Figure 2.19b, timing differences: R = -0.38, p = 0.01).

[Figure 2.19: scatter plots of mean PCPD (%) against proportion of 1 response for level cues (panel a) and timing cues (panel b), with NH and CI participants plotted together.]

Figure 2.19. Greater changes in pupil diameter were associated with poorer binaural fusion (nNH = 19; nCI = 21). The association is shown for both level (Figure 2.19a; R = -0.59, p < 0.0001) and timing differences (Figure 2.19b; R = -0.38, p < 0.05). Within-group regressions were not computed due to limited spread of data points in either group alone.

Older NH children displayed less effort as reflected by smaller PCPDs (Figure 2.20a; R = -0.66, p < 0.005), whereas no demographic factor was related to pupillary responses in the CI group (Figure 2.20b; age at CI-1: R = 0.13, p = 0.59; interimplant delay: R = 0.35, p = 0.12; bilateral CI experience: R = 0.18, p = 0.44; time-in-sound: R = 0.18, p = 0.44; chronological age: R = 0.27, p = 0.24). Not surprisingly, as illustrated in Figure 2.21, greater PCPDs predicted longer RTs (R = 0.69, p < 0.0001).


[Figure 2.20: scatter plots of mean PCPD (%) against chronological age in the NH group (panel a) and against chronological age and time-in-sound in the CI group (panel b).]

Figure 2.20. Older NH children had smaller changes in pupil diameter, reflecting less listening effort (Figure 2.20a; R = -0.66, p < 0.005; n=19), while there was no relationship between pupil diameter and time-in-sound or chronological age in the CI group (Figure 2.20b; p > 0.05).

[Figure 2.21: scatter plot of mean reaction time (s) against mean percent change in pupil diameter (%) for the NH and CI groups.]

Figure 2.21. Mean overall pupillary responses were positively correlated with RTs (R = 0.69, p < 0.0001; n=40). There was also a strong relationship between RT and PCPD within each group (NH: R = 0.78, n=19; CI: R = 0.34, n=21).


2.4. Discussion

The main objective of this study was to determine whether children who are deaf are able to perceptually integrate or fuse sounds from two CIs. We were also interested in whether there is a relationship between binaural fusion and listening effort in children who use bilateral CIs.

We hypothesized that:

1) Bilateral CIs would most effectively promote binaural fusion with level differences.

2) Better fusion would be associated with less listening effort.

Across all conditions, CI users perceived one auditory image less frequently than their NH peers; however, all children could more effectively fuse binaural input when level differences were present. The absence of interaural level information reduced fusion, in particular for children using bilateral CIs. Larger asymmetries at the level of the auditory brainstem (i.e., differences between wave eV latencies) translated into even poorer binaural fusion in the CI group. Children who received their first devices at older ages could achieve fusion to a greater degree than their younger bilaterally implanted peers, likely due to a longer duration of residual hearing prior to implantation, which primed the system for processing bilateral CI stimulation. A reduced ability to fuse binaural input resulted in increased listening effort, which decreased at older ages in NH children.

Compared to NH peers, binaural fusion is abnormal in children using bilateral CIs for many different reasons (Figure 2.5). When both ears are stimulated with acoustic input with an ITD smaller than ±1 ms, normal listeners are able to perceive one fused auditory image (Figure 2.9). The use of different devices placed in different cochlear locations (Figure 1.10) and stimulating different complements of surviving auditory neurons with pulsatile electrical stimulation reduces the likelihood that children are able to use bilateral CIs to achieve near-normal fusion (Poon et al., 2009; Shepherd & Hardie, 2001; Rubinstein, 2004). Furthermore, a period of unilateral stimulation causes abnormal reorganization throughout the auditory pathway (Gordon et al., 2008, 2011a, 2013b), which may limit the ability of the central auditory system to fuse binaural input.
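For context on the ±1 ms bound, the largest physiologically possible ITD for a human head can be estimated with Woodworth's spherical-head approximation. The sketch below is our own illustration (the model, head radius, and speed-of-sound values are assumptions, not material from the thesis); it shows why naturally occurring ITDs fall well inside that range.

```python
import math

def woodworth_itd(azimuth_rad, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate ITD (in seconds) for a source at a given azimuth,
    using Woodworth's spherical-head model: ITD = (a/c) * (theta + sin(theta))."""
    return (head_radius_m / speed_of_sound) * (azimuth_rad + math.sin(azimuth_rad))

# The largest ITD occurs for a source at 90 degrees (directly to one side):
max_itd_ms = 1000 * woodworth_itd(math.pi / 2)
print(f"{max_itd_ms:.2f} ms")  # ~0.66 ms, well within the +/-1 ms range
```

ITDs much larger than this value, as used in some experimental conditions, therefore lie outside what a listener would ever encounter from a single real sound source.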

2.4.1. Children with Bilateral CIs Perceive Distinct Auditory Images

Children with bilateral CIs do not appear to use true binaural (ITD) processing. While a small number of CI children (n=5) in the present study perceived a single image more frequently (59% vs. 42% of trials) in the absence of any interaural differences, bilateral CI users perceived two distinct auditory images about 50% of the time (Figure 2.5). This is consistent with previous research suggesting that bilateral CI users do not hear a single auditory percept as coming from the center of the head (midline). Salloum and colleagues (2010) found that sequentially implanted CI users who received a second implant after a long delay from the first (i.e., > 1.5 years) rarely heard a single sound in the middle of their heads (Figure 1.11), in contrast to their NH peers, who reported hearing midline auditory percepts in the absence of ITDs or ILDs. Thus, the current consensus is that the majority of children with bilateral CIs do not localize unitary auditory images, but rather have learned to use interaural differences to attend to the more salient of two monaural images and associate location with salience. Similarly, post-lingually deafened adult CI users could still lateralize binaural stimuli in the absence of fusion (Kan et al., 2013).

However, some results obtained by Kan and colleagues (2013) with adult CI users contradict findings from the present study and from Salloum and colleagues (2010). Adults with bilateral CIs heard a single fused auditory image on a relatively larger proportion of trials (Kan et al., 2013). This discrepancy could be attributed to several differences between these studies: Kan and colleagues (2013) tested post-lingually deafened adults with considerably greater pre-implant acoustic experience, levels were not balanced relative to those used to evoke EABR wave eVs, and no constraint was set on response time. Post-lingually deafened CI users also retain greater ITD sensitivity than pre-lingually deafened peers (Litovsky et al., 2010).

2.4.2. The Auditory System Most Effectively Fuses Input with Level Differences

In the absence of ITDs, both normal and CI listeners perceived one fused image more frequently, as expected. In general, NH individuals without sensorineural hearing loss can make greater use of ITDs than their deaf peers with CIs. Depending on the frequency of the acoustic input, mammals use the intensity and time of arrival of sounds to different extents when determining the location of sound sources. Coding of binaural cues (ILDs and ITDs) is facilitated by complex circuits in the auditory brainstem (Figure 1.4; Grothe et al., 2010). It is well-established that ITDs are the dominant binaural cue for NH listeners (Wightman & Kistler, 1992; Macpherson & Middlebrooks, 2002). In order for humans to achieve such remarkable ITD sensitivity (~10 µs), more temporally precise inhibition is required than in the case of subtractive ILD processing (Brand et al., 2002; Moore & Caspary, 1983; Grothe et al., 2010). Sensorineural hearing loss reduces the number of ITD-sensitive neurons present throughout the auditory system and degrades their fine tuning (Hancock et al., 2010; Tillein et al., 2010). Furthermore, envelope extraction schemes implemented by current CI devices (Loizou, 1998) discard much of the fine temporal information needed for ITD processing (Figure 1.2; Smith et al., 2002). ITD sensitivity is thus considerably diminished in CI users, thereby increasing their reliance on ILDs to achieve sound localization (Lawson et al., 1998; van Hoesel, 2004; Laback et al., 2004; Salloum et al., 2010; Litovsky et al., 2012).


As confirmed by the present study with NH listeners, large ILDs do not interfere with fusion (Furst et al., 1985), but rather shift perception of a fused image to the contralateral side (Figure 2.7), presumably with increasing contralateral inhibition in the LSO (Moore & Caspary, 1983). Conversely, in the absence of ILDs, ITDs extended beyond the physiological range (Furst et al., 1985) increase the likelihood of perceiving two separate auditory images (Figure 2.9; L-leading: χ2(1) = 20.75, p < 0.0001; R-leading: χ2(1) = 21.01, p < 0.0001). Though CI users were generally less likely to perceive a fused image than NH peers, they were also more successful at this task when the stimuli contained a level difference than when the stimuli had no level difference (Figure 2.6; χ2(2) = 192.31, p < 0.0001). Recent observations that ITD lateralization/sensitivity is more easily disrupted by interaural frequency or place mismatches than ILD lateralization/sensitivity in both NH individuals and those with bilateral CIs support the notion that binaural input with level differences is more effectively fused by the auditory system (Goupell et al., 2013; Kan et al., 2013). Thus, ILD processing may not require similar regions to be excited within each ear and instead occurs via a different neural mechanism than ITD processing (Hartmann & Constan, 2002).

Our finding that fusion was less affected by binaural input containing small IPDs (< 6 electrodes) was also in line with other studies (Figure 2.12; van Hoesel, 2004; Hughes & Abbas, 2006; Gordon et al., 2012). In fact, Kan and colleagues (2013) also reported a bias towards the side with more basal stimulation (Figure 2.13). The reason for this is unknown, but a similar result has been observed in NH listeners and may thus reflect individual preferences for higher frequencies (Goupell et al., 2013).


2.4.3. Multiple Psychophysical Ranges May Exist

Perception as a function of ITD differs when children are asked to fuse rather than lateralize binaural input. Relative insensitivity to ITDs is well-documented in CI users (Grantham et al., 2007; Aronoff et al., 2010), but fusion of binaural input containing ITDs has not previously been studied in children using bilateral CIs. As shown in Figure 2.9, CI users were least likely to perceive a fused image (< 50% of the time) when both ILDs and IPDs were removed and only differences in timing were present. Surprisingly, of all the CI children who could lateralize sounds on the basis of ITDs, only one achieved near-normal fusion in the present study. This suggests that CI users respond differently to changes in binaural stimuli with different ranges of ITDs, depending on the psychophysical task. Similarly, normal listeners can detect ITDs as small as 10 µs, but did not all have significant regression functions when fusing binaural input with ITDs as large as ±2 ms (Figure 2.10). Binaural fusion thresholds have previously been measured to be 1 ms in NH children (Chermak & Lee, 2005). The data thus indicate that multiple psychophysical ranges may exist: one for lateralization and a different one for fusion. Kan and colleagues (2013) also found that fusion and lateralization were not directly related, as adult CI users could still lateralize binaural input even when they did not perceive fused images.

Some physiological studies and psychoacoustic models use population-level contralateral inhibition, rather than delay line coincidence detection, to explain coding of ITDs larger than physiologically relevant values (McAlpine et al., 2001; Riedel & Kollmeier, 2006; Breebaart et al., 2001). However, it is also possible that electrophysiological and psychoacoustic responses to large non-physiological ITDs merely reflect redundant activity in the auditory system or “ringing of the cochlear filters” (Riedel & Kollmeier, 2006). Alternatively, a separate binaural processor located at levels higher than the SOC may be involved in fusion (Aharonson & Furst, 2001). This processor may act as a backup system for lateralization, with the main function of determining whether monaural or binaural stimuli are presented rather than where sound is coming from.

While an intact backup system (a “what” network) might help listeners determine how many sounds they are hearing and enhance ITD sensitivity, especially in noisy environments, it would not be necessary for lateralization (a “where” network). Data from the present study suggest that determining whether one hears one or two sounds (binaural fusion) may in fact be a more difficult and higher-level decision (Figures 2.16 and 2.19) than determining whether one hears sound coming from the left or right (lateralization). Like adults, children with bilateral CIs may thus lateralize binaural input without necessarily perceiving one auditory image (Kan et al., 2013).

2.4.4. Bilateral CIs Promote Integration in the Auditory Brainstem

Physiological integration in the auditory brainstem is necessary but not sufficient for perceptual integration or fusion. As shown in Figure 2.14a, EABR asymmetries (i.e., delayed physiological integration) were associated with poorer fusion. The deficiencies in myelination, neural conduction, synaptic function, and synchronous activity associated with mismatched EABR latencies (Gordon et al., 2008, 2013a) may limit the auditory brainstem’s ability to code and transmit a fused image to higher levels. While CI users do not appear to integrate binaural information perceptually like their NH peers (Figure 2.5), data from Dr. Gordon’s laboratory provide evidence that matched bilateral stimulation promotes integration at the level of the auditory brainstem (Gordon et al., 2012). The binaural difference response was visible in all children with bilateral CIs who were tested when stimulation was delivered to the same apical electrodes. As shown in Figure 2.22, clear peaks in the binaural difference response were present due to reductions in amplitude evoked by bilateral stimulation relative to the sum of unilaterally evoked responses. This response may be indicative of inhibitory activity associated with binaural processing (Dobie & Norton, 1980; Wada & Starr, 1989; Krumbholz et al., 2005).

The finding that children with bilateral CIs integrate binaural input at the level of the brainstem, but are unable to achieve binaural fusion, is consistent with Aharonson and Furst’s model (2001) and suggests that processing in the brainstem is insufficient for the perception of a single auditory image. However, in light of the association between EABR wave eV latency differences and binaural fusion, EABRs may still be used to gain some insight into the perceptual experience of younger CI children who are unable to provide reliable behavioural responses.

Figure 2.22. The binaural difference response (at ~4 ms latency) was present in all bilaterally implanted children tested (12/12) with matched apical stimulation and was eliminated by large mismatches in the place of stimulation (Gordon et al., 2012). This constitutes evidence that children with bilateral CIs are able to integrate binaural input in the auditory brainstem. The binaural difference response is obtained by computing the difference between bilaterally evoked EABRs and the sum of unilaterally evoked potentials and is thought to reflect inhibitory processing in the SOC.
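The subtraction described in the caption can be written out explicitly; the notation below is our own sketch of the standard formulation, not the laboratory's exact computation:

```latex
\mathrm{BD}(t) \;=\; \mathrm{EABR}_{\text{bilateral}}(t)
\;-\; \bigl[\,\mathrm{EABR}_{\text{left}}(t) + \mathrm{EABR}_{\text{right}}(t)\,\bigr]
```

A response that deviates from zero (here, a clear peak near 4 ms) indicates that bilateral stimulation evokes less activity than the two unilateral responses summed, consistent with the inhibitory binaural interaction described in the caption.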


2.4.5. Binaural Fusion May Depend on Higher-Level Processing

Integration at higher centers in the auditory system may be necessary for the perception of one fused auditory image. Fiedler and colleagues (2011) found that dichotically presented pure tones were both perceived as fused and integrated cortically, as indicated by the absence of a mismatch negativity waveform (Figure 2.23). Despite normal cortical development promoted by CI use (Eggermont & Ponton, 2003; Gordon et al., 2010b, 2013b), deviations from normal remain (Gordon et al., 2005b; Jiwani et al., 2013). Cortical responses would need to be evoked in CI users during participation in a binaural fusion task in order to determine whether abnormal cortical integration is in some way underlying observed shortcomings in perceptual integration. Alternatively, near-normal integration at the level of the cortex would suggest that, while CI stimulation promotes physiological integration, fusion is limited more by current processing schemes. CI devices may provide insufficient information for binaural fusion.

Figure 2.23. Grand-average mismatch negativity waveforms were obtained from NH listeners by using electroencephalography to calculate the difference between event-related cortical responses evoked by deviant and standard stimuli. 300 Hz diotic stimuli were used for Frequency-Oddball conditions, while dichotic stimuli (300 Hz and 321/831 Hz) were used for Location-Oddball conditions. The non-significant mismatch negativity is highlighted in red and suggests that binaural input was integrated on a cortical level (Fiedler et al., 2011).


2.4.6. Binaural Fusion May Improve with Later Ages at Implantation

Longer periods of acoustic hearing prior to implantation may promote fusion with bilateral CIs by developing the neural pathways in the auditory system that mediate binaural fusion. Earlier ages at implantation are recommended to promote normal development of speech perception skills (Kirk et al., 2002; Manrique et al., 2004; Harrison et al., 2005) and maintain the integrity of the auditory system (Ponton et al., 1996) in children with no residual pre-implant hearing. However, children implanted at older ages have better outcomes when they have access to acoustic input with their hearing aids prior to implantation (Hopyan et al., 2012). Evidence from our cohort of CI users in the present study suggests that later ages at implantation may promote the fusion of binaural input containing either interaural timing or place of stimulation differences (Figure 2.14b). Later ages at implantation were associated with lower pre-implant pure tone audiometric thresholds with hearing aids, indicating better access to acoustic sounds for longer periods prior to implantation (R = -0.88; Figure 2.1). Thus, the relationship between age at CI-1 and fusion may signify that greater experience integrating timing and pitch information prior to implantation translates into an enhanced ability to process these interaural differences using bilateral CIs. By contrast, better access to acoustic sounds prior to implantation did not seem to confer any benefits in terms of post-implant ILD perception. This result was confirmed by Litovsky and colleagues (2010) in adult CI users. Interimplant delay and time-in-sound, on the other hand, were not predictive of fusion, because more symmetric CI-driven development and increased CI experience may not compensate for insufficient acoustic hearing and pre-implant auditory development required for fusion.

The finding that age at CI-1 was related to fusion of input with ITDs, but not ILDs, can be explained in two different ways: 1) less variable responses to input with ILDs may have limited the strength of the correlation, or 2) post-implant processing of fine structure information may be dependent on early acoustic experience. It seems less plausible that lack of variability limited the correlation with age at CI-1, because there was a stronger relationship between listening effort and binaural fusion with ILDs than with ITDs (Figures 2.16 and 2.19). Moreover, data from the literature offer support for the second explanation. In contrast to ILDs, perception of ITDs and pitch is based on information conveyed in the fine structure of acoustic waveforms (Figure 1.2; Smith et al., 2002); therefore, acoustic experience prior to implantation may promote the development of neural connections underlying ITD and pitch coding (Litovsky et al., 2010; Hopyan et al., 2012). Fine structure information is less relevant for coding ILDs, which are carried mainly in the temporal envelope (Hartmann & Constan, 2002) and are represented more faithfully by envelope-based CI stimulation strategies (e.g., ACE).
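The envelope-versus-fine-structure distinction can be illustrated with a Hilbert-transform decomposition. This is our own minimal sketch of envelope extraction in general (the signal parameters are arbitrary); it is not an implementation of the ACE strategy.

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                                            # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)    # slow envelope (10 Hz)
carrier = np.sin(2 * np.pi * 1000 * t)                # temporal fine structure (1 kHz)
signal = modulator * carrier

analytic = hilbert(signal)
envelope = np.abs(analytic)                  # what envelope-based strategies keep
fine_structure = np.cos(np.angle(analytic))  # what they largely discard

# The extracted envelope tracks the 10 Hz modulator, not the 1 kHz carrier,
# so timing cues carried in the fine structure are lost.
print(np.allclose(envelope, modulator, atol=1e-6))  # True
```

Because only the envelope is retained to modulate the electrical pulse train, the sub-millisecond fine-structure timing on which ITD perception depends is unavailable to the listener.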

Consistent with the positive effect of acoustic experience shown in the present study, pre-implant hearing has also been found to enhance post-implant sound localization and music perception in CI children (Van Deun et al., 2010a; Hopyan et al., 2012), and speech perception, music perception, and ITD (but not ILD) sensitivity in post-lingually deafened adults with CIs (Rubinstein et al., 1999a; Jung et al., 2012; Litovsky et al., 2010). Greater residual hearing appears to somewhat protect the central auditory system against the deleterious effects of sensorineural hearing loss (Eisenberg, 2007).

While the small subset of CI users who demonstrated near-normal fusion of binaural input with level differences were slightly older than their peers with bilateral CIs (4.50 ± 4.85 vs. 3.33 ± 2.16 years), they also had shorter interimplant delays (1.56 ± 1.78 vs. 3.13 ± 4.35 years), greater bilateral CI experience (5.55 ± 3.14 vs. 4.58 ± 2.45 years), and more time-in-sound (9.62 ± 1.65 vs. 8.81 ± 3.22 years). This desirable mix of demographic data was associated with better fusion in the absence of interaural differences (59% vs. 42% of trials) and less EABR asymmetry (0.22 ± 0.08 vs. 0.28 ± 0.16 ms). It seems that a combination of experience listening with both CI devices and processing acoustic input prior to implantation may contribute to improvements in both perceptual and physiological integration.

2.4.7. Poorer Binaural Fusion Translates into Greater Listening Effort

Increased pupil diameter size when fusion is poor may signify additional mental processing. Effortful tasks involving System 2 place greater demands on the central executive of our working memory system, which is in charge of allocating mental resources (Baddeley, 2012). Bottom-up degradation of incoming speech signals, as in the case of CI simulations, increases mental effort and limits top-down processing (Zekveld et al., 2010; Stenfelt & Ronnberg, 2009; Pals et al., 2013; Baskent, 2012). Baskent (2012) found that top-down phonemic restoration did not occur when the number of noise-band vocoder processing channels was decreased to 8 to simulate the resolution provided by most CI devices. While bilateral CIs improve speech detection and perception in noise in children (Chadha et al., 2011; Van Deun et al., 2010b), CI users do not receive as many binaural benefits as their NH peers (Litovsky et al., 2012). As discussed earlier, fusion of binaural input by children with bilateral CIs is impaired relative to NH listeners, which may further complicate bottom-up processing.

Overall, children in the CI group used greater listening effort than NH peers during the binaural fusion task, as shown by longer mean RTs (t(38.80) = -6.45, p < 0.0001) and greater mean differences between baseline and peak pupil dilation (t(38) = -4.84, p < 0.0001). The observed difference in effort between the groups may be mediated largely by the fact that impaired hearing, in general, is more effortful (Hicks & Tharpe, 2002). However, the associations between listening effort and proportion of 1 response across all children tested (Figures 2.16 and 2.19) suggest that a poorer ability to perceive a fused image may further increase listening effort. We offer several potential explanations for the possible relationship between fusion and effort, which can be classified into three general categories: evolutionary, cognitive, and physiological. While these hypotheses are not mutually exclusive and may each have a certain degree of validity, the data from the present study offer the most support for the hypothesis concerning cognition.

Binaural fusion may be an adaptive evolutionary skill

It is possible that mammals evolved over the past 200 million years (Grothe et al., 2010) to perceive fused auditory images. The auditory system has evolved to represent the external world (or at least to support action based on its state), rather than to represent proximal stimulation. Neurons in the auditory brainstem and midbrain are tuned to specific ITDs in many birds and mammals (Carr & Konishi, 1990; Brand et al., 2002; Rose et al., 1966). Binaural sensitivity has also evolved independently in amphibians and reptiles (Schnupp & Carr, 2009). Coincidence detection in the MSO (Jeffress, 1948; Agmon-Snir et al., 1998; Grothe et al., 2010) allows normal listeners to fuse monaural signals from each ear into unitary auditory images that can be accurately localized in space (Furst et al., 1985; Colburn et al., 2006; Goupell et al., 2013; Litovsky, 1997). The perception of distinct auditory images, especially when binaural stimuli are presented within the physiological range, may not be adaptive from an evolutionary perspective and thus causes listeners with poor fusion to recruit additional mental resources from System 2 to process the novel percept. Indeed, greater pupillary dilation occurs in response to novel stimuli (e.g., Hamel, 1974). The auditory system of CI users, in particular, may not yet have adjusted to processing distinct images and thus requires greater listening effort. Equal performance to NH peers can be achieved by bringing additional cognitive resources to bear.

Bottom-up processing of non-fused auditory images is more effortful

Alternatively, the connection between fusion and effort may be understood in the context of top-down and bottom-up processing. Distorted auditory input, independent of novelty, causes the central nervous system to exert greater effort in attempting to decode the signal (Zekveld et al., 2010; Pals et al., 2013). As mentioned several times throughout this thesis, monaural input is inevitably degraded by CI stimulation schemes. The fusion of two degraded percepts may elicit additional mental computation and uncertainty that makes bottom-up processing even more effortful for CI users. In contrast to children who consistently perceive one fused image and can more automatically and effortlessly localize or interpret the simpler unitary percept using System 1, those who more frequently perceive distinct images must first dedicate more time and mental resources (System 2) to deciding whether they hear one or two sounds. These children must then determine which features of each image should receive attention and be sent for further processing in higher centers.

Physiological processing of multiple auditory images may require more effort

Finally, it seems plausible that the additional listening effort accompanying poorer fusion is a physiological response to the perception of an additional image, such as a mismatch negativity waveform (Figure 2.23; Fiedler et al., 2011). More novel or distorted percepts may result in even greater listening effort, but perhaps observed differences in effort between groups and across individuals can be largely attributed to the fact that binaural stimuli are more complex and two separate images require a substantial division in attentional resources. The data from this study do not support this hypothesis, because mean pupillary responses were not the largest for the ±24 ms ITD control conditions in which all children consistently (~90% of trials) perceived two separate sounds. If the mere perception of two separate images were itself the main reason for increases in mental effort, independent of additional mental computation, then conditions in which children consistently perceived two distinct images should have been associated with significantly greater increases in effort. However, this was not the case. In support of this line of reasoning, Kahneman and Beatty (1967) observed that pupillary changes were the smallest when it was clearest to participants that two different tones were presented in a pitch discrimination task. Pupil diameters increased in size with increasing difficulty of the discrimination.

The findings from the present study thus suggest that the cognitive explanation is the most plausible and that poorer fusion results in greater listening effort because of the mentally taxing decision-making process involved in determining whether one or two sounds were heard prior to response selection. The correlation between RTs and PCPDs supports this assertion (Figure 2.21): greater effort was associated with longer response times, which were conceivably required for additional mental computation. RTs and PCPDs were also correlated for less effortful trials with unilateral stimuli, though the relationship was slightly weaker (R = 0.40).

Decreases in listening effort may reflect normal cortical development

Listening effort also relates to time-in-sound (chronological age) in NH listeners, but not in CI users, who are affected more by the age at implantation (Figure 2.14b). Older children with NH had shorter RTs and smaller changes in pupil diameter (Figures 2.17a and 2.20a). A similar result has been found at many different stages of cognitive processing and may reflect normal development of cortical connections supporting the formation of cognitive networks (Casey et al., 2002, 2005).


Auditory cortical responses continue to mature until approximately age 20 (Ponton et al., 2000). Children with NH or unilateral CIs show a large and broad positive P1/P2 peak in their cortical evoked waveforms for the first 7 years of time-in-sound (Gordon et al., 2013a; Jiwani et al., 2013). As children reach 12 years of hearing experience, a smaller negative N1 peak bifurcates the positive peak into two separate P1 and P2 components, reflecting the development of thalamocortical and cortico-cortical connections in superficial layers of the auditory cortex and more complex auditory skills (Ponton et al., 2000; Eggermont & Ponton, 2003). The polyphasic P1-N1-P2-N2 waveform becomes clearly present in all listeners as listening experience increases beyond 12 years (Jiwani et al., 2013).

Notwithstanding that normal cortical responses emerge with CI use, an abnormally large P2 amplitude persists (Figure 2.24), which may signify increased attentional demands and/or multisensory integration (Eggermont & Ponton, 2003; Tremblay et al., 2009; Kraus & Chandrasekaran, 2010) and explain why listening effort did not decrease with longer durations of time-in-sound in CI children. While CI use drives normal developmental changes at the level of the cortex, processing and attempting to fuse spectrally degraded input still requires additional mental resources. Longer periods of auditory experience cannot overcome such device limitations. Analogous compensatory mechanisms are evident during speech and music perception (Naito et al., 2000; Limb et al., 2010).


Figure 2.24. Grand mean cortical waveforms of unilateral CI users shown for different durations of time-in-sound. Early implantation leads to cortical response peaks that develop in a normal-like fashion; however, the P2 peak remains abnormally large, relative to NH peers. Larger differences are highlighted in dark red (Jiwani et al., 2013).

Chapter 3

Music Perception with Cochlear Implants

This chapter contains a summary of the literature reviewed and methods (mcMBEA) used to answer the secondary question posed in this study regarding music perception in children who use bilateral CIs. Results and interpretations are also included in this section.

3.1. Background

The final question of this study concerns music perception and whether bilateral versus unilateral implantation enhances musical processing in deaf children. Music is considered by many to be a universal language and has been present throughout history in every culture known to mankind. Musical abilities develop in the early months of life (Trehub, 2001). Infants can detect changes in various aspects of musical stimuli, such as contour (pitch direction), interval (pitch height), scale (tonality), and rhythm. Despite differences in the acoustic features of music and speech (Smith et al., 2002) and hemispheric specializations for spectral and temporal processing (Zatorre et al., 2002), musical development parallels language development and may be critical to language acquisition (Brandt et al., 2012). In fact, musical training increases cortical plasticity, which can strengthen common subcortical circuits and lead to widespread benefits in diverse non-musical skills, such as speech perception in noise, auditory attention, and auditory working memory (Kraus et al., 2012; Jancke, 2012). The musical aspects of sign language (i.e., rhythm and expression) in young children may serve a similar developmental function as the musical components of speech (prosody).

3.1.1. Limitations of Music Perception with Cochlear Implants

Music perception is a difficult task, which recruits diverse brain regions (Limb, 2006). The melodic (pitch-based "what") and temporal (time-based "when") dimensions of music are analyzed in parallel by separate neural subsystems (Peretz et al., 2003). The auditory cortex plays a major role in the processing of pitch relations, while temporal relations are also computed by distinct regions, such as the motor cortex, cerebellum, and basal ganglia (Peretz & Zatorre, 2005). Specifically, the right auditory cortex is specialized for pitch processing, whereas the left auditory cortex plays a more important role in rhythm perception (Zatorre et al., 2002). Accurate music perception in normal listeners is facilitated by place and temporal coding of lower harmonics (Plomp, 1967), which are not well-represented by CI devices with poor frequency and temporal resolution, due to few electrodes, current spread, envelope-based processing, and low stimulation rates (e.g., Zeng, 2002). CIs provide fewer than 8 effective channels, but music perception continues to improve up to ~60 channels in normal listeners (Kong et al., 2004).

Although current spread prevents distortion of binaural processing by small place mismatches, it also reduces the number of independent channels represented by the CI, thereby limiting the capacity for pitch discrimination and music perception (Gordon et al., 2012).

3.1.2. Outcomes of Music Perception with Cochlear Implants

Considering the aforementioned predicted music perception difficulties with CIs, it is not surprising that when music appreciation was evaluated with a self-assessment scale in adults before deafness and after implantation, enjoyment decreased from a mean of 8.7/10 to 2.6/10 (Mirza et al., 2003; Leal et al., 2003). Dr. Eyries's patient first famously described the pitch associated with CI stimulation as "the turning of a roulette wheel" (Djourno & Eyries, 1957). Studies since then have shown that CI users tend to perceive rhythm normally and more accurately than pitch or timbre (McDermott, 2004; McDermott & McKay, 1997; Gfeller et al., 2002b). When presented with pairs of sound sequences varying in rhythm, but not pitch, and asked to determine whether they were the same or different, adult CI users achieved a mean score of 88% (Gfeller & Lansing, 1991). Similarly, melodies with more rhythmic patterns were more easily recognized (Schulz & Kerber, 1994). On the other hand, the minimum interval that CI users can discriminate is larger than 7 semitones on average, compared with well under 1 semitone for NH peers (Gfeller et al., 2002a; Zarate et al., 2012). Increased activation in the frontal cortex during melody versus rhythm perception may reflect greater mental effort (Limb et al., 2010). Children with CIs perform even more poorly than adult CI users (Jung et al., 2012).
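For context, the semitone figures above translate directly into frequency ratios via the equal-tempered relation 2^(n/12). A small illustrative sketch (the 7-semitone and 1-semitone values come from the literature cited above):

```python
def semitone_ratio(n):
    """Frequency ratio spanned by n equal-tempered semitones (2 ** (n/12))."""
    return 2.0 ** (n / 12.0)

# Mean CI discrimination limen reported above: > 7 semitones,
# vs. well under 1 semitone for NH listeners
print(round(semitone_ratio(7), 3))  # -> 1.498 (about a 50% frequency change)
print(round(semitone_ratio(1), 3))  # -> 1.059 (about a 6% frequency change)
```

In other words, many CI users need roughly a 50% change in frequency to hear a pitch difference that NH listeners detect at a few percent.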


3.1.3. The Montreal Battery of Evaluation of Amusia

The Montreal Battery of Evaluation of Amusia (MBEA; Peretz et al., 2003) has proven to be a sensitive, reliable, and valid tool for assessing music perception in NH listeners, adult CI users, and unilaterally implanted children (Cooper et al., 2008; Hopyan et al., 2012). Results from MBEA tests have been consistent with previous studies with CI users: performance on rhythm tests is better than performance on pitch tests. Hopyan and colleagues (2012) also noted that pre-implant acoustic hearing promotes music perception in CI children. However, while many adults with bilateral CIs report more positive listening experiences with 2 CIs compared with one (Veekmans et al., 2009), music perception has not yet been evaluated objectively in pre/peri-lingually deaf children with bilateral CIs. Positive listening experiences may also be independent of psychophysical performance. We hypothesized that music perception would be similar between children using unilateral and bilateral CIs, due to device limitations in frequency and temporal resolution.

3.2. Methods

3.2.1. Participants

Thirty-seven children took part in a Modified Child's MBEA test (mcMBEA): 14 with bilateral CIs (mean age = 11.69 ± 3.92 years) and 23 with normal hearing (mean age = 11.89 ± 3.24 years). The groups were matched in terms of age (mcMBEA: t(35) = 0.17, p = 0.87) and musical training (t(32.47) = 1.35, p = 0.19; NH mean = 3.05 ± 3.39 years; CI mean = 1.96 ± 1.47 years).
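The fractional degrees of freedom in the musical-training comparison (t(32.47)) indicate a Welch (unequal-variance) t-test, and the reported statistic can be reproduced from the summary values above. A minimal pure-Python sketch:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t-test from summary statistics (unequal variances)."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Musical training: NH = 3.05 +/- 3.39 years (n=23), CI = 1.96 +/- 1.47 years (n=14)
t, df = welch_t(3.05, 3.39, 23, 1.96, 1.47, 14)
print(round(t, 2), round(df, 2))  # ~1.35, ~32.5 (thesis reports t(32.47) = 1.35)
```

The small discrepancy in df (32.45 from the rounded summary statistics vs. 32.47 reported) reflects rounding of the means and SDs, not a different test.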

All child CI users were recruited from the Cochlear Implant Program at the Hospital for

Sick Children in Toronto (Table 3) and had bilateral severe-to-profound sensorineural hearing loss that occurred in childhood; hearing loss was progressive in 3 children. Two children had a period of usable residual hearing prior to implantation. Duration of time-in-sound was calculated as the sum of the duration of CI experience and pre-implant residual hearing (time-in-sound = 8.94 ± 2.97 years; bilateral CI experience = 5.57 ± 1.55 years).

TABLE 3. CI Participant Demographic Information for the mcMBEA.

Child   Etiology      CI1 Age   CI1 Ear   CI1 Device   CI2 Age   CI2 Device   Inter-implant    Age at Test   Bilateral CI
        (years)                            (years)                            Delay (years)    (years)       Experience (years)
CI2     Unknown       1.21      R         24CA         8.49      24RE         7.29             13.99         5.39
CI3     Connexin26    2.27      L         24CA         5.58      24RE         3.31             12.14         6.48
CI4     Usher         1.12      L         24CS         4.90      24RE         3.77             11.63         6.67
CI5     Usher         0.73      R         24RE         1.62      24RE         0.90             8.91          7.22
CI6     Unknown       4.96      L         24CS         15.40     24RE         10.44            17.95         2.50
CI8     Unknown       2.92      R         24RE         14.15     24RE         11.23            17.97         3.76
CI15    Connexin26    1.73      Both      24RE         1.73      24RE         0                8.21          6.38
CI18    Unknown       1.28      Both      24RE         1.28      24RE         0                8.20          6.82
CI20    Unknown       4.54      Both      24RE         4.54      24RE         0                9.84          5.19
CI22    Ototoxicity   12.15     Both      24RE         12.15     24RE         0                16.97         4.76
CI23    Connexin26    0.79      Both      24RE         0.79      24RE         0                5.95          5.08
CI24    Unknown       4.51      Both      24RE         4.51      24RE         0                8.62          4.04
CI25    Connexin26    0.95      Both      24CA         0.95      24CA         0                9.45          8.40
CI29    Unknown       8.44      Both      24RE         8.44      24RE         0                13.83         5.27

Data is provided for each CI user in the present study who participated in the mcMBEA (n=14), including etiology, age at implantation, interimplant delay, age at test, and bilateral CI experience.

Anatomy and etiology of deafness

High resolution computed tomography scans confirmed normal cochlear anatomy in all but 2 children: child CI2 had a Mondini malformation (incomplete partition type II) and child CI22 had an enlarged left vestibular aqueduct. Four children had GJB2 gene mutations causing deficiencies in the Connexin 26 gap junction protein (Propst et al., 2006), while smaller subsets had Usher Syndrome (n=2) or received ototoxic medications at a young age (n=1). The etiology of deafness was unknown in the remaining 7 children.


CI devices and age at implantation

Children CI2-8 received their first devices at 2.20 ± 1.58 years of age and were provided with second devices after 6.16 ± 4.15 years of unilateral CI stimulation, whereas children CI15-29 received their implants simultaneously at 4.26 ± 4.11 years of age. Children received different device generations (Nucleus 24CA, CS, or RE) with different current conversions.

3.2.2. Modified Child’s Montreal Battery of Evaluation of Amusia

A modified version of the child's MBEA (Lebrun et al., 2012), the mcMBEA, was used to evaluate music perception after children were given a sufficient break from the fusion task, or on a separate day. The child's MBEA consists of 5 subtests: Scale, Contour, Interval, Rhythm, and Memory, with fundamental frequencies ranging from 247-988 Hz; stimuli with lower fundamental frequencies thus fall outside the frequency ranges of most CIs. The 10 test trials composed of piano tones in the original child's MBEA were removed because CI users exhibit poor pitch perception for piano tones (Galvin et al., 2008), and 20 additional trials were added: 10 higher frequency trials and 10 lower frequency trials, with 2 high frequency trials and 2 low frequency trials added to each subtest (Cooper et al., 2008). Sample Manager (Audiofile) was used to raise the frequency of 10 randomly selected trials by 2 octaves and lower the frequency of 10 additional trials by 1 octave. The high frequency trials were added to determine whether CI users make use of place pitch cues, which are more accessible at higher frequencies (Gfeller et al., 2002a), while the low frequency trials were incorporated to investigate perception when CI users are completely dependent on the temporal envelope in the absence of place pitch cues. Musical stimuli, ranging from 60-70 dB SPL, were presented in a 2.13 m x 2.13 m sound-attenuating booth and played through Windows Media Player on a Dell Vostro 1520 laptop computer and an external Centrios speaker system (model no. 1410106) at zero degrees azimuth. Levels were calibrated using a sound-level meter. Listeners were seated 1 m from the speakers.
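Octave transposition is a simple doubling or halving of frequency (×2 per octave). Assuming equal-tempered octaves, the manipulations above map the 247-988 Hz fundamentals as follows (an illustrative sketch of the arithmetic, not the Sample Manager procedure itself):

```python
def transpose(freq_hz, octaves):
    """Shift a frequency by a whole number of octaves (x2 per octave up)."""
    return freq_hz * 2.0 ** octaves

# Fundamental frequencies in the child's MBEA span roughly 247-988 Hz
low, high = 247.0, 988.0
print(transpose(low, +2), transpose(high, +2))  # up 2 octaves: 988.0, 3952.0 Hz
print(transpose(low, -1), transpose(high, -1))  # down 1 octave: 123.5, 494.0 Hz
```

The high-frequency trials thus land well within the analysis range of CI speech processors, while the low-frequency trials push fundamentals below it, forcing reliance on temporal envelope cues.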

Following modifications, the mcMBEA comprised 110 test trials and 10 practice trials and lasted approximately 35 minutes. For the first four subtests, half of the trials contained identical pairs of melodies, while the other half consisted of melodies that differed by one note. Children were asked whether the pairs of melodies they heard were the same or different (Figure 3.1). Half of the fifth and final subtest, the surprise/incidental memory test, contained melodies previously presented in the first four subtests, while the other half consisted of new melodies. For this subtest, children were asked whether they had heard the melody in the preceding subtests or whether it was novel. Results from the present study were compared with those obtained from unilateral CI users (n=23; Hopyan et al., 2012).

Figure 3.1. Example of musical stimuli used. The original melody is shown in A, its scale-violated version in B, contour-violated version in C, interval alternate in D, and rhythm alternate in E. Asterisks denote the changed notes (Lebrun et al., 2012).

3.2.3. Data Analysis

Repeated-measures ANOVA, Student's t tests, and linear regression were conducted using IBM SPSS Statistics Version 22 (SPSS Inc., Chicago, Illinois, USA). Pairwise post-hoc analyses were implemented for repeated contrasts, and Bonferroni adjustment of the significance level (α = 0.05) was used where necessary to correct for multiple comparisons and limit the family-wise error rate.
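Bonferroni adjustment simply divides the family-wise α across the number of comparisons performed. A minimal sketch (the three-comparison example is hypothetical, chosen only to match the three pairwise group contrasts reported later):

```python
def bonferroni_alpha(alpha, k):
    """Per-comparison significance level after Bonferroni correction."""
    return alpha / k

# e.g., three pairwise group comparisons at a family-wise alpha of 0.05
print(bonferroni_alpha(0.05, 3))  # ~0.0167 per comparison
```

A result is then declared significant only if its uncorrected p-value falls below this per-comparison threshold.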

3.3. Results

3.3.1. Performance on Subtests

Factorial repeated-measures ANOVA revealed a significant main effect of mcMBEA subtest (F(4,54) = 8.34, p < 0.0001), a significant effect of group (F(2,57) = 57.52, p < 0.0001), and a significant interaction between subtest and group (F(8,108) = 2.736, p = 0.009; Figure 3.2). Post-hoc tests showed that NH children performed most accurately across subtests (NH vs. bilateral CI users: p < 0.0001; NH vs. unilateral CI users: p < 0.0001) and there was no difference in performance between CI groups (p = 1.0). Bilateral CI listeners performed slightly better on the Memory subtest (71.43 ± 14.64% vs. 63.91 ± 12.06%) than unilaterally implanted peers from a previous study (Hopyan et al., 2012). Within-group analyses indicated that normal listeners performed most accurately on the Rhythm subtest and least accurately on Scale (F(4,88) = 7.88, p < 0.0001), which was consistent with previous findings, whereas bilateral CI listeners performed most accurately on the Rhythm and Memory subtests and least accurately on Scale (F(4,52) = 3.13, p = 0.02).



Figure 3.2. Mean performances across NH (n=23), bilateral CI (n=14), and unilateral CI (n=23) groups are indicated for each mcMBEA subtest. Overall scores are also shown. NH children achieved the highest scores (p < 0.0001) and there was no difference between CI groups (p = 1.0). Data from unilateral users was previously collected and published (Hopyan et al., 2012).

3.3.2. Performance on High and Low Frequency Trials

Whereas the addition of higher frequency trials did not result in improvements in performance for NH or bilaterally implanted children (Figure 3.3; NH: t(22) = 0.12, p = 0.90; CI: t(13) = 1.62, p = 0.13), adding lower frequency trials diminished accuracy in both groups (NH: t(22) = 4.22, p < 0.0005; CI: t(13) = 3.52, p < 0.005).



Figure 3.3. Performance for each NH (n=23) and CI (n=14) groups is shown for standard frequencies (i.e., excluding high/low frequency trials) and when either high or low frequency trials were added. Adding low frequency stimuli reduced accuracy in both groups (p < 0.005).

3.3.3. Predicting Music Perception

As shown in Figures 3.4a and 3.4b, NH children performed more accurately on the mcMBEA at older ages (R = 0.58, p < 0.005), whereas in the bilateral CI group, bilateral CI experience was most predictive of music perception (R = 0.44, p = 0.11; age at CI-1: R = 0.18, p = 0.55; interimplant delay: R = 0.03, p = 0.92; time-in-sound: R = 0.12, p = 0.69). This relationship improved when outlier CI6, who had the least bilateral CI experience (2.50 years; unfilled point in Figure 3.4b), was excluded (R = 0.61, p = 0.03). Unilateral CI experience was not related to music perception in children with bilateral CIs or those with unilateral CIs from the previous study (p > 0.05; Hopyan et al., 2012).



Figure 3.4. NH children (n=23) achieved greater accuracy on the mcMBEA when they were tested at older ages (Figure 3.4a; R = 0.58, p < 0.005). CI children (n=14) improved music perception with greater durations of bilateral CI use (Figure 3.4b; R = 0.44, p = 0.11). Outlier CI6 is indicated by the unfilled point.

3.4. Discussion

The goal of this portion of the present study was to determine whether receiving a second CI enhances music perception in deaf children. We hypothesized that children receiving a second CI would perceive music no worse than their peers with unilateral devices.

Children with bilateral CIs did not perform significantly worse than their unilaterally implanted peers when evaluated on music perception with a modified version of the child's MBEA (mcMBEA). Both CI and NH children were less accurate in perceiving music when low frequency trials were added to the test battery. A greater duration of bilateral CI experience may improve musical abilities in children using 2 CI devices.


CI users are inherently unable to perceive music normally. Although music and speech both arise from combinations of fixed numbers of smaller elements (tones and phonemes, respectively), music is more acoustically complex and abstract than speech, has wider dynamic and frequency ranges, and is comprised of more dynamic fine spectral and temporal characteristics (Zatorre et al., 2002; Limb, 2006; Limb et al., 2010). Electrical representation of musical stimuli is inherently limited by CI stimulation schemes that are designed to extract the temporal envelope and maximize speech perception (Wilson et al., 2004; Rubinstein & Hong, 2003). Music perception by CI users is further hindered by a suboptimal number of distinct frequency channels (Kong et al., 2004; Cooper et al., 2008) and spread of electrical current in the cochlea (Hughes & Abbas, 2006; Gordon et al., 2012).

3.4.1. Music Perception is Similar in Children Using Bilateral and Unilateral CIs

While a second CI device enhances many binaural listening abilities, such as spatial unmasking, binaural summation, and sound localization (Litovsky et al., 2012), bilateral implantation does not seem to overcome the aforementioned CI device limitations that compromise music perception in deaf children. Veekmans and colleagues (2009) used the Munich Music Questionnaire to assess music enjoyment in post-lingually deafened adults who used both unilateral and bilateral CIs and found that larger percentages of bilateral CI users reported being able to recognize many elements of music, such as melody and timbre, though the difference was not statistically significant. The authors suggested that bilateral implantation may improve music perception by capturing the better ear and reducing the number of cochlear (dead) regions (Moore et al., 2001) between the two ears that are not sufficiently stimulated due to lack of neural integrity. However, subjects tested by Veekmans and colleagues (2009) (and Kan and colleagues, 2013) were considerably older than children who participated in the present study and thus had several more decades of pre-implant listening experience. Music enjoyment may also be driven more by sound quality than perceived accuracy. Although small benefits from bilateral implantation were similarly observed for some of the mcMBEA subtests in the present study, these differences also failed to reach significance.

The greatest benefits of bilateral implantation for music perception may be in the cognitive domain, as the largest difference between bilateral and unilateral CI users in the present study was on the Memory subtest (71.43 ± 14.64% vs. 63.91 ± 12.06%). Relatively strong performance on the Memory subtest further supports Hopyan and colleagues’ (2012) argument that superior memory for melodies is a phenomenon unique to CI children, relative to their adult counterparts (Cooper et al., 2008).

3.4.2. CI Users Perceive Musical Rhythm Better than Pitch

Our finding that rhythm perception by CI users was superior to pitch perception (Figure 3.2) has been corroborated by several previous investigations (e.g., Gfeller & Lansing, 1991; Cooper et al., 2008; Hopyan et al., 2012) and may reflect temporal resolution adequate for detecting rhythmic patterns in music (McDermott, 2004). Lower gap detection thresholds have been associated with better speech perception in CI users (Muchnik et al., 1994). Conversely, neural adaptation to higher pulse presentation rates impairs pitch perception (Davids et al., 2008b; Zeng, 2002) and causes CI users to rely more on place pitch coding (McDermott & McKay, 1997), which is itself restricted by dead cochlear regions (Moore et al., 2001) and a lack of independent channels (Cooper et al., 2008). Not surprisingly, CI users are heavily dependent on rhythm when attempting to identify different melodies (Gfeller et al., 2002a) and struggle to recognize melodies in the absence of rhythm cues (Kong et al., 2004). Rhythm perception may also underlie perception of speech and emotions by CI users (Leal et al., 2003; Hopyan et al., 2011). As Hopyan and colleagues (2012) noted, pitch variations in the Rhythm subtest may account for poorer than normal perception of rhythm on the MBEA.

The finding that performance was least accurate on the Scale subtest in both NH and CI children is consistent with past research (Hopyan et al., 2012) and may reflect the fact that perception of scale/tonality (≤ 1 semitone changes) is a more complex and higher-order task than contour and interval perception and is processed by a specialized system in the prefrontal cortex (Peretz et al., 2003). One might say that scale is to contour/interval as fusion is to lateralization.

3.4.3. The Addition of Low Frequency Trials Diminishes Music Perception

Administering test batteries with wider ranges of stimulus frequencies may offer insights into music perception, as mcMBEA scores were affected more by lower than higher frequencies.

As already mentioned, CI users primarily use differences in the place of stimulation within the cochlea, as opposed to the rate of neural firing, to code changes in pitch (Moore, 2003). Given that the MBEA stimuli with lower fundamental frequencies fall outside of the range of frequency channels in most CIs, Cooper and colleagues (2008) recommended transposing MBEA melodies up two octaves to maximize place pitch perception. Gfeller and colleagues (2002a) found that pure tone frequency discrimination was best at 1600 Hz, i.e., beyond the upper limit of fundamental frequencies presented in the MBEA. Cooper and colleagues (2008) also pointed out that melody transposition below the lower limit of fundamental frequencies used in the MBEA would help to define the limits of temporal pitch coding in CI users.


Data from the present study (Figure 3.3) indicate that, for both NH and CI children, the addition of higher frequency trials did not improve music perception, whereas lower frequency trials significantly reduced accuracy in both groups (NH: t(22) = 4.22, p < 0.0005; CI: t(13) = 3.52, p < 0.005). Higher frequencies may not have improved accuracy for normal listeners due to ceiling effects, while poorer neural survival in more basal cochlear regions may have limited place pitch coding of higher frequencies by CI users (Nadol, 1997). In fact, Gfeller and colleagues (2002a) found greater variability in just noticeable differences at 3200 Hz compared to 1600 Hz. CI users also rated instruments played in higher frequency ranges as having poorer timbre quality (Gfeller et al., 2002b). Alternatively, the addition of a larger number of high frequency trials to the MBEA may reveal a different result. Poorer performance with lower frequencies added to the MBEA is also consistent with results from the study conducted by Gfeller and colleagues (2002a) and highlights the importance of place coding in music perception for both NH and CI listeners.

3.4.4. Bilateral CI Experience May Improve Music Perception

Older NH children achieved more accurate music perception (Figure 3.4a), likely due to cognitive maturation (Casey et al., 2005) and ongoing musical development (Brandt et al., 2012). Hopyan and colleagues (2012) found that children with unilateral CIs who were implanted at older ages had better music perception due to better pre-implant acoustic hearing (p < 0.05), independent of duration of CI use (p > 0.05). By contrast, later ages at implantation did not predict better music perception in bilateral CI users in the present study (R = 0.18, p > 0.05). As shown in Figure 3.4b, bilateral CI experience was the best predictor of mcMBEA score (R = 0.61, p < 0.05, without outlier CI6). Some children with greater than 6 years of bilateral experience were even able to achieve near-normal scores. Greater bilateral CI use has also been found to improve subjective perception of music in post-lingually deafened adults (Veekmans et al., 2009). Perhaps with greater experience listening with both CIs, users may uncover some of the potential benefits of bilateral CI use for music perception cited by Veekmans and colleagues (2009).

Enhanced music perception with greater use of both devices may be mediated by near-normal bilateral cortical activity promoted by 3-4 years of bilateral CI experience (Gordon et al., 2010b). Music perception depends on activity in both auditory cortices more than speech perception does (Peretz & Zatorre, 2005; Limb, 2006). Longer durations of bilateral CI use in deaf children also promote sound localization (Litovsky et al., 2006; Grieco-Calub et al., 2008), lateralization of ITDs (unpublished data), and fusion of binaural input with level differences.

Chapter 4

Conclusions and Future Directions

4.1. Conclusions

Restoration of higher-level binaural fusion may not be necessary to provide some of the benefits in binaural hearing that deaf children receive from bilateral implantation (Figure 4.1). Bilateral cochlear implantation allows children to fuse bilateral auditory input most similarly to normal when differences in interaural level, the dominant binaural cue for CI users, are present. Asymmetries in auditory brainstem latencies may underlie poorer fusion of bilateral input without differences in level, reflecting abnormal function in the bilateral auditory pathways. Fusion of bilateral input with balanced levels is superior in children who received their first cochlear implant at a later age, which may be due to a longer period of acoustic hearing. Reaction time and pupil diameter show agreement in measuring listening effort in children; as expected, improvements in binaural fusion reduced listening effort, likely as a result of improved bottom-up processing.

While the use of bilateral cochlear implants did not appear to benefit music perception more than the use of a unilateral cochlear implant, further experience listening with both devices may improve music perception through the development of near-normal bilateral cortical activity. Future studies should investigate binaural integration at the cortical level and perception of a wider range of interaural/implant time differences to further define binaural hearing with bilateral cochlear implants.

4.2. Future Directions

The present investigation focused on several diverse areas of psychology in children with bilateral cochlear implants and has implications for clinical procedures and future lines of investigation. In light of the association between binaural fusion and brainstem asymmetries, electrically evoked auditory brainstem responses may be used to gain a sense of binaural hearing in young children who may not provide reliable behavioural responses. Similarly, pupillometry may take the place of behavioural measures of listening effort, such as reaction times, which were correlated with pupillary changes.

In order to further define the neural underpinnings of binaural fusion, or lack thereof, in children with cochlear implants, integration should be measured at the cortical level using electroencephalography to calculate mismatch negativity waveforms. On a behavioural level, fusion should be assessed with larger interimplant time differences (< ±24 ms) and interimplant time difference lateralization should be tested with smaller interimplant time differences (< ±1 ms) to delineate the psychophysical range(s) provided by bilateral CIs.

Regarding cochlear implant engineering: 1) shorter electrode arrays that preserve residual hearing could maximize fusion of input without unbalanced levels; 2) more sophisticated current-focusing schemes could reduce current spread and spectral degradation, thereby optimizing bottom-up encoding and reducing effort; and 3) specialized bandpass filters that improve resolution of the fundamental frequency could enhance music perception.

Figure 4.1. Some of the inspiring children with CIs standing with their families. These children participated in studies conducted by the Cochlear Implant Laboratory at the Hospital for Sick Children and without their help, none of the work reviewed in this thesis would have been possible. Thank you. Picture is courtesy of the Cochlear Implant Laboratory.


Chapter 5

References

1) Aharonson, V., & Furst, M. (2001). A model for sound lateralization. J Acoust Soc Am, 109(6), 2840-2851.

2) Agmon-Snir, H., Carr, C. E., & Rinzel, J. (1998). The role of dendrites in auditory coincidence detection. Nature, 393(6682), 268-272.

3) Akeroyd, M. A. (2006). The psychoacoustics of binaural hearing. Int J Audiol, 45(2 Suppl), S25-S33.

4) Andreassi, J. L. (2007). Psychophysiology: human behavior & physiological response. (Chapter 12). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

5) Aronoff, J. M., Yoon, Y. S., Freed, D. J., Vermiglio, A. J., Pal, I., & Soli, S. D. (2010). The use of interaural time and level difference cues by bilateral cochlear implant users. J Acoust Soc Am, 127(3), 87-92.

6) Babkoff, H., & Sutton, S. (1966). End point of lateralization for dichotic clicks. J Acoust Soc Am, 39(1), 87-102.

7) Baddeley, A. (2012). Working memory: theories, models, and controversies. Annu Rev Psychol, 63, 1-29.

8) Baskent, D. (2012). Effect of speech degradation on top-down repair: phonemic restoration with simulations of cochlear implants and combined electric-acoustic stimulation. J Assoc Res Otolaryngol, 13(5), 683-692.

9) Beatty, J. (1982a). Phasic not tonic pupillary responses vary with auditory vigilance performance. Psychophysiology, 19(2), 167-172.

10) Beatty, J. (1982b). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol Bull, 91(2), 276-292.

11) Beckius, G. E., Batra, R., & Oliver, D. L. (1999). Axons from anteroventral cochlear nucleus that terminate in medial superior olive of cat: observations related to delay lines. J Neurosci, 19(8), 3146-3161.

12) Bernstein, L. R. (2001). Auditory processing of interaural timing information: new insights. J Neurosci Res, 66(6), 1035-1046.

13) Bess, F. H., & Tharpe, A. M. (1986). An introduction to unilateral sensorineural hearing loss in children. Ear Hear, 7(1), 3-13.


14) Blegvad, B. (1975). Binaural summation of surface-recorded electrocochleographic responses in normal-hearing subjects. Scand Audiol, 4(4), 233-238.

15) Boex, C., de Balthasar, C., Kos, M. I., & Pelizzone, M. (2003). Electrical field interactions in different cochlear implant systems. J Acoust Soc Am, 114(4 Pt 1), 2049-2057.

16) Brand, A., Behrend, O., Marquardt, T., McAlpine, D., & Grothe, B. (2002). Precise inhibition is essential for microsecond interaural time difference coding. Nature, 417(6888), 543-547.

17) Brandt, A., Gebrian, M., & Slevc, L. R. (2012). Music and early language acquisition. Front Psychol, 3, 327.

18) Breebaart, J., van de Par, S., & Kohlrausch, A. (2001). Binaural processing model based on contralateral inhibition. I. Model structure. J Acoust Soc Am, 110(2), 1074-1088.

19) Bronkhorst, A. W., & Plomp, R. (1988). The effect of head-induced interaural time and level differences on speech intelligibility in noise. J Acoust Soc Am, 83(4), 1508-1516.

20) Burkholder, R. A., & Pisoni, D. B. (2003). Speech timing and working memory in profoundly deaf children after cochlear implantation. J Exp Child Psychol, 85(1), 63-88.

21) Carr, C. E., & Konishi, M. (1990). A circuit for detection of interaural time differences in the brain stem of the barn owl. J Neurosci, 10(10), 3227-3246.

22) Casey, B. J., Galvan, A., & Hare, T. A. (2005). Changes in cerebral functional organization during cognitive development. Curr Opin Neurobiol, 15(2), 239-244.

23) Casey, B. J., Tottenham, N., & Fossella, J. (2002). Clinical, imaging, lesion, and genetic approaches toward a model of cognitive control. Dev Psychobiol, 40(3), 237-254.

24) Chadha, N. K., Papsin, B. C., Jiwani, S., & Gordon, K. A. (2011). Speech detection in noise and spatial unmasking in children with simultaneous versus sequential bilateral cochlear implants. Otol Neurotol, 32(7), 1057-1064.

25) Cheng, A. K., Grant, G. D., & Niparko, J. K. (1999). Meta-analysis of pediatric cochlear implant literature. Ann Otol Rhinol Laryngol Suppl, 177, 124-128.

26) Chermak, G. D., & Lee, J. (2005). Comparison of children's performance on four tests of temporal resolution. J Am Acad Audiol, 16(8), 554-563.

27) Cohen, L. T., Richardson, L. M., Saunders, E., & Cowan, R. S. (2003). Spatial spread of neural excitation in cochlear implant recipients: comparison of improved ECAP method and psychophysical forward masking. Hear Res, 179(1-2), 72-87.

28) Colburn, H. S. (2006). The perceptual consequences of binaural hearing. Int J Audiol, 45(1 Suppl), S34-S44.


29) Colburn, H. S. (1973). Theory of binaural interaction based on auditory-nerve data. I. General strategy and preliminary results on interaural discrimination. J Acoust Soc Am, 54(6), 1458-1470.

30) Colburn, H. S. (1977). Theory of binaural interaction based on auditory-nerve data. II. Detection of tones in noise. J Acoust Soc Am, 61(2), 525-533.

31) Cooper, W. B., Tobey, E., & Loizou, P. C. (2008). Music perception by cochlear implant and normal hearing listeners as measured by the Montreal Battery for Evaluation of Amusia. Ear Hear, 29(4), 618-626.

32) Darwin, C. (1965). The expression of the emotions in man and animals. Chicago: University of Chicago Press.

33) Davids, T., Valero, J., Papsin, B. C., Harrison, R. V., & Gordon, K. A. (2008a). Effects of stimulus manipulation on electrophysiological responses in pediatric cochlear implant users. Part I: duration effects. Hear Res, 244(1-2), 7-14.

34) Davids, T., Valero, J., Papsin, B. C., Harrison, R. V., & Gordon, K. A. (2008b). Effects of stimulus manipulation on electrophysiological responses of pediatric cochlear implant users. Part II: rate effects. Hear Res, 244(1-2), 15-24.

35) D'Elia, A., Bartoli, R., Giagnotti, F., & Quaranta, N. (2012). The role of hearing preservation on electrical thresholds and speech performances in cochlear implantation. Otol Neurotol, 33(3), 343-347.

36) Delgutte, B. (1996). Physiological models for basic auditory percepts. In Hawkins, H. L., McMullen, T. A., & Fay, R. R. (Eds.), Auditory computation (pp. 185-186). New York, NY: Springer.

37) Djourno, A., & Eyries, C. (1957). [Auditory prosthesis by means of a distant electrical stimulation of the sensory nerve with the use of an indwelt coiling]. Presse Med, 65(63), 1417.

38) Djourno, A., Eyries, C., & Vallancien, B. (1957). [Electric excitation of the cochlear nerve in man by induction at a distance with the aid of micro-coil included in the fixture]. C R Seances Soc Biol Fil, 151(3), 423-425.

39) Dobie, R. A., & Norton, S. J. (1980). Binaural interaction in human auditory evoked potentials. Electroencephalogr Clin Neurophysiol, 49(3-4), 303-313.

40) Eggermont, J. J., & Ponton, C. W. (2003). Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: correlations with changes in structure and speech perception. Acta Otolaryngol, 123(2), 249-252.


41) Eggermont, J. J., & Salamy, A. (1988). Maturational time course for the ABR in preterm and full term infants. Hear Res, 33(1), 35-47.

42) Eisenberg, L. S. (2007). Current state of knowledge: speech recognition and production in children with hearing impairment. Ear Hear, 28(6), 766-772.

43) Fayad, J., Linthicum, F. H., Jr., Otto, S. R., Galey, F. R., & House, W. F. (1991). Cochlear implants: histopathologic findings related to performance in 16 human temporal bones. Ann Otol Rhinol Laryngol, 100(10), 807-811.

44) Fender, D., & Julesz, B. (1967). Extension of Panum's fusional area in binocularly stabilized vision. J Opt Soc Am, 57(6), 819-830.

45) Fiedler, A., Schroter, H., Seibold, V. C., & Ulrich, R. (2011). The influence of dichotical fusion on the redundant signals effect, localization performance, and the mismatch negativity. Cogn Affect Behav Neurosci, 11(1), 68-84.

46) Finley, C. C., Holden, T. A., Holden, L. K., Whiting, B. R., Chole, R. A., Neely, G. J., Hullar, T. E., & Skinner, M. W. (2008). Role of electrode placement as a contributor to variability in cochlear implant outcomes. Otol Neurotol, 29(7), 920-928.

47) Fitzpatrick, D. C., Kuwada, S., & Batra, R. (2002). Transformations in processing interaural time differences between the superior olivary complex and inferior colliculus: beyond the Jeffress model. Hear Res, 168(1-2), 79-89.

48) Furst, M., Levine, R. A., & McGaffigan, P. M. (1985). Click lateralization is related to the beta component of the dichotic brainstem auditory evoked potentials of human subjects. J Acoust Soc Am, 78(5), 1644-1651.

49) Galvin, J. J., 3rd, Fu, Q. J., & Oba, S. (2008). Effect of instrument timbre on melodic contour identification by cochlear implant users. J Acoust Soc Am, 124(4), 189-195.

50) Galvin, K. L., Holland, J. F., & Hughes, K. C. (2014). Longer-term functional outcomes and everyday listening performance for young children through to young adults using bilateral implants. Ear Hear, 35(2), 171-182.

51) Galvin, K. L., Mok, M., & Dowell, R. C. (2007). Perceptual benefit and functional outcomes for children using sequential bilateral cochlear implants. Ear Hear, 28(4), 470-482.

52) Geers, A. E., & Moog, J. S. (1978). Syntactic maturity of spontaneous speech and elicited imitations of hearing-impaired children. J Speech Hear Disord, 43(3), 380-391.

53) Geers, A. E., Nicholas, J. G., & Sedey, A. L. (2003). Language skills of children with early cochlear implantation. Ear Hear, 24(1 Suppl), S46-S58.


54) Gfeller, K., & Lansing, C. R. (1991). Melodic, rhythmic, and timbral perception of adult cochlear implant users. J Speech Hear Res, 34(4), 916-920.

55) Gfeller, K., Turner, C., Mehr, M., Woodworth, G., Fearn, R., Knutson, J. F., Witt, S., & Stordahl, J. (2002a). Recognition of familiar melodies by adult cochlear implant recipients and normal-hearing adults. Cochlear Implants Int, 3(1), 29-53.

56) Gfeller, K., Witt, S., Woodworth, G., Mehr, M. A., & Knutson, J. (2002b). Effects of frequency, instrumental family, and cochlear implant type on timbre recognition and appraisal. Ann Otol Rhinol Laryngol, 111(4), 349-356.

57) Giolas, T. G., & Wark, D. J. (1967). Communication problems associated with unilateral hearing loss. J Speech Hear Disord, 32(4), 336-343.

58) Gordon, K. A., Jiwani, S., & Papsin, B. C. (2013a). Benefits and detriments of unilateral cochlear implant use on bilateral auditory development in children who are deaf. Front Psychol, 4(719), 1-14.

59) Gordon, K. A., Jiwani, S., & Papsin, B. C. (2011a). What is the optimal timing for bilateral cochlear implantation in children? Cochlear Implants Int, 12(2 Suppl), S8-S14.

60) Gordon, K. A., & Papsin, B. C. (2009). Benefits of short interimplant delays in children receiving bilateral cochlear implants. Otol Neurotol, 30(3), 319-331.

61) Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2003). Activity-dependent developmental plasticity of the auditory brain stem in children who use cochlear implants. Ear Hear, 24(6), 485-500.

62) Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2006). An evoked potential study of the developmental time course of the auditory nerve and brainstem in children using cochlear implants. Audiol Neurotol, 11, 7-23.

63) Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2007a). Auditory brainstem activity and development evoked by apical versus basal cochlear implant electrode stimulation in children. Clin Neurophysiol, 118(8), 1671-1684.

64) Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2005a). Effects of cochlear implant use on the electrically evoked middle latency response in children. Hear Res, 204(1-2), 78-89.

65) Gordon, K. A., Papsin, B. C., & Harrison, R. V. (2004). Toward a battery of behavioral and objective measures to achieve optimal cochlear implant stimulation levels in children. Ear Hear, 25(5), 447-463.

66) Gordon, K. A., Salloum, C., Toor, G. S., van Hoesel, R., & Papsin, B. C. (2012). Binaural interactions develop in the auditory brainstem of children who are deaf: effects of place and level of bilateral electrical stimulation. J Neurosci, 32(12), 4212-4223.


67) Gordon, K. A., Tanaka, S., & Papsin, B. C. (2005b). Atypical cortical responses underlie poor speech perception in children using cochlear implants. Neuroreport, 16(18), 2041-2045.

68) Gordon, K. A., Twitchell, K. A., Papsin, B. C., & Harrison, R. V. (2001). Effect of residual hearing prior to cochlear implantation on speech perception in children. J Otolaryngol, 30(4), 216-223.

69) Gordon, K. A., Valero, J., Jewell, S. F., Ahn, J., & Papsin, B. C. (2010a). Auditory development in the absence of hearing in infancy. Neuroreport, 21(3), 163-167.

70) Gordon, K. A., Valero, J., & Papsin, B. C. (2007b). Binaural processing in children using bilateral cochlear implants. Neuroreport, 18(6), 613-617.

71) Gordon, K. A., Valero, J., van Hoesel, R., & Papsin, B. C. (2008). Abnormal timing delays in auditory brainstem responses evoked by bilateral cochlear implant use in children. Otol Neurotol, 29(2), 193-198.

72) Gordon, K. A., Wong, D. D., & Papsin, B. C. (2013b). Bilateral input protects the cortex from unilaterally-driven reorganization in children who are deaf. Brain, 136(Pt 5), 1609-1625.

73) Gordon, K. A., Wong, D. D., & Papsin, B. C. (2010b). Cortical function in children receiving bilateral cochlear implants simultaneously or after a period of interimplant delay. Otol Neurotol, 31(8), 1293-1299.

74) Goupell, M. J., Stoelb, C., Kan, A., & Litovsky, R. Y. (2013). Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening. J Acoust Soc Am, 133(4), 2272-2287.

75) Grantham, D. W., Ashmead, D. H., Ricketts, T. A., Haynes, D. S., & Labadie, R. F. (2008). Interaural time and level difference thresholds for acoustically presented signals in post-lingually deafened adults fitted with bilateral cochlear implants using CIS+ processing. Ear Hear, 29(1), 33-44.

76) Grantham, D. W., Ashmead, D. H., Ricketts, T. A., Labadie, R. F., & Haynes, D. S. (2007). Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants. Ear Hear, 28(4), 524-541.

77) Grieco-Calub, T. M., & Litovsky, R. Y. (2010). Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing. Ear Hear, 31(5), 645-656.

78) Grieco-Calub, T. M., Litovsky, R. Y., & Werner, L. A. (2008). Using the observer-based psychophysical procedure to assess localization acuity in toddlers who use bilateral cochlear implants. Otol Neurotol, 29(2), 235-239.


79) Grothe, B., Pecka, M., & McAlpine, D. (2010). Mechanisms of sound localization in mammals. Physiol Rev, 90(3), 983-1012.

80) Hamel, R. F. (1974). Female subjective and pupillary reaction to nude male and female figures. J Psychol, 87(2d Half), 171-175.

81) Hancock, K. E., Noel, V., Ryugo, D. K., & Delgutte, B. (2010). Neural coding of interaural time differences with bilateral cochlear implants: effects of congenital deafness. J Neurosci, 30(42), 14068-14079.

82) Harrison, R. V., Gordon, K. A., & Mount, R. J. (2005). Is there a critical period for cochlear implantation in congenitally deaf children? Analyses of hearing and speech perception performance after implantation. Dev Psychobiol, 46(3), 252-261.

83) Hartmann, R., & Kral, A. (2004). Central responses to electrical stimulation. In Zeng, F. G., Popper, A. N., & Fay, R (Eds.), Cochlear implants: auditory prostheses and electric hearing. New York, NY: Springer.

84) Hartmann, W. M., & Constan, Z. A. (2002). Interaural level differences and the level-meter model. J Acoust Soc Am, 112(3 Pt 1), 1037-1045.

85) He, S., Brown, C. J., & Abbas, P. J. (2012). Preliminary results of the relationship between the binaural interaction component of the electrically evoked auditory brainstem response and interaural pitch comparisons in bilateral cochlear implant recipients. Ear Hear, 33(1), 57-68.

86) Henry, B. A., & Turner, C. W. (2003). The resolution of complex spectral patterns by cochlear implant and normal-hearing listeners. J Acoust Soc Am, 113(5), 2861-2873.

87) Hess, E. H., & Polt, J. M. (1964). Pupil Size in Relation to Mental Activity during Simple Problem-Solving. Science, 143(3611), 1190-1192.

88) Hicks, C. B., & Tharpe, A. M. (2002). Listening effort and fatigue in school-age children with and without hearing loss. J Speech Lang Hear Res, 45(3), 573-584.

89) Hinojosa, R., & Marion, M. (1983). Histopathology of profound sensorineural deafness. Ann N Y Acad Sci, 405, 459-484.

90) Hirsch, I. J. (1959). Auditory perception of temporal order. J Acoust Soc Am, 31(6), 759-767.

91) Hopyan, T., Gordon, K. A., & Papsin, B. C. (2011). Identifying emotions in music through electrical hearing in deaf children using cochlear implants. Cochlear Implants Int, 12(1), 21-26.


92) Hopyan, T., Peretz, I., Chan, L. P., Papsin, B. C., & Gordon, K. A. (2012). Children using cochlear implants capitalize on acoustical hearing for music perception. Front Psychol, 3, 425.

93) Hudspeth, A. J. (1989). How the ear’s works work. Nature, 341(6241), 397-404.

94) Hughes, K. C., & Galvin, K. L. (2013). Measuring listening effort expended by adolescents and young adults with unilateral or bilateral cochlear implants or normal hearing. Cochlear Implants Int, 14(3), 121-129.

95) Hughes, M. L., & Abbas, P. J. (2006). Electrophysiologic channel interaction, electrode pitch ranking, and behavioral threshold in straight versus perimodiolar cochlear implant electrode arrays. J Acoust Soc Am, 119(3), 1538-1547.

96) Jancke, L. (2012). The Relationship between Music and Language. Front Psychol, 3(123), 1-2.

97) Jeffress, L. A. (1948). A place theory of sound localization. J Comp Physiol Psychol, 41(1), 35-39.

98) Jewett, D. L., Romano, M. N., & Williston, J. S. (1970). Human auditory evoked potentials: possible brain stem components detected on the scalp. Science, 167(3924), 1517-1518.

99) Jiwani, S., Papsin, B. C., & Gordon, K. A. (2013). Central auditory development after long-term cochlear implant use. Clin Neurophysiol, 124(9), 1868-1880.

100) Johnson, B. W., & Hautus, M. J. (2010). Processing of binaural spatial information in human auditory cortex: neuromagnetic responses to interaural timing and level differences. Neuropsychologia, 48(9), 2610-2619.

101) Joris, P. X. (1996). Envelope coding in the lateral superior olive. II. Characteristic delays and comparison with responses in the medial superior olive. J Neurophysiol, 76(4), 2137-2156.

102) Joris, P. X., Carney, L. H., Smith, P. H., & Yin, T. C. (1994). Enhancement of neural synchronization in the anteroventral cochlear nucleus. I. Responses to tones at the characteristic frequency. J Neurophysiol, 71(3), 1022-1036.

103) Jung, K. H., Won, J. H., Drennan, W. R., Jameyson, E., Miyasaki, G., Norton, S. J., & Rubinstein, J. T. (2012). Psychoacoustic performance and music and speech perception in prelingually deafened children with cochlear implants. Audiol Neurootol, 17(3), 189-197.

104) Kahneman, D. (2003). A perspective on judgment and choice: mapping bounded rationality. Am Psychol, 58(9), 697-720.


105) Kahneman, D. L., & Beatty, J. (1967). Pupillary responses in a pitch-discrimination task. Percept Psychophys, 2, 101-105.

106) Kaga, M. (1992). Development of sound localization. Pediatr Int, 34(2), 134-138.

107) Kan, A., Stoelb, C., Litovsky, R. Y., & Goupell, M. J. (2013). Effect of mismatched place-of-stimulation on binaural fusion and lateralization in bilateral cochlear-implant users. J Acoust Soc Am, 134(4), 2923-2936.

108) Kandler, K., & Gillespie, D. C. (2005). Developmental refinement of inhibitory sound-localization circuits. Trends Neurosci, 28(6), 290-296.

109) Kirk, K. I., & Hill-Brown, C. (1985). Speech and language results in children with a cochlear implant. Ear Hear, 6(3 Suppl), 36S-47S.

110) Kirk, K. I., Miyamoto, R. T., Lento, C. L., Ying, E., O'Neill, T., & Fears, B. (2002). Effects of age at implantation in young children. Ann Otol Rhinol Laryngol Suppl, 189, 69-73.

111) Klumpp, R. G., & Eady, H. R. (1956). Some measurements of interaural time difference thresholds. J Acoust Soc Am, 28(5), 859-860.

112) Kong, Y. Y., Cruz, R., Jones, J. A., & Zeng, F. G. (2004). Music perception with temporal cues in acoustic and electric hearing. Ear Hear, 25(2), 173-185.

113) Kral, A. (2007). Unimodal and cross-modal plasticity in the 'deaf' auditory cortex. Int J Audiol, 46(9), 479-493.

114) Kraus, N., & Chandrasekaran, B. (2010). Music training for the development of auditory skills. Nat Rev Neurosci, 11(8), 599-605.

115) Kraus, N., Strait, D. L., & Parbery-Clark, A. (2012). Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory. Ann N Y Acad Sci, 1252, 100-107.

116) Krumbholz, K., Schonwiesner, M., Rubsamen, R., Zilles, K., Fink, G. R., & von Cramon, D. Y. (2005). Hierarchical processing of sound location and motion in the human brainstem and planum temporale. Eur J Neurosci, 21(1), 230-238.

117) Laback, B., Pok, S. M., Baumgartner, W. D., Deutsch, W. A., & Schmid, K. (2004). Sensitivity to interaural level and envelope time differences of two bilateral cochlear implant listeners using clinical sound processors. Ear Hear, 25(5), 488-500.

118) Lavigne-Rebillard, M., & Pujol, R. (1988). Hair cell innervation in the fetal human cochlea. Acta Otolaryngol, 105(5-6), 398-402.


119) Lawson, D. T., Wilson, B. S., Zerbi, M., van den Honert, C., Finley, C. C., Farmer, J. C., Jr., McElveen, J. T., Jr., & Roush, P. A. (1998). Bilateral cochlear implants controlled by a single speech processor. Am J Otol, 19(6), 758-761.

120) Leal, M. C., Shin, Y. J., Laborde, M. L., Calmels, M. N., Verges, S., Lugardon, S., Andrieu, S., Deguine, O., & Fraysse, B. (2003). Music perception in adult cochlear implant recipients. Acta Otolaryngol, 123(7), 826-835.

121) Lebrun, M. A., Moreau, P., McNally-Gagnon, A., Mignault Goulet, G., & Peretz, I. (2012). Congenital amusia in childhood: a case study. Cortex, 48(6), 683-688.

122) Lee, D. J., Cahill, H. B., & Ryugo, D. K. (2003). Effects of congenital deafness in the cochlear nuclei of Shaker-2 mice: an ultrastructural analysis of synapse morphology in the endbulbs of Held. J Neurocytol, 32(3), 229-243.

123) Lee, D. S., Lee, J. S., Oh, S. H., Kim, S. K., Kim, J. W., Chung, J. K., Lee, M. C., & Kim, C. S. (2001). Cross-modal plasticity and cochlear implants. Nature, 409(6817), 149-150.

124) Levine, R. A., & Davis, P. J. (1991). Origin of the click-evoked binaural interaction potential, beta, of humans. Hear Res, 57(1), 121-128.

125) Levitt, H. (1970). Transformed up-down methods in psychoacoustics. J Acoust Soc Am, 49(2), 467-477.

126) Liberman, M. C. (1978). Auditory-nerve response from cats raised in a low-noise chamber. J Acoust Soc Am, 63(2), 442-455.

127) Liberman, M. C. (1982). Single-neuron labeling in the cat auditory nerve. Science, 216(4551), 1239-1241.

128) Licklider, J. C. R. (1948). The influence of interaural phase relations upon the masking of speech by white noise. J Acoust Soc Am, 20(2), 150-159.

129) Lieu, J. E. (2004). Speech-language and educational consequences of unilateral hearing loss in children. Arch Otolaryngol Head Neck Surg, 130(5), 524-530.

130) Limb, C. J. (2006). Structural and functional neural correlates of music perception. Anat Rec A Discov Mol Cell Evol Biol, 288(4), 435-446.

131) Limb, C. J., Molloy, A. T., Jiradejvong, P., & Braun, A. R. (2010). Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm. J Assoc Res Otolaryngol, 11(1), 133-143.

132) Litovsky, R. Y. (1997). Developmental changes in the precedence effect: estimates of minimum audible angle. J Acoust Soc Am, 102(3), 1739-1745.


133) Litovsky, R. Y., Colburn, H. S., Yost, W. A., & Guzman, S. J. (1999). The precedence effect. J Acoust Soc Am, 106(4), 1633-1654.

134) Litovsky, R. Y., Goupell, M. J., Godar, S., Grieco-Calub, T., Jones, G. L., Garadat, S. N., Agrawal, S., Kan, A., Todd, A., Hess, C., & Misurelli, S. (2012). Studies on bilateral cochlear implants at the University of Wisconsin's Binaural Hearing and Speech Laboratory. J Am Acad Audiol, 23(6), 476-494.

135) Litovsky, R. Y., Johnstone, P. M., Godar, S., Agrawal, S., Parkinson, A., Peters, R., & Lake, J. (2006). Bilateral cochlear implants in children: localization acuity measured with minimum audible angle. Ear Hear, 27(1), 43-59.

136) Litovsky, R. Y., Jones, G. L., Agrawal, S., & van Hoesel, R. (2010). Effect of age at onset of deafness on binaural sensitivity in electric hearing in humans. J Acoust Soc Am, 127(1), 400-414.

137) Litovsky, R. Y., Parkinson, A., Arcaroli, J., Peters, R., Lake, J., Johnstone, P., & Yu, G. (2004). Bilateral cochlear implants in adults and children. Arch Otolaryngol Head Neck Surg, 130(5), 648-655.

138) Loeb, G. E., White, M. W., & Merzenich, M. M. (1983). Spatial cross-correlation. A proposed mechanism for acoustic pitch perception. Biol Cybern, 47(3), 149-163.

139) Loizou, P. C. (1998). Mimicking the human ear. IEEE Signal Processing Mag, 15(5), 101-130.

140) Lomber, S. G., Meredith, M. A., & Kral, A. (2010). Cross-modal plasticity in specific auditory cortices underlies visual compensations in the deaf. Nat Neurosci, 13(11), 1421-1427.

141) Long, C. J., Eddington, D. K., Colburn, H. S., & Rabinowitz, W. M. (2003). Binaural sensitivity as a function of interaural electrode position with a bilateral cochlear implant user. J Acoust Soc Am, 114(3), 1565-1574.

142) Lyxell, B., Sahlen, B., Wass, M., Ibertsson, T., Larsby, B., Hallgren, M., & Maki-Torkko. (2008). Cognitive development in children with cochlear implants: Relations to reading and communication. Int J Audiol, 47(2 Suppl), S47-S52.

143) Macpherson, E. A., & Middlebrooks, J. C. (2002). Listener weighting of cues for lateral angle: The duplex theory of sound localization revisited. J Acoust Soc Am, 111(5), 2219-2236.

144) Manrique, M., Cervera-Paz, F. J., Huarte, A., & Molina, M. (2004). Advantages of cochlear implantation in prelingual deaf children before 2 years of age when compared with later implantation. Laryngoscope, 114(8), 1462-1469.


145) Mary Zarate, J., Ritson, C. R., & Poeppel, D. (2012). Pitch-interval discrimination and musical expertise: is the semitone a perceptual boundary? J Acoust Soc Am, 132(2), 984-993.

146) McAlpine, D. (2005). Creating a sense of auditory space. J Physiol, 566(Pt 1), 21-28.

147) McAlpine, D., Jiang, D., & Palmer, A. R. (2001). A neural code for low-frequency sound localization in mammals. Nat Neurosci, 4(4), 396-401.

148) McDermott, H. J. (2004). Music perception with cochlear implants: a review. Trends Amplif, 8(2), 49-82.

149) McDermott, H. J., & McKay, C. M. (1997). Musical pitch perception with electrical stimulation of the cochlea. J Acoust Soc Am, 101(3), 1622-1631.

150) McKay, C. M., Lim, H. H., & Lenarz, T. (2013). Temporal processing in the auditory system: insights from cochlear and auditory midbrain implantees. J Assoc Res Otolaryngol, 14(1), 103-124.

151) McPherson, D. L., & Starr, A. (1995). Auditory time-intensity cues in the binaural interaction component of the auditory evoked potentials. Hear Res, 89(1-2), 162-171.

152) Meredith, M. A., & Lomber, S. G. (2011). Somatosensory and visual crossmodal plasticity in the anterior auditory field of early-deaf cats. Hear Res, 280(1-2), 38-47.

153) Mickey, B. J., & Middlebrooks, J. C. (2001). Responses of auditory cortical neurons to pairs of sounds: correlates of fusion and localization. J Neurophysiol, 86(3), 1333-1350.

154) Middlebrooks, J. C., & Snyder, R. L. (2010). Selective electrical stimulation of the auditory nerve activates a pathway specialized for high temporal acuity. J Neurosci, 30(5), 1937-1945.

155) Miller, G. A., & Taylor, W. G. (1948). The perception of repeated bursts of noise. J Acoust Soc Am, 20(2), 171-182.

156) Miyamoto, R. T., Kirk, K. I., Robbins, A. M., Todd, S., & Riley, A. (1996). Speech perception and speech production skills of children with multichannel cochlear implants. Acta Otolaryngol, 116(2), 240-243.

157) Mirza, S., Douglas, S. A., Lindsey, P., Hildreth, T., & Hawthorne, M. (2003). Appreciation of music in adult patients with cochlear implants: a patient questionnaire. Cochlear Implants Int, 4(2), 85-95.

158) Mok, M., Galvin, K. L., Dowell, R. C., & McKay, C. M. (2007). Spatial unmasking and binaural advantage for children with normal hearing, a cochlear implant and a hearing aid, and bilateral implants. Audiol Neurootol, 12(5), 295-306.


159) Moller, A. R. (2006). Hearing: anatomy, physiology, and disorders of the auditory system (2nd ed). (Chapter 6). San Diego: Academic Press.

160) Moore, B. C. J. (2008). An introduction to the psychology of hearing (5th ed.). (Chapter 1). Bingley, UK: Emerald Group Publishing Limited.

161) Moore, B. C. J. (2003). Coding of sounds in the auditory system and its relevance to signal processing and coding in cochlear implants. Otol Neurotol, 24(2), 243-254.

162) Moore, B. C. J. (2001). Dead regions in the cochlea: diagnosis, perceptual consequences, and implications for the fitting of hearing aids. Trends Amplif, 5(1), 1–34.

163) Moore, J. K., Niparko, J. K., Miller, M. R., & Linthicum, F. H. (1994). Effect of profound hearing loss on a central auditory nucleus. Am J Otol, 15(5), 588-595.

164) Moore, J. K., Perazzo, L. M., & Braun, A. (1995). Time course of axonal myelination in the human brainstem auditory pathway. Hear Res, 87(1-2), 21-31.

165) Moore, M. J., & Caspary, D. M. (1983). Strychnine blocks binaural inhibition in lateral superior olivary neurons. J Neurosci, 3(1), 237-242.

166) Muchnik, C., Taitelbaum, R., Tene, S., & Hildesheimer, M. (1994). Auditory temporal resolution and open speech recognition in cochlear implant recipients. Scand Audiol, 23(2), 105-109.

167) Nadol, J. B., Jr. (1997). Patterns of neural degeneration in the human cochlea and auditory nerve: implications for cochlear implantation. Otolaryngol Head Neck Surg, 117(3 Pt 1), 220-228.

168) Naito, Y., Tateya, I., Fujiki, N., Hirano, S., Ishizu, K., Nagahama, Y., Fukuyama, H., & Kojima, H. (2000). Increased cortical activation during hearing of speech in cochlear implant users. Hear Res, 143(1-2), 139-146.

169) Niparko, J. K., & Agrawal, Y. (2009). The epidemiology of hearing loss: how prevalent is hearing loss? In: Niparko, J. K. (Ed.), Cochlear implants: principles & practices (pp. 39-41). Philadelphia, PA: Lippincott Williams & Wilkins.

170) Nuetzel, J. M., & Hafter, E. R. (1976). Lateralization of complex waveforms: effects of fine structure, amplitude, and duration. J Acoust Soc Am, 60(6), 1339-1346.

171) O'Neil, J. N., Limb, C. J., Baker, C. A., & Ryugo, D. K. (2010). Bilateral effects of unilateral cochlear implantation in congenitally deaf cats. J Comp Neurol, 518(12), 2382-2404.


172) Pals, C., Sarampalis, A., & Baskent, D. (2013). Listening effort with cochlear implant simulations. J Speech Lang Hear Res, 56(4), 1075-1084.

173) Papsin, B. C., & Gordon, K. A. (2008). Bilateral cochlear implants should be the standard for children with bilateral sensorineural deafness. Curr Opin Otolaryngol Head Neck Surg, 16(1), 69-74.

174) Papsin, B. C., & Gordon, K. A. (2007). Cochlear implants for children with severe-to-profound hearing loss. N Engl J Med, 357(23), 2380-2387.

175) Papesh, M. H., Goldinger, S. D., & Hout, M. C. (2012). Memory strength and specificity revealed by pupillometry. Int J Psychophysiol, 83(1), 56-64.

176) Pasanisi, E., Bacciu, A., Vincenti, V., Guida, M., Berghenti, M. T., Barbot, A., Panu, F., & Bacciu, S. (2002). Comparison of speech perception benefits with SPEAK and ACE coding strategies in pediatric Nucleus CI24M cochlear implant recipients. Int J Pediatr Otorhinolaryngol, 64(2), 159-163.

177) Peretz, I., Champod, A. S., & Hyde, K. (2003). Varieties of musical disorders. The Montreal Battery of Evaluation of Amusia. Ann N Y Acad Sci, 999, 58-75.

178) Peretz, I., & Zatorre, R. J. (2005). Brain organization for music processing. Annu Rev Psychol, 56, 89-114.

179) Picou, E. M., Ricketts, T. A., & Hornsby, B. W. Y. (2013). How hearing aids, background noise, and visual cues influence objective listening effort. Ear Hear, 34(5), 52-64.

180) Plomp, R. (1967). Pitch of complex tones. J Acoust Soc Am, 41(6), 1526-1533.

181) Plomp, R. (1964). Rate of decay of auditory sensation. J Acoust Soc Am, 36(2), 277-282.

182) Polley, D. B., Thompson, J. H., & Guo, W. (2013). Brief hearing loss disrupts binaural integration during two early critical periods of auditory cortex development. Nat Commun, 4, 2547.

183) Ponton, C. W., Don, M., Eggermont, J. J., Waring, M. D., & Masuda, A. (1996). Maturation of human cortical auditory function: differences between normal-hearing children and children with cochlear implants. Ear Hear, 17(5), 430-437.

184) Ponton, C. W., Eggermont, J. J., Kwong, B., & Don, M. (2000). Maturation of human central auditory system activity: evidence from multi-channel evoked potentials. Clin Neurophysiol, 111(2), 220-236.

185) Poon, B. B., Eddington, D. K., Noel, V., & Colburn, H. S. (2009). Sensitivity to interaural time difference with bilateral cochlear implants: Development over time and effect of interaural electrode spacing. J Acoust Soc Am, 126(2), 806-815.

186) Privitera, C. M., Renninger, L. W., Carney, T., Klein, S., & Aguilar, M. (2010). Pupil dilation during visual target detection. J Vis, 10(10), 1-14.

187) Propst, E. J., Papsin, B. C., Stockley, T. L., Harrison, R. V., & Gordon, K. A. (2006). Auditory responses in cochlear implant users with and without GJB2 deafness. Laryngoscope, 116(2), 317-327.

188) Pulsifer, M. B., Salorio, C. F., & Niparko, J. K. (2003). Developmental, audiological, and speech perception functioning in children after cochlear implant surgery. Arch Pediatr Adolesc Med, 157(6), 552-558.

189) Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A. S., McNamara, J. O., & White, L. E. (2008). Neuroscience (4th ed.). (Chapter 13). Sunderland, MA: Sinauer Associates.

190) Rauschecker, J. P. (1999). Auditory cortical plasticity: a comparison with other sensory systems. Trends Neurosci, 22(2), 74-80.

191) Riedel, H., & Kollmeier, B. (2006). Interaural delay-dependent changes in the binaural difference potential of the human auditory brain stem response. Hear Res, 218(1-2), 5-19.

192) Robin, D. A., & Royer, F. L. (1987). Auditory temporal processing: two-tone flutter fusion and a model of temporal integration. J Acoust Soc Am, 82(4), 1207-1217.

193) Rose, J. E., Gross, N. B., Geisler, C. D., & Hind, J. E. (1966). Some neural mechanisms in the inferior colliculus of the cat which may be relevant to localization of a sound source. J Neurophysiol, 29(2), 288-314.

194) Rubinstein, J. T. (2004). How cochlear implants encode speech. Curr Opin Otolaryngol Head Neck Surg, 12(5), 444-448.

195) Rubinstein, J. T., & Hong, R. (2003). Signal coding in cochlear implants: exploiting stochastic effects of electrical stimulation. Ann Otol Rhinol Laryngol Suppl, 191, 14-19.

196) Rubinstein, J. T., Parkinson, W. S., Tyler, R. S., & Gantz, B. J. (1999a). Residual speech recognition and cochlear implant performance: effects of implantation criteria. Am J Otol, 20(4), 445-452.

197) Rubinstein, J. T., Wilson, B. S., Finley, C. C., & Abbas, P. J. (1999b). Pseudospontaneous activity: stochastic independence of auditory nerve fibers with electrical stimulation. Hear Res, 127(1-2), 108-118.

198) Rudner, M., Lunner, T., Behrens, T., Thoren, E. S., & Ronnberg, J. (2012). Working memory capacity may influence perceived effort during aided speech recognition in noise. J Am Acad Audiol, 23(8), 577-589.

199) Ryugo, D. K., Pongstaporn, T., Huchton, D. M., & Niparko, J. K. (1997). Ultrastructural analysis of primary endings in deaf white cats: morphologic alterations in endbulbs of Held. J Comp Neurol, 385(2), 230-244.

200) Sachs, M. B., & Abbas, P. J. (1974). Rate versus level functions for auditory-nerve fibers in cats: tone-burst stimuli. J Acoust Soc Am, 56(6), 1835-1847.

201) Salamy, A., & McKean, C. M. (1976). Postnatal development of human brainstem potentials during the first year of life. Electroencephalogr Clin Neurophysiol, 40(4), 418-426.

202) Salloum, C. A., Valero, J., Wong, D. D., Papsin, B. C., van Hoesel, R., & Gordon, K. A. (2010). Lateralization of interimplant timing and level differences in children who use bilateral cochlear implants. Ear Hear, 31(4), 441-456.

203) Sayers, B. M., & Cherry, E. C. (1957). Mechanism of binaural fusion in the hearing of speech. J Acoust Soc Am, 29(9), 973-987.

204) Schnupp, J. W., & Carr, C. E. (2009). On hearing with more than one ear: lessons from evolution. Nat Neurosci, 12(6), 692-697.

205) Schulz, E., & Kerber, M. (1994). Music perception with the MED-EL implants. In: Hochmair-Desoyer, I. J., & Hochmair, E. S. (Eds.), Advances in cochlear implants (pp 326-332). Vienna: Manz.

206) Schwartz, I. R., & Higa, J. F. (1982). Correlated studies of the ear and brainstem in the deaf white cat: changes in the spiral ganglion and the medial superior olivary nucleus. Acta Otolaryngol, 93(1-2), 9-18.

207) Seeber, B. U., & Fastl, H. (2008). Localization cues with bilateral cochlear implants. J Acoust Soc Am, 123(2), 1030-1042.

208) Shallop, J. K., Beiter, A. L., Goin, D. W., & Mischke, R. E. (1990). Electrically evoked auditory brain stem responses (EABR) and middle latency responses (EMLR) obtained from patients with the nucleus multichannel cochlear implant. Ear Hear, 11(1), 5-15.

209) Shannon, R. V., Zeng, F. G., Kamath, V., Wygonski, J., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science, 270(5234), 303-304.

210) Sharma, A., Dorman, M. F., & Spahr, A. J. (2002). A sensitive period for the development of the central auditory system in children with cochlear implants: implications for age of implantation. Ear Hear, 23(6), 532-539.

211) Shaw, E. A. G. (1974). Transformation of sound pressure level from the free field to the eardrum in the horizontal plane. J Acoust Soc Am, 56, 1848-1861.

212) Shepherd, R. K., & Hardie, N. A. (2001). Deafness-induced changes in the auditory pathway: implications for cochlear implants. Audiol Neurootol, 6(6), 305-318.

213) Shin, M. S., Kim, S. K., Kim, S. S., Park, M. H., Kim, C. S., & Oh, S. H. (2007). Comparison of cognitive function in deaf children between before and after cochlear implant. Ear Hear, 28(2 Suppl), 22S-28S.

214) Smith, Z. M., & Delgutte, B. (2007a). Sensitivity to interaural time differences in the inferior colliculus with bilateral cochlear implants. J Neurosci, 27(25), 6740-6750.

215) Smith, Z. M., & Delgutte, B. (2007b). Using evoked potentials to match interaural electrode pairs with bilateral cochlear implants. J Assoc Res Otolaryngol, 8(1), 134-151.

216) Smith, Z. M., Delgutte, B., & Oxenham, A. J. (2002). Chimaeric sounds reveal dichotomies in auditory perception. Nature, 416(6876), 87-90.

217) Snyder, R. L., Rebscher, S. J., Leake, P. A., Kelly, K., & Cao, K. (1991). Chronic intracochlear electrical stimulation in the neonatally deafened cat. II. Temporal properties of neurons in the inferior colliculus. Hear Res, 56(1-2), 246-264.

218) Sparreboom, M., Snik, A. F., & Mylanus, E. A. (2012). Sequential bilateral cochlear implantation in children: quality of life. Arch Otolaryngol Head Neck Surg, 138(2), 134-141.

219) Steel, M. M., Abbasalipour, P., Salloum, C. A. M., Hasek, D., Papsin, B. C., & Gordon, K. A. (In press). Unilateral cochlear implant use promotes normal-like loudness perception in adolescents with childhood deafness. Ear Hear.

220) Steinhauer, S. R., Siegle, G. J., Condray, R., & Pless, M. (2004). Sympathetic and parasympathetic innervation of pupillary dilation during sustained processing. Int J Psychophysiol, 52(1), 77-86.

221) Stenfelt, S., & Ronnberg, J. (2009). The signal-cognition interface: interactions between degraded auditory signals and cognitive processes. Scand J Psychol, 50(5), 385-393.

222) Stern, R. M., Zeiberg, A. S., & Trahiotis, C. (1988). Lateralization of complex binaural stimuli: a weighted-image model. J Acoust Soc Am, 84(1), 156-165.

223) Stevens, S. S. (1957). On the psychophysical law. Psychol Rev, 64(3), 153-181.

224) Takeuchi, T., Puntous, T., Tuladhar, A., Yoshimoto, S., & Shirama, A. (2011). Estimation of mental effort in learning visual search by measuring pupil response. PLoS One, 6(7), 1-5.

225) Terayama, Y., Kaneko, Y., Kawamoto, K., & Sakai, N. (1977). Ultrastructural changes of the nerve elements following disruption of the organ of Corti. I. Nerve elements in the organ of Corti. Acta Otolaryngol, 83(3-4), 291-302.

226) Thai-Van, H., Cozma, S., Boutitie, F., Disant, F., Truy, E., & Collet, L. (2007). The pattern of auditory brainstem response wave V maturation in cochlear-implanted children. Clin Neurophysiol, 118(3), 676-689.

227) Thompson, S. K., von Kriegstein, K., Deane-Pratt, A., Marquardt, T., Deichmann, R., Griffiths, T. D., & McAlpine, D. (2006). Representation of interaural time delay in the human auditory midbrain. Nat Neurosci, 9(9), 1096-1098.

228) Tillein, J., Hubka, P., Syed, E., Hartmann, R., Engel, A. K., & Kral, A. (2010). Cortical representation of interaural time difference in congenital deafness. Cereb Cortex, 20(2), 492-506.

229) Tirko, N. N., & Ryugo, D. K. (2012). Synaptic plasticity in the medial superior olive of hearing, deaf, and cochlear-implanted cats. J Comp Neurol, 520(10), 2202-2217.

230) Trehub, S. E. (2001). Musical predispositions in infancy. Ann N Y Acad Sci, 930, 1-16.

231) Tremblay, K. L., Shahin, A. J., Picton, T., & Ross, B. (2009). Auditory training alters the physiological detection of stimulus-specific cues in humans. Clin Neurophysiol, 120(1), 128-135.

232) Tsuchitani, C. (1988). The inhibition of cat lateral superior olive unit excitatory responses to binaural tone bursts. I. The transient chopper response. J Neurophysiol, 59(1), 164-183.

233) Tursky, B., Shapiro, D., Crider, A., & Kahneman, D. (1969). Pupillary, heart rate, and skin resistance changes during a mental task. J Exp Psychol, 79(1), 164-167.

234) Uziel, A. S., Sillon, M., Vieu, A., Artieres, F., Piron, J. P., Daures, J. P., & Mondain, M. (2007). Ten-year follow-up of a consecutive series of children with multichannel cochlear implants. Otol Neurotol, 28(5), 615-628.

235) Van Deun, L., van Wieringen, A., Francart, T., Scherf, F., Dhooge, I. J., Deggouj, N., Desloovere, C., Van de Heyning, P. H., Offeciers, F. E., De Raeve, L., & Wouters, J. (2009a). Bilateral cochlear implants in children: binaural unmasking. Audiol Neurootol, 14(4), 240-247.

236) Van Deun, L., van Wieringen, A., Scherf, F., Deggouj, N., Desloovere, C., Offeciers, F. E., Van de Heyning, P. H., Dhooge, I. J., & Wouters, J. (2010a). Earlier intervention leads to better sound localization in children with bilateral cochlear implants. Audiol Neurootol, 15(1), 7-17.

237) Van Deun, L., van Wieringen, A., Van den Bogaert, T., Scherf, F., Offeciers, F. E., Van de Heyning, P. H., Desloovere, C., Dhooge, I. J., Deggouj, N., & Wouters, J. (2009b). Sound localization, sound lateralization, and binaural masking level differences in young children with normal hearing. Ear Hear, 30(2), 178-190.

238) Van Deun, L., van Wieringen, A., & Wouters, J. (2010b). Spatial speech perception benefits in young children with normal hearing and cochlear implants. Ear Hear, 31(5), 702-713.

239) van Hoesel, R. J. (2004). Exploring the benefits of bilateral cochlear implants. Audiol Neurootol, 9(4), 234-246.

240) van Hoesel, R. J. (2007). Sensitivity to binaural timing in bilateral cochlear implant users. J Acoust Soc Am, 121(4), 2192-2206.

241) van Hoesel, R. J., Jones, G. L., & Litovsky, R. Y. (2009). Interaural time-delay sensitivity in bilateral cochlear implant users: effects of pulse rate, modulation rate, and place of stimulation. J Assoc Res Otolaryngol, 10(4), 557-567.

242) van Hoesel, R. J., Ramsden, R., & O'Driscoll, M. (2002). Sound-direction identification, interaural time delay discrimination, and speech intelligibility advantages in noise for a bilateral cochlear implant user. Ear Hear, 23(2), 137-149.

243) van Hoesel, R. J., & Tyler, R. S. (2003). Speech perception, localization, and lateralization with bilateral cochlear implants. J Acoust Soc Am, 113(3), 1617-1630.

244) Veekmans, K., Ressel, L., Mueller, J., Vischer, M., & Brockmeier, S. J. (2009). Comparison of music perception in bilateral and unilateral cochlear implant users and normal-hearing subjects. Audiol Neurootol, 14(5), 315-326.

245) Wada, S., & Starr, A. (1989). Anatomical bases of binaural interaction in auditory brain-stem responses from guinea pig. Electroencephalogr Clin Neurophysiol, 72(6), 535-544.

246) Wightman, F. L., & Kistler, D. J. (1992). The dominant role of low-frequency interaural time differences in sound localization. J Acoust Soc Am, 91(3), 1648-1661.

247) Wilson, B. S. (2004). Engineering design of cochlear implants. In Zeng, F. G., Popper, A. N., & Fay, R. (Eds.), Cochlear implants: auditory prostheses and electric hearing. New York, NY: Springer.

248) Yulek, F., Konukseven, O. O., Cakmak, H. B., Orhan, N., Simsek, S., & Kutluhan, A. (2008). Comparison of the pupillometry during videonystagmography in asymmetric pseudoexfoliation patients. Curr Eye Res, 33(3), 263-267.

249) Zatorre, R. J., Belin, P., & Penhune, V. B. (2002). Structure and function of auditory cortex: music and speech. Trends Cogn Sci, 6(1), 37-46.

250) Zatorre, R. J., & Penhune, V. B. (2001). Spatial localization after excision of human auditory cortex. J Neurosci, 21(16), 6321-6328.

251) Zekveld, A. A., Heslenfeld, D. J., Johnsrude, I. S., Versfeld, N. J., & Kramer, S. E. (In press). The eye as a window to the listening brain: Neural correlates of pupil size as a measure of cognitive listening load. Neuroimage.

252) Zekveld, A. A., Kramer, S. E., & Festen, J. M. (2010). Pupil response as an indication of effortful listening: the influence of sentence intelligibility. Ear Hear, 31(4), 480-490.

253) Zeng, F. G. (2002). Temporal pitch in electric hearing. Hear Res, 174(1-2), 101-106.

254) Zhou, J., & Durrant, J. D. (2003). Effects of interaural frequency difference on binaural fusion evidenced by electrophysiological versus psychoacoustical measures. J Acoust Soc Am, 114(3), 1508-1515.