Begault 2004: Binaural Hearing and Intelligibility in Auditory Displays

Pages: 16 | File type: PDF | Size: 1,020 KB

BINAURAL HEARING AND INTELLIGIBILITY IN AUDITORY DISPLAYS
Durand R. Begault
Human Factors Research & Technology Division, NASA Ames Research Center, Moffett Field, California

Topics:
1. Binaural hearing phenomena
2. Newly developed auditory displays that exploit spatial hearing for improving speech intelligibility and alarm intelligibility in aviation applications

Physical characteristics of sound and perceived attributes
• Frequency (perceived pitch)
• Intensity (loudness)
• Spectral content (timbre)
• All of the above, plus binaural differences (localization)
All of these characteristics are important in the identification and discrimination of auditory signals and for speech intelligibility in communication contexts.

Two important functions of the binaural hearing system
• Localization (lateral and 3-dimensional)
• Binaural release from masking: echo suppression, room perception

Model of the binaural hearing system
Binaural hearing (localization; signal separation and detection) forms spatial auditory events from both acoustically driven (bottom-up) and psychologically driven (top-down) inputs:
1. Acoustic signal
2. Filtering of the acoustic signal by the pinnae and ear canal
3. Filtering by the inner ear; frequency-specific neuron firings
4. Physiological evaluation of interaural timing and level differences
5. Binaural hearing, combined with multi-sensory information and cognition

Lateral localization of auditory images: the "duplex" theory
• ILD (interaural level difference): lateral image shift caused by the head shadow, effective at frequencies above about 1.5 kHz
• ITD (interaural time difference): lateral image shift caused by the difference in arrival time at the two ears

Head-related transfer function (HRTF) cues
HRTFs provide cues for front-back discrimination and elevation, and are the basis of 3-D audio signal processing in auditory displays. [Figure: log-magnitude HRTF spectra, roughly 100 Hz to 16 kHz, for sources at right 30° elevated, right 90° at ear level, and right 120° below ear level, with additional panels at 45° and 135° azimuth, 0° elevation.]

Multi-sensory aspects
Sound sources can be "felt" and "seen" as well as heard: vibration sources (internal or external) excite ground and structure responses (walls, windows, floors), and airborne sound excites objects (chairs, tables, plants). Hearing (e.g., via a 3-D audio display), feeling, and seeing (e.g., via a head-mounted visual display) are integrated through expectation, inter-modal coordination, identification, and experience/adaptation, yielding both qualitative assessments and performance metrics.

Applications of spatial sound for improving intelligibility in auditory displays
The binaural hearing advantage can be used to separate multiple simultaneous auditory "streams." A 3-D communication system based on this principle was patented and developed for NASA Kennedy Space Center. [Figure: speech intelligibility advantage (dB) over one-ear listening, as a function of the azimuth of the spatialized signal (mean of left and right sides), for full-bandwidth (up to about 8 dB) and telephone-bandwidth speech.]

Hearing loss in the target user population
[Figure: survey of 64 active commercial airline pilots; percentage of "yes" answers, by age range (35-44, n = 11; 45-54, n = 23; 55-64, n = 30), to Question 1 ("told by a doctor" of hearing loss) and Question 2 ("personally suspect" hearing loss), compared with the general population.]
[Figure: audiogram data summary for 20 active commercial pilots (ages 35-64, not corrected for presbycusis); percentage of pilots exceeding 20 dB HL and 25 dB HL as a function of frequency.]

Auditory icons and left-right spatialization
Auditory icons (AIs) and left-right spatialization provide information redundancy, situational awareness of the actions of the crew (CRM), and haptic-feedback substitution for touch screens in the NASA ARC advanced cab simulator:
• "Page-through" auditory icon: for selecting menu pages from checklist tabs
• "Click" auditory icon: for selecting items that change orientation within the menu display
• "Mechanical latch" auditory icon: for actions that correspond to changes in the aircraft electrical, fuel, hydraulic, or engine systems
[Figure: simulated fuel/engine touch-screen display and an oil-pressure checklist (identify which throttle, check oil pressure, retard throttle, engine shutdown), annotated with the auditory icon used at each step.]

Head-up auditory display for TCAS
Application of a 3-D audio head-up display for the Traffic Collision Avoidance System (TCAS II) was investigated; target acquisition times can decrease by 0.5-2.2 sec. [Figure: captain's, common, and first officer's screens across the ±60° visual field of view.]
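The lateral ITD cue from the duplex theory above can be approximated numerically with Woodworth's rigid-sphere head model; a minimal sketch, assuming an average head radius of 8.75 cm and a speed of sound of 343 m/s (both assumed values, not taken from the slides):

```python
import math

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                  speed_of_sound: float = 343.0) -> float:
    """Approximate interaural time difference (seconds) for a source at a
    given azimuth, using Woodworth's rigid spherical-head model:
        ITD = (r / c) * (theta + sin(theta)), with theta in radians.
    0 deg = straight ahead, 90 deg = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))
```

For a source at 90° azimuth this gives roughly 0.66 ms, consistent with the commonly cited maximum ITD for an average adult head.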
[Figure: mean target acquisition times and standard deviations for the two TCAS experiments. First experiment: 4.7 vs. 2.5 s; the 3-D audio cues were exaggerated in azimuth relative to the visual target, and no elevation cues were supplied. Second experiment: 2.63 vs. 2.13 s; the 3-D audio cues were not exaggerated, and there were three categories of elevation cues. Conditions include monotic audio and 3-D audio, with a head-down map display, head-up with no map display, and a head-up 3-D audio display.]

Head-up auditory display with head-up visual display
[Figure: advantage of 3-D audio, shown as the reduction in taxi time (mean of 3 routes, in seconds) for each of 12 crews.]

Spatially-modulated auditory alerts
In an auditory display, how do we ensure that an alarm is audible?
• "Common sense" engineering approach: make the alarm much louder than the background noise for wide-area coverage. [Figure: fire alarm and horn from ca. 1933.]
• ISO 7731 ("Danger signals for work places - Auditory danger signals") specifies that the signal be >= 13 dB re the masked threshold in a 1/3-octave band (0.3-3.0 kHz).
• Either way, this is a recipe for a "startle effect," high overall SPLs, and potentially low performance in a high-stress environment.

Current approach: improve detection of an alarm (signal) against ambient sound (noise) using signal-processing techniques other than a level increase.
Requirement/caveat: the technique should apply to currently used alarms, to avoid "relearning" the semantic content of new auditory signals.
Technique: three methods, addressed in a pending patent application.
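The ISO 7731 criterion mentioned above reduces to simple band arithmetic; a sketch, with the noise band level standing in for the masked threshold (a simplifying assumption; the standard's masked threshold also accounts for spread of masking across bands):

```python
def alarm_detectable(signal_bands: dict, noise_bands: dict,
                     margin_db: float = 13.0) -> bool:
    """Sketch of the ISO 7731 criterion from the slides: the alarm is
    considered detectable if, in at least one 1/3-octave band between
    0.3 and 3.0 kHz, its level exceeds the masked threshold by >= 13 dB.
    Keys are band centre frequencies in Hz; values are levels in dB SPL.
    The noise band level is used as a stand-in for the masked threshold."""
    return any(
        signal_bands[fc] - noise_bands[fc] >= margin_db
        for fc in signal_bands
        if 300.0 <= fc <= 3000.0 and fc in noise_bands
    )
```

For example, an alarm at 85 dB SPL in the 1 kHz band against 70 dB SPL of noise meets the 13 dB margin, whereas an 80 dB SPL alarm does not.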
Alarm (basic stimulus)
737-300 alarm: two successive square waves (preceding the verbal "wind shear" alert): 200 Hz for 300 ms, then 764 Hz for 300 ms.

Stimuli
Each stimulus was presented with one of three left/right spatial trajectories: 0 Hz jitter (static), 1.6 Hz jitter, or 3.3 Hz jitter. Summed L+R RMS levels were equivalent for all stimuli, but the jittered stimuli had +5 dB peaks relative to the unjittered stimulus due to the HRTF.

Results (1)
A headphone with the jittered signal has a 13.4 dB detection advantage over a monaural loudspeaker (the existing condition on aircraft), partly due to attenuation of the noise by the headphone. [Figure: threshold re noise level (dB) for the loudspeaker (existing condition) and for headphones with 1.6 Hz and 3.3 Hz trajectories; attenuation (dB) of the Sennheiser HD 480 vented headphone at band center frequencies from 2 kHz to 16 kHz.]

Results (2)
A headphone with the jittered signal has a significant (p < .001) 7.8 dB advantage over a headphone without the jittered signal. There is no significant difference between 1.6 Hz and 3.3 Hz modulation. [Figure: threshold re noise level (dB) for the headphone conditions at 0.0, 1.6, and 3.3 Hz trajectory velocities; the peak level due to the 90° HRTF is a possible source of the "spatial unmasking" advantage.]

Conclusions
A new approach to designing alerts for auditory displays in high-stress interfaces: use spatial modulation for improved detection.
• Headphones plus spatial modulation lower the detection threshold by 13.4 dB.
• Spatial modulation alone lowers the threshold by 7.8 dB.
• Of this, 5 dB is due to the HRTF interaural level difference if instantaneous (peak) level differences are assumed; this amount is reduced as a function of longer temporal integration periods.
• The remaining advantage is due to time-varying interaural cross-correlation.
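The jittered alarm stimulus described above can be sketched in a few lines. The actual stimuli used HRTF filtering to move the image; here, simple equal-power left/right amplitude panning stands in for the HRTF (an assumption), so this is illustrative only:

```python
import numpy as np

def jittered_alarm(fs: int = 16000, jitter_hz: float = 1.6) -> np.ndarray:
    """Sketch of the 737-300-style alarm stimulus: two successive 300 ms
    square-wave bursts (200 Hz, then 764 Hz), with the lateral position
    modulated sinusoidally at `jitter_hz`. Equal-power panning replaces
    the HRTF processing used in the actual experiment.
    Returns an (n_samples, 2) stereo array with values in [-1, 1]."""
    bursts = []
    for f0 in (200.0, 764.0):
        t = np.arange(int(round(0.3 * fs))) / fs
        bursts.append(np.sign(np.sin(2 * np.pi * f0 * t)))
    mono = np.concatenate(bursts)
    t = np.arange(mono.size) / fs
    pan = 0.5 * (1 + np.sin(2 * np.pi * jitter_hz * t))  # 0 = left, 1 = right
    left = mono * np.sqrt(1 - pan)   # equal-power panning keeps summed
    right = mono * np.sqrt(pan)      # energy constant as the image moves
    return np.column_stack([left, right])
```

Setting `jitter_hz` to 0 reproduces the static (unjittered) condition; 1.6 and 3.3 Hz correspond to the two modulated trajectories in the slides.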
Summary
• Binaural localization provides immediate situational awareness (with a heads-up advantage) and an alternative or redundant display for visually acquired information.
• Binaural release from masking (the "cocktail party" effect) provides intelligibility improvement and improved discrimination and selective attention.
• Combined with active noise cancellation, it also supports hearing conservation.
Benefits: increased aviation safety and efficiency.
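The 3-D audio signal processing that runs through the deck amounts to filtering a monaural signal into a binaural pair. A minimal sketch, using an integer-sample delay plus a broadband level difference as stand-ins for measured head-related impulse responses (an assumption; real displays convolve with measured HRIRs):

```python
import numpy as np

def spatialize(mono: np.ndarray, fs: int, itd_s: float, ild_db: float) -> np.ndarray:
    """Minimal stand-in for HRTF-based spatialization: apply an interaural
    time difference (as an integer sample delay) and an interaural level
    difference (as a broadband gain) to produce a stereo signal.
    Positive itd_s / ild_db move the image toward the right ear."""
    delay = int(round(abs(itd_s) * fs))
    gain = 10.0 ** (-abs(ild_db) / 20.0)  # attenuation at the far ear
    near = mono
    far = gain * np.concatenate([np.zeros(delay), mono])[: mono.size]
    left, right = (far, near) if itd_s >= 0 else (near, far)
    return np.column_stack([left, right])
```

This delay-plus-gain model captures only the duplex-theory cues; it cannot produce the front-back or elevation cues that full HRTF filtering provides.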
Recommended publications
  • Head-Related Transfer Functions and Virtual Auditory Display
    Chapter 6: Head-Related Transfer Functions and Virtual Auditory Display. Xiao-li Zhong and Bo-sun Xie. Additional information is available at the end of the chapter. http://dx.doi.org/10.5772/56907

    1. Introduction. 1.1. Sound source localization and HRTFs. In real environments, waves radiated by sound sources propagate to a listener along direct and reflected paths. The scattering, diffraction, and reflection effects of the listener's anatomical structures (such as the head, torso, and pinnae) further disturb the sound field and thereby modify the sound pressures received by the two ears. Human hearing comprehensively utilizes the information encoded in the binaural pressures and forms various spatial auditory experiences, such as sound source localization and subjective perceptions of environmental reflections. Psychoacoustic experiments have proved that the following cues encoded in the binaural pressures contribute to directional localization [1]: 1. The interaural time difference (ITD), i.e., the arrival time difference between the sound waves at the left and right ears, is the dominant directional localization cue for frequencies approximately below 1.5 kHz. 2. The interaural level difference (ILD), i.e., the pressure level difference between the left and right ears caused by scattering and diffraction around the head etc., is the important directional localization cue for frequencies approximately above 1.5 kHz. 3. The spectral cues encoded in the pressure spectra at the ears, which are caused by the scattering, diffraction, and reflection of anatomical structures. In particular, the pinna-caused high-frequency spectral cue above 5 to 6 kHz is crucial to front-back disambiguation and vertical localization. © 2014 The Author(s).
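The chapter's ~1.5 kHz crossover between the two interaural cues can be stated as a one-liner; a trivial sketch (1500 Hz is the chapter's approximate figure, and real listeners show a gradual transition rather than a hard switch):

```python
def dominant_localization_cue(frequency_hz: float) -> str:
    """Which interaural cue dominates azimuth localization at a given
    frequency, per the approximate 1.5 kHz crossover described in the
    chapter: ITD below it, ILD above it."""
    return "ITD" if frequency_hz < 1500.0 else "ILD"
```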
  • Signal Processing in High-End Hearing Aids: State of the Art, Challenges, and Future Trends
    EURASIP Journal on Applied Signal Processing 2005:18, 2915–2929. © 2005 V. Hamacher et al. Signal Processing in High-End Hearing Aids: State of the Art, Challenges, and Future Trends. V. Hamacher, J. Chalupper, J. Eggers, E. Fischer, U. Kornagel, H. Puder, and U. Rass. Siemens Audiological Engineering Group, Gebbertstrasse 125, 91058 Erlangen, Germany. Received 30 April 2004; Revised 18 September 2004.

    The development of hearing aids incorporates two aspects, namely, the audiological and the technical point of view. The former focuses on items like the recruitment phenomenon, the speech intelligibility of hearing-impaired persons, or just on the question of hearing comfort. Concerning these subjects, different algorithms intending to improve the hearing ability are presented in this paper. These are automatic gain controls, directional microphones, and noise reduction algorithms. Besides the audiological point of view, there are several purely technical problems which have to be solved. An important one is the acoustic feedback. Another instance is the proper automatic control of all hearing aid components by means of a classification unit. In addition to an overview of state-of-the-art algorithms, this paper focuses on future trends. Keywords and phrases: digital hearing aid, directional microphone, noise reduction, acoustic feedback, classification, compression.

    1. INTRODUCTION. Driven by the continuous progress in semiconductor technology, today's high-end digital hearing aids offer powerful signal processing. Details regarding the acoustic feedback problem and possible solutions are discussed in Section 5. Note that feedback suppression can be applied at different stages of the signal flow dependent on the chosen strategy.
  • Bimodal Hearing Or Bilateral Cochlear Implants: a Review of the Research Literature
    Bimodal Hearing or Bilateral Cochlear Implants: A Review of the Research Literature. Carol A. Sammeth, Ph.D.,1,2 Sean M. Bundy,1 and Douglas A. Miller, M.S.E.E.3

    ABSTRACT: Over the past 10 years, there have been an increasing number of patients fitted with either bimodal hearing devices (a unilateral cochlear implant [CI] and a hearing aid on the other ear) or bilateral cochlear implants. Concurrently, there has been an increasing interest in and number of publications on these topics. This article reviews the now fairly voluminous research literature on bimodal hearing and bilateral cochlear implantation in both children and adults. The emphasis of this review is on more recent clinical studies that represent current technology and that evaluated speech recognition in quiet and noise, localization ability, and perceived benefit. A vast majority of bilaterally deaf subjects in these studies showed benefit in one or more areas from bilateral CIs compared with listening with only a unilateral CI. For patients who have sufficient residual low-frequency hearing sensitivity for the provision of amplification in the nonimplanted ear, bimodal hearing appears to provide a good nonsurgical alternative to bilateral CIs or to unilateral listening for many patients. KEYWORDS: Bimodal hearing, bimodal devices, bilateral cochlear implants, binaural hearing.

    Learning Outcomes: As a result of this activity, the participant will be able to (1) compare possible benefits of bimodal hearing versus bilateral cochlear implantation based on current research, and (2) identify when to consider fitting of bimodal hearing devices or bilateral cochlear implants. Multichannel cochlear implants (CIs) have proven to be a highly successful intervention for persons with hearing loss that is too severe for good performance with conventional... 1Cochlear Americas, Centennial, Colorado; 2Division of Centennial, CO 80111 (e-mail: [email protected]).
  • Audio Engineering Society Convention Paper Presented at the 113Th Convention 2002 October 5–8 Los Angeles, CA, USA
    Audio Engineering Society Convention Paper. Presented at the 113th Convention, 2002 October 5-8, Los Angeles, CA, USA. This convention paper has been reproduced from the author's advance manuscript, without editing, corrections, or consideration by the Review Board. The AES takes no responsibility for the contents. Additional papers may be obtained by sending request and remittance to Audio Engineering Society, 60 East 42nd Street, New York, New York 10165-2520, USA; also see www.aes.org. All rights reserved. Reproduction of this paper, or any portion thereof, is not permitted without direct permission from the Journal of the Audio Engineering Society.

    THE USE OF HEAD-AND-TORSO MODELS FOR IMPROVED SPATIAL SOUND SYNTHESIS. V. Ralph Algazi1, Richard O. Duda1, and Dennis M. Thompson1. 1CIPIC Interface Laboratory, University of California, Davis CA, 95616-8553, USA. Correspondence should be addressed to Richard O. Duda ([email protected]).

    ABSTRACT: This paper concerns the use of a simple head-and-torso model to correct deficiencies in the low-frequency behavior of experimentally measured head-related transfer functions (HRTFs). This so-called "snowman" model consists of a spherical head located above a spherical torso. In addition to providing improved low-frequency response for music reproduction, the model provides the major low-frequency localization cues, including cues for low-elevation as well as high-elevation sources. The model HRTF and the measured HRTF can be easily combined by using the phase response of the model at all frequencies and by "cross-fading" between the dB magnitude responses of the model and the measurements.
  • Starkey Professional Training & Education Newsletter | Issue 3
    Starkey Professional Training & Education Newsletter | Issue 3

    Welcome to Starkey Hearing Technologies. Welcome to issue 3 of our training newsletter. Thank you to everyone who entered our competition for the dementia lounge kit in the last issue. We are delighted to announce Jane McCann from Salford Royal Audiology Department as the winner of the dementia kit. Congratulations Jane! We have a 3D Ear Model - 3 times life size - worth £64 to give away to a lucky winner, so be sure to click and enter for a chance to win. At Starkey Hearing Technologies we believe to hear better is to live better, and we are proud to offer you our Summit and Kinnect hearing technology that connects and engages your patients into their world in more ways than ever before. In this issue we explore CROS and BICROS solutions. Starkey Hearing Technologies have a strong reputation for delivering exceptional CROS and BICROS hearing solutions to support patients with single-sided hearing and unequal loss. In this issue our Education and Training team look at this hearing technology in more detail, alongside some of the benefits and challenges that are presented when fitting these types of solutions. We hope this information is useful and we look forward to supporting in-clinic training. Best wishes, The Starkey Team.

    STARKEY HEARING TECHNOLOGIES: SUMMIT CROS/BICROS SYSTEMS. Introducing Summit CROS/BICROS Systems. Starkey offer 3 Summit wireless CROS/BICROS systems to our health service customers; they allow a wide range of hearing levels in the better ear to be appropriately fitted whilst providing a robust connection for wireless transmission from an unaidable ear. A hearing aid fitted to just the better hearing ear may be insufficient: the head shadow effect occurs when sound arriving on one side of the head is physically obstructed by the head itself, which causes attenuation and filtering of sounds.
  • Technical Advances for Bilateral Implantation
    Hearing with Two Ears: Technical Advances for Bilateral Cochlear Implantation. Advanced Bionics® Corporation, Valencia, CA 91355.

    The benefits of cochlear implantation have increased progressively with improvements in technology and in clinical practice over the last two decades. Most of the 60,000 people who have received cochlear implants worldwide have received substantial communication benefit from implantation in only one ear. However, advancements in technology now have the potential to add significant benefit through implantation of both ears. In people with normal hearing, having two ears provides two major benefits: 1. The BILATERAL BENEFIT is the ability to listen with the ear that has a better signal-to-noise ratio. The bilateral benefit comes into play when speech and noise come from different directions, and is primarily a result of the head shadow effect.

    Thus, the right and left auditory cortices can develop in a more normal sequence. On the other hand, if a child is implanted on only one side, the parts of the brain that would have been stimulated by the non-implanted ear will not develop, and eventually plasticity will be greatly diminished. The long-term consequence may be that the pathways that would have been stimulated by the non-stimulated ear will be permanently unresponsive to sound and not available for potential future restoration of hearing through advances in microbiology or genetic engineering. Second, a child who is deafened by meningitis and receives only one implant runs the risk of ossification of the unimplanted cochlea, thereby compromising the ability to benefit from future implantation in that ear. For the first time, the technology used in the HiResolution™ Bionic Ear has the potential to deliver the critical sound information that can provide improved
  • "Rationale for Bilateral Cochlear Implantation In
    2 January 2006. Rationale for Bilateral Cochlear Implantation in Children and Adults.

    This article will review research and presentations of import on the issue of bilateral cochlear implantation in children and adults, and the rationale for bilateral cochlear implantation in appropriate candidates.4

    BINAURAL HEARING: SUBJECTIVE IMPRESSIONS AND OBJECTIVE BENEFITS. The expected benefits of bilateral cochlear implantation are based on known deficits experienced by both unilateral cochlear implant users as well as individuals with unilateral profound hearing loss (single-sided deafness, SSD). Binaural hearing is made possible by several sub-phenomena: the head shadow effect, binaural redundancy, and the binaural squelch effect. The "head shadow effect" is a physical phenomenon that is the result of the head acting as an acoustic barrier to sounds and noise coming from different locations in space. The ear furthest from the noise source will have a more advantageous signal-to-noise ratio (SNR) than the ear closest to the noise source. This effect results in an average of 6.4 dB of noise attenuation, but can be as high as 20 dB for high-frequency speech sounds.10,11 The binaural squelch effect arises from the addition of a second ear receiving a poorer SNR: an improvement of 3 dB on average is possible when compared to just listening to the ear receiving the better SNR.13 In effect, the SNR can be about 3 dB worse for a binaural listener and still achieve the same speech understanding as a monaural listener via the squelch effect. Through these binaural mechanisms (head shadow, redundancy, and squelch) the binaural listener is able to understand speech with a more adverse SNR than a monaural listener.9,14 Sound localization ability is made possible by the central auditory system's
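The binaural mechanisms described in that article reduce to simple dB arithmetic; an illustrative sketch using the article's figures (~6.4 dB average head shadow, ~3 dB squelch), a simplification rather than a clinical model:

```python
def binaural_snr_advantage(snr_better_ear_db: float,
                           head_shadow_db: float = 6.4,
                           squelch_db: float = 3.0) -> dict:
    """With noise on one side of the head, the far (shadowed) ear enjoys
    `head_shadow_db` better SNR than the near ear, so the near ear is
    that much worse off; binaural squelch then adds roughly `squelch_db`
    on top of the better-ear SNR for a binaural listener."""
    return {
        "better_ear": snr_better_ear_db,
        "worse_ear": snr_better_ear_db - head_shadow_db,
        "binaural_effective": snr_better_ear_db + squelch_db,
    }
```

For a better-ear SNR of 0 dB, this predicts a -6.4 dB SNR at the shadowed side's opposite ear and an effective +3 dB for the binaural listener.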
  • Bone-Anchored Hearing Aid (BAHA) Nfortunately, Hearing Loss Is a Com- Mark J
    Copyright © 2004, Barrow Neurological Institute. Bone-Anchored Hearing Aid (BAHA). Mark J. Syms, MD.

    Unfortunately, hearing loss is a common presenting symptom or outcome of therapy for acoustic neuromas. The resultant hearing loss is called single-sided deafness. Alternative causes of single-sided deafness are viral infections, Ménière's disease, and head and ear trauma. Each year in the United States, an estimated 60,000 people acquire single-sided deafness.

    Single-sided deafness is a common presenting symptom or outcome of therapy for acoustic neuromas. It manifests with an inability to hear on the side of the loss, decreased ability to hear in background noise, and loss of the ability to determine the location of the origin of a sound. The bone-anchored hearing aid (BAHA) is effective for the rehabilitation of single-sided deafness. The BAHA is composed of an osseointegrated abutment, which is placed during a 45-minute outpatient surgery, and an external processor. Compared to individuals who wear hearing aids, patients with the BAHA system report greater comfort and often comment that they forget that they are wearing the device. Key Words: BAHA device, bone conduction, hearing aid, hearing loss, single-sided deafness.

    Clinical Manifestations of Single-Sided Deafness. The manifestations of single-sided deafness are threefold. The most obvious problem is that a person with single-sided deafness is unable to hear on the side of the deaf ear. When hearing is lost in one ear, the ability to perceive the directionality of sound is also lost.
  • Spatial Hearing in Children with Hearing Aids
    UNILATERAL HEARING LOSS IN CHILDREN: MANAGEMENT AND OUTCOME MEASURES. Patti M. Johnstone, Ph.D., CCC-A/SLP; Erin Plyler, Au.D., CCC-A; Velma Sue Robertson, Ph.D., CCC-A. UTHSC, Knoxville, TN.

    MANAGEMENT CHALLENGES IN PEDIATRIC UHL: Will intervention delay have irreversible consequences for a child with UHL? Is there a way to know a priori that a child with UHL will experience academic failure or binaural deprivation in a way that affects daily living activities? Does the implementation of a holistic UHL evaluation protocol help in determining appropriate outcomes for children with unilateral hearing loss?

    PREVALENCE OF PEDIATRIC UHL:
    • Prevalence of UHL is of the order of 1 in 1000 in neonates, but increases to as high as 1-5% in school-aged children, though reports vary widely (Boyd 2015; Gordon et al., 2015).
    • Data on severity of hearing losses are generally lacking, but the proportion of severe-profound cases is probably around 30-50%.
    • The majority of cases have a genetic origin or are from perinatal complications, but include some acquired losses (largely from meningitis, head trauma, and viral infections) in older children.
    • A significant proportion of children with severe/profound UHL have sensorineural deafness due to structural abnormalities of the cochlea and IAC (reportedly between 29 and 67%), and a smaller number have outer and middle ear malformations with severe conductive hearing loss.
    • Newborn hearing screening programs have decreased the age at diagnosis of hearing loss. When hearing loss occurs in only 1 ear, the screening result may be overlooked or dismissed as unimportant, particularly when hearing in the opposite ear is normal (Gordon et al., 2015).
  • Chronic Monaural Sound Localization
    The Journal of Neuroscience, April 28, 2004 • 24(17):4163–4171. Behavioral/Systems/Cognitive. Contribution of Head Shadow and Pinna Cues to Chronic Monaural Sound Localization. Marc M. Van Wanrooij and A. John Van Opstal. Department of Medical Physics and Biophysics, University of Nijmegen, 6525 EZ Nijmegen, The Netherlands.

    Monaurally deaf people lack the binaural acoustic difference cues in sound level and timing that are needed to encode sound location in the horizontal plane (azimuth). It has been proposed that these people therefore rely on spectral pinna cues of their normal ear to localize sounds. However, the acoustic head-shadow effect (HSE) might also serve as an azimuth cue, despite its ambiguity when absolute sound levels are unknown. Here, we assess the contribution of either cue in the monaural deaf to two-dimensional (2D) sound localization. In a localization test with randomly interleaved sound levels, we show that all monaurally deaf listeners relied heavily on the HSE, whereas binaural control listeners ignore this cue. However, some monaural listeners responded partly to actual sound-source azimuth, regardless of sound level. We show that these listeners extracted azimuth information from their pinna cues. The better monaural listeners were able to localize azimuth on the basis of spectral cues, the better their ability to also localize sound-source elevation. In a subsequent localization experiment with one fixed sound level, monaural listeners rapidly adopted a strategy on the basis of the HSE. We conclude that monaural spectral cues are not sufficient for adequate 2D sound localization under unfamiliar acoustic conditions. Thus, monaural listeners strongly rely on the ambiguous HSE, which may help them to cope with familiar acoustic environments.
  • Binaural Amplification
    WIDEXPRESS no. 29, March 2011. By Robert W. Sweetow, Ph.D., University of California, San Francisco. Evidence for the Benefits of Binaural Amplification.

    The benefits of binaural hearing have been known for decades. They include, but are likely not limited to, elimination of the head shadow effect, binaural summation, binaural squelch, reduction of central auditory sensory deprivation, utilization of the unique characteristics of certain neurons to encode both arrival time and details about the shape of the acoustic input, scene analysis, improved understanding of speech in adverse acoustic environments, better management of bilateral tinnitus, and greater user satisfaction. Yet despite these advantages, a significant number of hearing health care professionals still do not, or feel they cannot, abide by the general consensus that unless a significant asymmetry exists between the ears in either sensitivity or word recognition ability, the standard should be trial with binaural amplification. Furthermore, much of this disparity in usage of binaural amplification and recommendation of two hearing aids is geographic specific. For example, in the United States, approximately 80% of hearing aid users who have bilateral hearing loss own two aids1, but the same can be said for only 40-65% of western European nations2, and unconfirmed data estimate binaural usage to be as low as 30% in certain Asian regions3. In this issue of WidexPress, this perplexing matter will be confronted by 1) exploring reasons, both justifiable and unjustifiable, for not regularly utilizing binaural amplification when a bilateral hearing loss exists; 2) examining evidence supporting binaural superiority; 3) discussing whether hearing aids take advantage of binaural cues; and 4) briefly describing new strategies designed to maintain those cues for the hearing-impaired listener.
  • Fitting Strategies for Patients with Unilateral Hearing Loss
    9. Fitting Strategies for Patients with Unilateral Hearing Loss. Michael Valente, Maureen Valente, Jane Enrietto, Karen M. Layton.

    Introduction. Patients with unilateral hearing loss typically have difficulty with (1) locating the sources of sound, (2) understanding speech when the signal arrives on the side of the poorer ear, and (3) understanding speech in background noise (especially if the noise is arriving on the side of the better ear). Patients with unilateral hearing loss can present a challenge to the dispensing audiologist. The dispenser can choose a traditional dispensing model, which suggests providing extensive counseling on the communication problems likely to occur as a result of unilateral hearing loss, or recommend contralateral routing of the signal (CROS) amplification to the good ear. On the other hand, the dispenser could explore with the patient several alternate fitting strategies. This chapter focuses on the problems associated with unilateral hearing loss and provides information beneficial for dispensers who want to explore alternative fitting strategies for patients with unilateral hearing loss.

    Definition of Unilateral Hearing Loss. For the purposes of this chapter, unilateral hearing loss is defined as unaidable hearing in one ear and normal hearing [15 dB hearing level (HL) or better at 250 to 8000 Hz] in the opposite ear. Unaidable hearing is defined as an ear having one or more of the following characteristics: 1. Profound sensorineural hearing loss so that amplified sound cannot be heard with any degree of usefulness. 2. Very poor word recognition score. 3. Marked intolerance for amplified sounds.

    Incidence and Prevalence of Unilateral Hearing Loss. Although information is available regarding the incidence of unilateral hearing loss in children (Bess and Tharpe, 1986; Kielmovitch and Friedman, 1988; Oyler et al, 1988; Brookhouser et al, 1991), there is less information