Modeling of the Head-Related Transfer Functions for Reduced Computation and Storage
Charles Duncan Lueck, Iowa State University

Total pages: 16

File type: PDF, size: 1020 KB

Iowa State University Capstones, Theses and Dissertations: Retrospective Theses and Dissertations, 1995. Modeling of the head-related transfer functions for reduced computation and storage, Charles Duncan Lueck, Iowa State University. Follow this and additional works at: https://lib.dr.iastate.edu/rtd. Part of the Electrical and Electronics Commons.

Recommended Citation: Lueck, Charles Duncan, "Modeling of the head-related transfer functions for reduced computation and storage" (1995). Retrospective Theses and Dissertations. 10929. https://lib.dr.iastate.edu/rtd/10929

This dissertation is brought to you for free and open access by the Iowa State University Capstones, Theses and Dissertations at Iowa State University Digital Repository. It has been accepted for inclusion in Retrospective Theses and Dissertations by an authorized administrator of Iowa State University Digital Repository. For more information, please contact [email protected].

INFORMATION TO USERS: This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer. The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction. In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book. Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order. A Bell & Howell Information Company, 300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA. 313/761-4700, 800/521-0600.

Modeling of the head-related transfer functions for reduced computation and storage

by Charles Duncan Lueck

A Dissertation Submitted to the Graduate Faculty in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY

Department: Electrical and Computer Engineering
Major: Electrical Engineering (Communications and Signal Processing)

Signatures were redacted for privacy: for the Major Department; for the Graduate College.

Iowa State University, Ames, Iowa, 1995
Copyright © Charles Duncan Lueck, 1995. All rights reserved.

UMI Number: 9531765. Copyright 1995 by Lueck, Charles Duncan. All rights reserved. UMI Microform 9531765, Copyright 1995 by UMI Company. All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code. UMI, 300 North Zeeb Road, Ann Arbor, MI 48103.

TABLE OF CONTENTS

1. INTRODUCTION
   1.1 Applications
   1.2 Objectives
   1.3 Summary
2. SOUND LOCALIZATION THEORY
   2.1 Background
   2.2 Measurement of the Head-Related Transfer Functions
   2.3 Synthesis of Localized Sound
   2.4 Performance and Implementation Concerns
3. INTRODUCTION TO SYSTEM MODELING
   3.1 System Modeling Problem
   3.2 Parameter Estimation
      3.2.1 Direct modeling
      3.2.2 Indirect modeling
         3.2.2.1 Least-squares (LS) method
         3.2.2.2 Modified Prony's method
         3.2.2.3 Iterative prefiltering method
      3.2.3 Balanced model reduction
   3.3 Order Estimation
4. MODELING OF THE HEAD-RELATED TRANSFER FUNCTIONS
   4.1 Minimum-Phase Approximation
   4.2 Model Order Estimation
      4.2.1 Application of minimum eigenvalue order estimation to HRTFs
      4.2.2 The diffuse field and directional transfer functions (DTFs)
      4.2.3 Application of minimum eigenvalue order estimation to DTFs
   4.3 Perceptual Enhancement
      4.3.1 Least-squares method
      4.3.2 The hearing process and psychoacoustics
      4.3.3 Critical band smoothing
      4.3.4 Critical band distance measurement
      4.3.5 Weighted least-squares error method
      4.3.6 Iterative prefiltering method
      4.3.7 Weighted iterative prefiltering method
      4.3.8 Pole-zero cancellation method
      4.3.9 Balanced model reduction method
   4.4 Interpolation
5. RESULTS
   5.1 Critical Band Distance Measurements
      5.1.1 Effect of smoothing
      5.1.2 Comparison of modeling techniques
   5.2 Computation
   5.3 Localization Tests
      5.3.1 Objective
      5.3.2 Synthesis
      5.3.3 Procedure
      5.3.4 Analysis
6. CONCLUSIONS
   6.1 Discussion
   6.2 Future Work
ACKNOWLEDGMENTS
BIBLIOGRAPHY
APPENDIX A
APPENDIX B
APPENDIX C

1. INTRODUCTION

Recent advances in audio and video processing have accelerated the emergence and development of the three-dimensional virtual display. The main objective of the virtual display is to present aural and visual information in such a manner that it is realistically perceived by the user. Hence, the term "virtual reality" has become a popular buzzword used to describe the technology.
Still in its infancy, the virtual display shows signs of great potential, opening the door to a host of innovative applications in areas ranging from control systems to education to entertainment. However, the technology required to fully achieve this potential has not yet been realized, and will not be realized without significant additional research. The research presented in this dissertation will be directed toward the implementation of the virtual acoustic display, a device for the synthesis of three-dimensional sound via headphones.

The virtual acoustic display typically involves the implementation of digital filters which emulate the head-related transfer functions (HRTFs) [1-3]. These functions represent the transformation of sound pressure from a sound source in free space to the eardrums of the listener. Current implementations generally rely on FIR filtering techniques which result in high computation and storage demands, particularly for real-time systems [2]. Because these demands directly translate into increased cost of implementation, reduction of computation and storage requirements is of great concern.

1.1 Applications

Three-dimensional audio technology has generally not received the attention of its three-dimensional visual counterpart. However, this is also an area of great importance and intensive research. Scientists and engineers have been working to understand the complex physical and psychological principles involved in human sound localization and to apply this understanding toward the development of a virtual acoustic display for conveying three-dimensional sound through headphones. Such an acoustic display would prove beneficial to a wide variety of applications. Among these might be:

• Entertainment/Recording [4,5]. Undoubtedly, virtual acoustics will find its way into entertainment applications, both at the consumer level and commercially.
Potential exists for the enhancement of movies, musical recordings, video games, and other multimedia applications. True three-dimensional acoustic reproduction would create a level of realism unmatched by today's audio reproduction techniques.

• Tool for Scientific Research [2,3]. Headphone-delivered stimulus allows researchers precise control over the sound pressure at the eardrums of the listener. The ability to provide simulated signals equivalent to those heard in a realistic environment is unquestionably a valuable scientific tool in the study of the physical and psychological aspects of sound localization.

• Aircraft control and communications [1,4,6]. Aviation applications would particularly benefit from the incorporation of three-dimensional acoustic displays. In air traffic control and collision-avoidance systems, where the operator's spatial awareness is critically important, three-dimensional acoustic cues would likely increase efficiency and quicken response times. In the cockpit, localized acoustic cues would instantly convey to the pilot information regarding the proximity and location of other aircraft, obstacles, and targets.

• Teleconferencing/Telecommunications [5]. The effectiveness of remote communication could certainly be enhanced by the application of three-dimensional acoustics. By providing directional cues to speech, individuals would have the perception of being in the same room as other participants, when, in fact, they might actually be separated geographically by large distances.

• Guidance systems for the blind [4]. Another application which has been proposed is the incorporation of a virtual acoustic display as a guidance system for the blind. In this system, the global positioning system (GPS) would be used to provide location information to the system.
The guidance system would then provide audible cues indicating one's position in relation to landmarks, to the location of buildings, or to the proximity of a busy street or intersection.

In general, three-dimensional acoustic displays would be valuable in any situation in which one's spatial awareness is important, or in any situation in which the added dimension of localized
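The FIR-based HRTF synthesis discussed in the introduction amounts to one convolution per ear: the source signal is filtered with a measured head-related impulse response (HRIR) for the left ear and another for the right. The sketch below illustrates that approach and why it is costly; it is not the dissertation's implementation, and the 128-tap length and random placeholder arrays are hypothetical stand-ins for measured HRIRs.

```python
import numpy as np

def synthesize_binaural(mono, hrir_left, hrir_right):
    """Render a mono signal at one source direction by convolving it with
    the left- and right-ear HRIRs. Returns an (N, 2) array for headphones."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Hypothetical 128-tap HRIRs: each output sample costs ~128 multiply-adds
# per ear, and one HRIR pair must be stored for every measured direction.
# This per-sample cost and per-direction storage is the burden that
# reduced-order HRTF models aim to shrink.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)
hl = rng.standard_normal(128)
hr = rng.standard_normal(128)
out = synthesize_binaural(signal, hl, hr)
print(out.shape)  # (1127, 2): full convolution length N + taps - 1
```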
Recommended publications
  • Study Guide for Unit 5 (Hearing, Audition, Music and Speech)
    Psychology of Perception, Psychology 4165-100, Spring 2021, 11:10–12:25 Tuesday, Thursday. Lewis O. Harvey, Jr. (Instructor), Andrew J. Mertens (Teaching Assistant). Study Guide for Unit 5 (hearing, audition, music and speech). Be able to answer the following questions and be familiar with the concepts involved in the answers. Review your textbook reading, lectures, homework and lab assignments and be familiar with the concepts included in them. 1. Diagram the three parts of the auditory system: outer, middle and inner ear. How is sound mapped onto the basilar membrane? Know these terms: pinna, concha, external auditory canal, tympanic membrane, malleus, incus, stapes, semicircular canals, oval and round windows, cochlea, basilar membrane, organ of Corti, Eustachian tube, inner and outer hair cells. 2. What are the three main physical dimensions of the sound stimulus? What are the three main psychological dimensions of the sound experience? What are the relationships and interdependencies among them? 3. How does the basilar membrane carry out a frequency analysis of the physical sound stimulus? 4. What are the roles of the inner and the outer hair cells on the organ of Corti? 5. What is the critical band? Describe three different methods for measuring the critical band. 6. According to Plomp and Levelt (1965), how far apart in frequency must two sine wave tones be in order to sound maximally unpleasant? Why do some musical notes (e.g., the octave or the fifth) sound consonant when played together with the tonic and some other notes (e.g., the second or the seventh) sound dissonant when played with the tonic? 7.
  • The Human Ear  Hearing, Sound Intensity and Loudness Levels
    UIUC Physics 406, Acoustical Physics of Music. The Human Ear: Hearing, Sound Intensity and Loudness Levels. We've been discussing the generation of sounds, so now we'll discuss the perception of sounds. Human senses: the astounding ~4 billion year evolution of living organisms on this planet, from the earliest single-cell life form(s) to the present day, with our current abilities to hear / see / smell / taste / feel / etc., is the result of the evolutionary forces of nature associated with "survival of the fittest" — i.e., it is evolutionarily (very) beneficial for us to be able to hear/perceive the natural sounds that exist in the environment: it helps us locate food and keep from becoming food, just as vision/sight enables us to perceive objects in our 3-D environment and the ability to move/locomote through the environment enhances our ability to find food/keep from becoming food. Our sense of balance, via a stereo pair (!) of semicircular canals (an inertial guidance system!), helps us respond to 3-D inertial forces (e.g., gravity) and maintain our balance/avoid injury. Our senses of taste and smell warn us of things that are bad to eat and/or breathe. Human perception of sound: the human ear responds to disturbances/temporal variations in pressure. It is amazingly sensitive: it has more than 6 orders of magnitude in dynamic range of pressure sensitivity (12 orders of magnitude in sound intensity, since I ∝ p²) and 3 orders of magnitude in frequency (20 Hz – 20 kHz)! The existence of 2 ears (stereo!) greatly enhances 3-D localization of sounds, and also the determination of pitch (i.e.
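The dynamic-range figures quoted in this excerpt (6 orders of magnitude in pressure, hence 12 in intensity since intensity goes as pressure squared) can be verified with a few lines of arithmetic. This is an illustrative check, assuming the conventional 20 µPa reference pressure, which is not a number taken from the excerpt itself.

```python
import math

P_REF = 20e-6  # conventional reference pressure, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

# 6 orders of magnitude in pressure is 12 orders in intensity (I ~ p^2),
# i.e. a 120 dB span between threshold and the loudest tolerable sounds:
loud = P_REF * 1e6
print(spl_db(P_REF))  # 0.0 dB at the reference (threshold) pressure
print(spl_db(loud))   # ~120 dB
```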
  • A Biological Rationale for Musical Consonance
    PERSPECTIVE. A biological rationale for musical consonance. Daniel L. Bowling (Department of Cognitive Biology, University of Vienna, 1090 Vienna, Austria) and Dale Purves (Duke Institute for Brain Sciences, Duke University, Durham, NC 27708). Edited by Solomon H. Snyder, Johns Hopkins University School of Medicine, Baltimore, MD, and approved June 25, 2015 (received for review March 25, 2015). The basis of musical consonance has been debated for centuries without resolution. Three interpretations have been considered: (i) that consonance derives from the mathematical simplicity of small integer ratios; (ii) that consonance derives from the physical absence of interference between harmonic spectra; and (iii) that consonance derives from the advantages of recognizing biological vocalization and human vocalization in particular. Whereas the mathematical and physical explanations are at odds with the evidence that has now accumulated, biology provides a plausible explanation for this central issue in music and audition. consonance | biology | music | audition | vocalization. Why we humans hear some tone combinations as relatively attractive (consonance) and others as less attractive (dissonance) has been debated for over 2,000 years (1–4). These perceptual differences form the basis of melody when tones are played sequentially [...] the perfect fifth (3:2), and the perfect fourth (4:3), ratios that all had spiritual and cosmological significance in Pythagorean philosophy (9, 10). The mathematical range of Pythagorean consonance was extended in the Renaissance [...] revolution in the 17th century, which introduced a physical understanding of musical tones. The science of sound attracted many scholars of that era, including Vincenzo and Galileo Galilei, René Descartes, and later Daniel Bernoulli and Leonhard Euler.
  • Laminar Fine Structure of Frequency Organization in Auditory Midbrain
    letters to nature. Laminar fine structure of frequency organization in auditory midbrain. Christoph E. Schreiner (W. M. Keck Center for Integrative Neuroscience, Sloan Center for Theoretical Neuroscience, Coleman Laboratory, 513 Parnassus Avenue, University of California at San Francisco, San Francisco, California 94143-0732, USA) and Gerald Langner (Zoologisches Institut, Technische Hochschule Darmstadt, D-64287 Darmstadt, Germany). [...] ICC from edge to edge (refs 17–19). Anatomical studies indicate that a prominent structural feature of the ICC is a preferred orientation of its principal neurons, which are arranged in distinctive, parallel fibrodendritic laminae with a thickness of 120 to 180 µm (refs 20–25). In the cat ICC, this allows for 30 to 45 stacked anatomical laminae. They seem to be of functional significance because laminae defined by electrophysiology (refs 17–19, 25, 26), by activity-dependent markers (refs 27, 28), or by neuroanatomical methods (refs 20–25) are of similar spatial orientation and extent. We consider that a layered but largely continuous frequency organization exists in the ICC and that it is the neuronal substrate at which critical bands emerge. The spatial relationship of a laminated anatomical structure and a specific distribution of functional properties within and across laminae, such as complex local inhibition and integration networks, appears to be particularly [...] The perception of sound is based on signal processing by a bank
  • The Speech Critical Band (S-Cb) in Cochlear Implant Users: Frequency Resolution Employed During the Reception of Everyday Speech
    THE SPEECH CRITICAL BAND (S-CB) IN COCHLEAR IMPLANT USERS: FREQUENCY RESOLUTION EMPLOYED DURING THE RECEPTION OF EVERYDAY SPEECH Capstone Project Presented in Partial Fulfillment of the Requirements for the Doctor of Audiology In the Graduate School of The Ohio State University By Jason Wigand, B.A. ***** The Ohio State University 2013 Capstone Committee: Approved by: Eric W. Healy, Ph.D. - Advisor Eric C. Bielefeld, Ph.D. Christy M. Goodman, Au.D. _______________________________ Advisor © Copyright by Jason Wigand 2013 Abstract It is widely recognized that cochlear implant (CI) users have limited spectral resolution and that this represents a primary limitation. In contrast to traditional measures, Healy and Bacon [(2006) 119, J. Acoust. Soc. Am.] established a procedure for directly measuring the spectral resolution employed during processing of running speech. This Speech-Critical Band (S-CB) reflects the listeners’ ability to extract spectral detail from an acoustic speech signal. The goal of the current study was to better determine the resolution that CI users are able to employ when processing speech. Ten CI users between the ages of 32 and 72 years using Cochlear Ltd. devices participated. The original standard recordings from the Hearing In Noise Test (HINT) were filtered to a 1.5-octave band, which was then partitioned into sub-bands. Spectral information was removed from each partition and replaced with an amplitude-modulated noise carrier band; the modulated carriers were then summed for presentation. CI subject performance increased with increasing spectral resolution (increasing number of partitions), never reaching asymptote. This result stands in stark contrast to expectation, as it indicates that increases in spectral resolution up to that of normal hearing produced increases in performance.
  • Psychoacoustics: a Brief Historical Overview
    Psychoacoustics: A Brief Historical Overview. William A. Yost (Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona 85287, USA; [email protected]). From Pythagoras to Helmholtz to Fletcher to Green and Swets, a centuries-long historical overview of psychoacoustics. What is psychoacoustics? Psychoacoustics is the psychophysical study of acoustics. OK, psychophysics is the study of the relationship between sensory perception (psychology) and physical variables (physics), e.g., how does perceived loudness (perception) relate to sound pressure level (physical variable)? "Psychophysics" was coined by Gustav Fechner in his book Elemente der Psychophysik (1860; Elements of Psychophysics). Fechner's treatise covered at least three topics: psychophysical methods, psychophysical relationships, and panpsychism (the notion that the mind is a universal aspect of all things and, as such, panpsychism rejects the Cartesian dualism of mind and body). [Figure 1: Gustav Fechner (1801–1887), "Father of Psychophysics."] Today psychoacoustics is usually broadly described as auditory perception or just hearing, although the latter term also includes the biological aspects of hearing (physiological acoustics). Psychoacoustics includes research involving humans and nonhuman animals, but this review just covers human psychoacoustics. Within the Acoustical Society of America (ASA), the largest Technical Committee is Physiological and Psychological Acoustics (P&P), and Psychological Acoustics is psychoacoustics. Trying to summarize the history of psychoacoustics in about 5,000 words is a challenge. I had to make difficult decisions about what to cover and omit. My history stops around 1990. I do not cover in any depth speech, music, animal bioacoustics, physiological acoustics, and architectural acoustic topics because these may be future Acoustics Today historical overviews.
  • Psychoacoustics & Hearing
    Abstract: In this lecture, the basic concepts of sound and acoustics are introduced as they relate to human hearing. Basic perceptual hearing effects such as loudness, masking, and binaural localization are described in simple terms. The anatomy of the ear and the hearing function are explained. The causes and consequences of hearing loss are shown along with strategies for compensation and suggestions for prevention. Examples are given throughout illustrating the relation between the signal processing performed by the ear and relevant consumer audio technology, as well as its perceptual consequences. Christopher J. Struck CJS Labs San Francisco, CA 19 November 2014 Frequency is perceived as pitch or tone. An octave is a doubling of frequency. A decade is a ten times increase in frequency. Like amplitude, frequency perception is also relative. Therefore, a logarithmic frequency scale is commonly used in acoustics. The normal range for healthy young humans is 20 –20 kHz. The frequency range of fundamental tones on the piano is 27 to 4186 Hz, as shown. The ranges of other instruments and voices are shown. The Threshold of Hearing at the bottom of the graph is the lowest level pure tone that can be heard at any given frequency. The Threshold of Discomfort, at the top of the graph represents the loudest tolerable level at any frequency. The shaded centre area is the range of level and frequency for speech. The larger central area is the range for music. Level Range is also commonly referred to as Dynamic Range. The subjective or perceived loudness of sound is determined by several complex factors, primarily Masking and Hearing Acuity.
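The octave and decade definitions given in this lecture (a doubling and a tenfold increase in frequency, respectively) translate directly into logarithms, which is why acoustics uses a logarithmic frequency axis. A short illustrative check, using the 20 Hz–20 kHz normal hearing range and the 27–4186 Hz piano range quoted above:

```python
import math

def octaves(f_low, f_high):
    """Number of octaves (frequency doublings) between two frequencies."""
    return math.log2(f_high / f_low)

def decades(f_low, f_high):
    """Number of decades (factors of ten) between two frequencies."""
    return math.log10(f_high / f_low)

# The nominal 20 Hz - 20 kHz hearing range spans 3 decades, i.e. ~10 octaves.
print(decades(20, 20_000))              # ~3.0
print(round(octaves(20, 20_000), 2))    # 9.97
# Piano fundamentals (27 to 4186 Hz, per the lecture) span a bit over 7 octaves:
print(round(octaves(27, 4186), 1))
```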
  • Musical Acoustics Lecture 11: The Human Ear - I
    Musical Acoustics, Lecture 11: The Human Ear - I (C. Bertulani). Anatomy of the ear: the human ear is a highly sensitive sound receptor in which pressure fluctuations in the outer ear are transformed into vibrations of small bones (the ossicles) in the middle ear that are ultimately communicated to the cochlea located in the inner ear, where the vibrations are further transformed by stereocilia (hair cells) into neural impulses distributed by frequency. Outer ear: the pinna ("the feather") matches the ear canal to the outside world; the meatus ("the passageway") conducts sound into the head; the tympanum ("the drum") transforms pressure fluctuations into displacement. Middle ear, the ossicles ("little bones"): the malleus ("the hammer") is moved by the tympanum; the incus ("the anvil") is supported by ligaments that protect against loud percussion; the stapes ("the stirrup") has its force multiplied by 1.3 because of lever action. Inner ear: the cochlea ("the snail") converts displacement into neural impulses; the auditory nerve carries neural impulses to the brain; the semicircular canals detect motion and orientation. Structure of the cochlea: a spiral cone divided by the basilar membrane, with flow in on the top half and out on the bottom ("sloshing"); the organ of Corti sits on the basilar membrane, with inner and outer hair cells connecting through afferent and efferent synapses to the auditory nerve. [Slide figures: the tympanum viewed from inside with the malleus and ligaments; the ossicles; a cochlea micrograph (~2 mm) showing the oval and round windows; stereocilia detail.] Hair cells have nothing to do with hair that grows in the skin.
  • Binaural Response Organization Within a Frequency-Band Representation of the Inferior Colliculus: Implications for Sound Localization
    The Journal of Neuroscience, April 1986, 6(4): 962-973. Binaural Response Organization Within a Frequency-Band Representation of the Inferior Colliculus: Implications for Sound Localization. Jeffrey J. Wenstrup, Linda S. Ross, and George D. Pollak (Department of Zoology, University of Texas at Austin, Austin, Texas 78712). The auditory system of the mustache bat (Pteronotus parnellii) contains a disproportionately large representation of a narrow frequency band, corresponding to the dominant, 60 kHz component of its echolocation signal. In the inferior colliculus (IC), the 60 kHz representation comprises an architectonically distinct region called the dorsoposterior division (DPD), which is accessible for detailed physiological study. We examined the topographic distribution of binaural responses within this one frequency-band representation of the inferior colliculus. We describe two primary results. First, neurons with different binaural response properties are spatially segregated into one of four binaural response-specific regions of the DPD: a large region of monaural (EO) responses; two regions containing neu- [...] From the introduction: [...] 1968; Guinan et al., 1972; Rose et al., 1959; Tsuchitani, 1977), yet their topographically ordered projections converge on a single tonotopic organization in the ICC (Fitzpatrick, 1975; Merzenich and Reid, 1974; Semple and Aitkin, 1979; Serviere et al., 1984). Collicular neurons that are most sensitive to nearly the same frequency of sound are grouped within characteristic laminae of the ICC (Merzenich and Reid, 1974; Morest, 1964; Oliver and Morest, 1984; Rockel and Jones, 1973; Semple and Aitkin, 1979; Serviere et al., 1984). These frequency-band laminae receive convergent input from the corresponding frequency-band representations of brain stem auditory nuclei (Brunso-Bechtold et al., 1981; Roth et al., 1978; Zook and Casseday, 1982b).
  • Complex Sounds and Critical Bands, Bertram Scharf, Northeastern University
    Psychological Bulletin, 1961, Vol. 58, No. 3, 205-217. COMPLEX SOUNDS AND CRITICAL BANDS. Bertram Scharf, Northeastern University. Studies of the responses of human observers to bands of noise and other complex sounds have led to the measure of what appears to be a basic unit of hearing, the critical band. When the frequency spectrum of a stimulating sound is narrower than the critical band, the ear reacts one way; when the spectrum is wider, it reacts another way. For example, experiments show that at values less than the critical bandwidth, both loudness and absolute threshold are independent of bandwidth; only when the critical bandwidth is exceeded do the loudness and the absolute threshold increase with the width (Gassler, 1954; Zwicker & Feldtkeller, 1955; Zwicker, Flottorp, & Stevens, 1957). [...] given by the top curve in Figure 1. The ordinate gives the width (ΔF), in cycles per second, of the critical band; the abscissa gives the center frequency. As the frequency at the center of a complex sound increases, the critical band that is measured around the center frequency becomes wider. Not only does the critical band have the same values when measured for several kinds of auditory response, it is also independent of such stimulus parameters as the number of components in the complex (Scharf, 1959b) and the sound pressure level (Feldtkeller, 1955; Feldtkeller & Zwicker, 1956). Prior to the experimental measures of the critical band, Fletcher (1940) had hypothesized the existence of a critical band for masking.
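The widening of the critical band with center frequency that this excerpt describes is often summarized by Zwicker and Terhardt's (1980) analytic approximation. The formula below is that later published approximation, not one taken from the Scharf excerpt itself; it is included only to make the trend concrete.

```python
def critical_bandwidth_hz(fc_hz):
    """Zwicker & Terhardt (1980) approximation of critical bandwidth (Hz)
    as a function of center frequency (Hz)."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (fc_hz / 1000.0) ** 2) ** 0.69

# The bandwidth grows with center frequency, matching the excerpt's
# description of the top curve in its Figure 1:
for fc in (100, 500, 1000, 4000):
    print(fc, round(critical_bandwidth_hz(fc)))
```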
  • Neurophysiological Correlates of the Critical Bandwidth in the Human Auditory System
    Brigham Young University BYU ScholarsArchive Theses and Dissertations 2015-11-01 Neurophysiological Correlates of the Critical Bandwidth in the Human Auditory System Grace Ann Bentley Brigham Young University - Provo Follow this and additional works at: https://scholarsarchive.byu.edu/etd Part of the Communication Sciences and Disorders Commons BYU ScholarsArchive Citation Bentley, Grace Ann, "Neurophysiological Correlates of the Critical Bandwidth in the Human Auditory System" (2015). Theses and Dissertations. 5619. https://scholarsarchive.byu.edu/etd/5619 This Thesis is brought to you for free and open access by BYU ScholarsArchive. It has been accepted for inclusion in Theses and Dissertations by an authorized administrator of BYU ScholarsArchive. For more information, please contact [email protected], [email protected]. Neurophysiological Correlates of the Critical Bandwidth in the Human Auditory System Grace Ann Bentley A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Science David McPherson, Chair Richard Harris Martin Fujiki Department of Communication Disorders Brigham Young University November 2015 Copyright © 2015 Grace Ann Bentley All Rights Reserved ABSTRACT Neurophysiological Correlates of the Critical Bandwidth in the Human Auditory System Grace Ann Bentley Department of Communication Disorders, BYU Master of Science The critical bandwidth (CBW) is an auditory phenomenon that has been used to study various aspects of auditory processing, including auditory masking, complex tone processing, and loudness perception. Although the psychoacoustic aspects of the CBW have been well studied, the underlying neurophysiology of the CBW has not been as thoroughly examined. The current study examined the neurophysiology of the CBW in young adults, as well as loudness perception in response to the CBW.
  • The Spiral Staircase: Tonotopic Microstructure and Cochlear Tuning
    The Journal of Neuroscience, March 18, 2015, 35(11): 4683-4690. Systems/Circuits. The Spiral Staircase: Tonotopic Microstructure and Cochlear Tuning. Christopher A. Shera, Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts 02114. Although usually assumed to be smooth and continuous, mammalian cochlear frequency-position maps are predicted to manifest a staircase-like structure comprising plateaus of nearly constant characteristic frequency separated by abrupt discontinuities. The height and width of the stair steps are determined by parameters of cochlear frequency tuning and vary with location in the cochlea. The step height is approximately equal to the bandwidth of the auditory filter (critical band), and the step width matches that of the spatial excitation pattern produced by a low-level pure tone. Stepwise tonotopy is an emergent property arising from wave reflection and interference within the cochlea, the same mechanisms responsible for the microstructure of the hearing threshold. Possible relationships between the microstructure of the cochlear map and the tiered tonotopy observed in the inferior colliculus are explored. Key words: basilar membrane; cochlear tonotopy; frequency selectivity; inferior colliculus; otoacoustic emissions. Introduction: The cochlear tonotopic map relates the local characteristic frequency (CF) to position within the cochlea. In many animals, including humans, the CF, defined as the frequency of maximum sensitivity (minimum threshold) at low sound levels, is thought to vary quasiexponentially with position, with higher frequencies [...] Materials and Methods: BM transfer functions were computed using an active, transmission-line model of the cochlea based on the "double-pole" form of the BM admittance obtained by Zweig (1991). Parameter values and other details of the implementation have been described previously (Shera and Cooper, 2013).