Auditory Illusion Through Headphones: History, Challenges and New Solutions


The Technology of Binaural Listening & Understanding: Paper ICA2016-363

Auditory illusion through headphones: History, challenges and new solutions

Karlheinz Brandenburg (a),(b), Stephan Werner (b), Florian Klein (b), Christoph Sladeczek (a)
(a) Fraunhofer IDMT, Germany, [email protected], [email protected]
(b) TU Ilmenau, Germany, [email protected], [email protected], [email protected]

Abstract

The dream of perfect recreation of sound has always consisted of two parts: the reproduction of monaural sounds such that they seem to be exact copies of an original signal, and the plausible recreation of complex sound environments, the possibility to be immersed in sound. The latter goal seems to be much more difficult, especially if we consider reproduction over headphones. From standard two-channel sound reproduced over headphones, through artificial head recordings, to the inclusion of HRTFs and binaural room impulse responses, something was always missing to create a perfect auditory illusion. Depending on refinements like individually adapted HRTFs, these methods work for many people, but not for everybody. As we know now, in addition to the static, source- and listener-dependent modifications to headphone sound, we need to pay attention to cognitive effects: the perceived presence of an acoustical room rendering changes depending on our expectations. Prominent context effects are, for example, acoustic divergence between the listening room and the synthesized scene, visibility of the listening room, and prior knowledge triggered by where we have been before. Furthermore, cognitive effects are mostly time variant, which includes anticipation and assimilation processes caused by training and adaptation. We present experiments proving some of these well-known contextual effects by investigating features like distance perception, externalization, and localization. These features are shifted by adaptation and training. Furthermore, we present some proposals on how to get to a next level of fidelity in headphone listening. This includes the use of room simulation software and the adaptation of its auralization to different listening rooms by changing acoustical parameters.

Keywords: immersive sound via headphones, room simulation

1 Introduction

As long as there has been recording of sound, people have been dreaming about perfect sound reproduction enabling the real illusion of artists being in the room. There are reports that even Edison, when marketing early phonograph systems, emphasized audio quality, even over the artistic quality of the recording [9]. He organized demonstrations around the world where people were asked whether they were actually listening to the live artist or a recording. With the continued improvement of recording, amplifier technology and loudspeakers, today we can get a quite faithful reproduction of monaural signals. For reproducing sound from multiple sources in a room, however, we still cannot say that the task of a plausible recreation of the sound in a different room has been completely solved. Difficult as this task is for reproduction using loudspeakers, it is an even bigger problem when using headphones. The usual result of playing recordings that have been mixed for reproduction via two loudspeakers over headphones is sound which seems to come from within the head.
What we desire instead is to hear the sound coming from a stage in front of us (or around us). If we succeed in this, we call the result an externalized sound. Externalization describes the perception of the position of an auditory event outside or inside the head of the listener [29, 15]. Externalization is a crucial feature to reach a plausible spatial auditory illusion with binaural headphone systems. In the following chapters, we will first look at the main reasons for non-externalized sound and present older efforts to get around these problems. We then present more current work explaining the extent of the difficulties and look into some newer proposals to enable a perfect audio illusion via headphones.

2 Earlier work

In the field of electrical sound reproduction, headphones have always played a major role. In the early days, due to technical constraints, the development of a headphone speaker was much simpler than the development of a loudspeaker. Therefore, the first electrical devices allowing listening to recorded audio were based on headphone-like technology. One of the first systems available to a broader audience was the "Théâtrophone" developed by Clément Ader in 1881 [8]. This device allowed the transmission of a two-channel audio signal to different receiver stations, where users needed to hold two earcups to their ears to listen to concerts or plays. As the recording was realized using two microphones placed at a distance from each other, the users reported a spatial impression because of the inequalities of the sound at the two ears [22].

However, as these signals do not really represent the sound pressure at the ear drums as it would occur in natural listening, researchers have been working for more than 40 years to resemble the "real world" signals. Major cues for directional hearing are interaural level and time (phase) differences (Interaural Level Differences, ILD; Interaural Time Differences, ITD) and the direction- and frequency-dependent transfer function from sound source to ear drum (Head Related Transfer Functions, HRTF) [27]. To simulate the HRTFs in an audio recording situation, so-called dummy heads are used [26]. One of the first demonstrations made to a public audience was the presentation of a mechanical man called "Oscar" with microphone ears by AT&T at the Chicago World Fair in 1933 [13]. This was the starting point for the development of different dummy heads, which are still used today [21, 14].

With the availability of dummy heads it became possible to record sound as if a head were placed at a specific position. However, a true spatial impression of the recorded scenes was not perceivable for every listener, since there are large differences between the HRTFs of different test subjects and we are usually not very good at listening with somebody else's ears. Therefore, the next big step towards more realistic auralization was the introduction of individualized HRTFs. Over the years, different methods have been used, among others: in-ear measurement using small probes in the blocked or unblocked ear canal [26]; on-ear measurements including some correction factors; selection of HRTFs (without actual measurement) based on "what works best" [33]; and optical measurement of ear and ear canal geometry with calculation of an HRTF using numerical simulations [18, 19]. Another question in the usage of individualized HRTFs is the needed accuracy. There is more research on such questions than would fit in this paper. There are publicly available databases of HRTF measurements.
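To make the role of HRTFs concrete, the following is a minimal sketch of static binaural rendering: a dry mono signal is convolved with a left/right head-related impulse response (HRIR) pair for one fixed source direction. The HRIR arrays are synthetic placeholders, not measured data; in practice they would come from one of the publicly available HRTF databases mentioned above.

```python
# Minimal sketch of static binaural rendering (illustrative, not the authors' system).
# A mono source is convolved with a left/right HRIR pair; the placeholder HRIRs
# below only mimic the two basic cues: an interaural time difference (different
# delays) and an interaural level difference (different amplitudes).
import numpy as np
from scipy.signal import fftconvolve

fs = 48000                                    # sampling rate in Hz
mono = np.random.randn(fs)                    # 1 s of noise as a stand-in source signal

hrir_left = np.zeros(256)
hrir_left[10] = 1.0                           # earlier, louder arrival at the left ear
hrir_right = np.zeros(256)
hrir_right[40] = 0.5                          # later, quieter arrival at the right ear

binaural = np.stack(
    [fftconvolve(mono, hrir_left),            # left headphone channel
     fftconvolve(mono, hrir_right)],          # right headphone channel
    axis=1,                                   # shape (samples, 2) for headphone playback
)
```

Replacing such a placeholder pair with a listener's own measured responses is precisely what the individualization methods listed above aim at.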
When we look at the amount of research going into the different methods, HRTF individualization has clearly received the lion's share.

Since actual HRTFs depend not only on the individual but also on the position of the sound source, an important cue for auditory illusion is the change of the sound when somebody moves the head or the listener moves within space. In the literature this is typically called dynamic binaural synthesis [32]. It has been implemented using head trackers and the selection, including interpolation, of the actual HRTFs [10]. Such systems are known for a much better auditory illusion and externalization. Another cue that helps with externalization is the actual reflection pattern in a room [3]. This is founded in the fact that reflections have a major influence on distance perception [24]. To include room acoustics in the sound delivered over headphones, Binaural Room Impulse Responses (BRIRs) are used. These transfer functions can be determined either by room acoustic measurement or by room acoustic simulation.

Regarding the lack of externalization, we find different theories in the literature. If we include the newer results on room divergence and adaptation (see the following chapters), the authors favor the following explanation: sounds in a room are localized via a complex interaction of simple auditory cues, the expectation in higher layers of the brain, and the recognition of sound patterns (including reflections) of known signals. Whenever there is too much divergence between the expected sound pattern and the actual sound delivered to the inner ear, there is a decreased probability of externalization. Context-dependent quality parameters like room divergence, presence of visual cues, and personalization of the system influence the perception of externalization [38]. From this it is clear that externalization is not just the result of using correct HRTFs etc., but also the result of complex cognitive interactions in the brain.

It is hypothesized that the build-up of the experienced quality is a cognitive process which includes expectations and preknowledge of the listener. Jekosch [16, 17], Blauert [5, 4], Raake [30, 31], and others [25] propose a quality formation process with two essential paths. The quality perception path is driven by the physical nature of the event which reaches the sensory organs. The perceived auditory event is created or constructed with respect to an internal reference. This process includes a comparison and judgment with the internal reference (or expectation) of each individual person. The reference path describes the time-dependent, context-dependent, multi-sensory, and cognitive influencing factors on the quality formation process. To transfer this knowledge to new applications, an extension of this process is proposed. The extension includes on the one hand the technical system that builds up the perceived quality and on the other hand feedback mechanisms from the perceived quality to the technical elements of the system [35]. The quality of the system can be described by the technical quality elements and the context of use of the system (context-dependent quality parameters like room divergence or personalization of the system).
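As a small illustration of the head-tracked selection step in dynamic binaural synthesis described above, the sketch below updates the source direction relative to the listener from a head-tracker yaw value and picks the nearest measured HRIR pair (or BRIR pair, if room reflections are to be included). The 5-degree azimuth grid, the random placeholder impulse responses, and all names are assumptions made for the example, not part of the systems cited in the text.

```python
# Minimal sketch of the HRIR selection step in dynamic binaural synthesis
# (illustrative assumptions: a 5-degree azimuth grid, random placeholder
# impulse responses, and a single yaw value from the head tracker).
import numpy as np

grid_deg = np.arange(0, 360, 5)                       # assumed measurement grid
hrir_set = {int(az): (np.random.randn(256),           # placeholder left-ear HRIR
                      np.random.randn(256))           # placeholder right-ear HRIR
            for az in grid_deg}

def select_hrir(source_az_deg: float, head_yaw_deg: float):
    """Pick the HRIR pair nearest to the source direction relative to the head."""
    relative_az = (source_az_deg - head_yaw_deg) % 360.0
    diff = ((grid_deg - relative_az + 180.0) % 360.0) - 180.0   # signed angular distance
    return hrir_set[int(grid_deg[np.argmin(np.abs(diff))])]

# Each time the tracker reports a new head orientation, the renderer switches
# (or interpolates) to the matching pair and keeps convolving the source block
# by block; using BRIRs instead of anechoic HRIRs adds the room's reflections.
hrir_l, hrir_r = select_hrir(source_az_deg=30.0, head_yaw_deg=12.5)
```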