Tutorial 26 Applications of Binaural Psychoacoustics in Audio — Designing Spatial Audio Techniques for Human Listeners

Ville Pulkki
Department of Signal Processing and Acoustics, Aalto University, Finland
June 6, 2016

Background literature
- "Communication Acoustics: An Introduction to Speech, Audio and Psychoacoustics", textbook, Ch. 12: Spatial hearing. Pulkki & Karjalainen, Wiley, 2015.
- "Parametric Time-Frequency Domain Spatial Audio", contributed book with 15 chapters. Eds. Pulkki, Delikaris-Manias and Politis, Wiley, early 2017.

Where and what?
- Localization of sources
- Beam-forming towards different directions
- (Figure: everyday sound sources, such as bird calls, hoofbeats, wind and rustling, surrounding a listener.)

Human eye
- The cells in the eye are a priori sensitive to the direction of light.
- The eye responds to a quite limited range of wavelengths (380–740 nm).

Human spatial hearing
- The ear responds to a very large range of wavelengths (2 cm – 30 m).
- The ear canal diameter is below 1 cm, so sound simply bends into the canal.
- One ear alone knows quite little about the spatial position of the source.

Human spatial hearing (figure © J. Blauert)
- Sources are localized by signal analysis in the brain.
- The cues are signal characteristics within one ear and signal differences between the two ears.
- The hearing mechanisms estimate the most probable position for the source.
- Hearing can be fooled easily by audio techniques!
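As a quick sanity check on the wavelength range quoted above (an illustrative calculation, not from the slides, assuming c ≈ 343 m/s and a nominal 20 Hz – 20 kHz hearing range):

```latex
\lambda = \frac{c}{f}, \qquad
\lambda_{20\,\mathrm{kHz}} = \frac{343\ \mathrm{m/s}}{20\,000\ \mathrm{Hz}} \approx 1.7\ \mathrm{cm}, \qquad
\lambda_{20\,\mathrm{Hz}} = \frac{343\ \mathrm{m/s}}{20\ \mathrm{Hz}} \approx 17\ \mathrm{m}
```

Audible wavelengths thus span roughly three orders of magnitude, from below the ear-canal diameter to far larger than the head, roughly in line with the 2 cm – 30 m range on the slide.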
This chapter
- Basic concepts
- Head-related acoustics
- Localization cues
- Localization accuracy
- Perception of direction in the presence of echoes
- Ability to listen selectively to a specific direction
- Distance perception

Coordinate system
- Azimuth-elevation coordinates and the median plane.
- (Figure: spherical coordinates with azimuth ϕ, elevation δ and distance r, together with the horizontal, median and frontal planes; ϕ = 0 points to the front.)

Coordinate system 2
- (Figure: the cone of confusion around the interaural axis, parameterized by the cone-of-confusion angle ϕ_cc and distance r_cc.)

Basic concepts
Listening conditions:
- Monaural hearing (hearing processes and cues that need the signal from only one ear)
- Binaural hearing (differences between the ear signals also have an effect)
Spatial sound reproduction methods:
- Monotic (signal fed to only one ear)
- Diotic (the same signal fed to both ears)
- Dichotic (different signals fed to the two ears)

Dichotic listening with headphones
- (Figure: one signal is fed to both headphone channels through separate delays τ_ph1, τ_ph2 and attenuators a_1, a_2.)

Head-related acoustics
- Let us consider only the free field first.
- How does the incoming sound change as it hits the listener?
- What is the transfer function from the source to the ear canal?
- In the time domain y_left(t) = h_left(t) * x(t); in the frequency domain Y_left = H_left X (a rendering sketch follows after these slides).

Head-related acoustics measurements
- Transfer function from the source to the ear canal.
- Place a microphone in the ear canal, or use a dummy head.
- Head-related transfer function (HRTF); head-related impulse response (HRIR).
- Depends heavily on the direction of the source.

Head-related transfer function (HRTF)
- Measured with a real subject for three directions.
- (Figure: left- and right-ear magnitude responses, 0 to -40 dB over 100 Hz – 10 kHz, for (ϕ, δ) = (0°, 0°), (60°, 0°) and (0°, 60°).)

Head-related impulse response (HRIR)
- The HRIR is the HRTF in the time domain; quite often HRIRs are also called HRTFs.
- (Figure: left- and right-ear HRIRs over 0–2 ms for the same three directions.)

HRTF in the horizontal plane
- (Figure: left-ear HRTF magnitude in dB as a function of azimuth and frequency, 0.1–12.8 kHz.)

HRIR in the horizontal plane
- (Figure: left-ear HRIR as a function of azimuth and time, 0–2 ms.)

HRTF in the median plane
- (Figure: left-ear HRTF magnitude in dB as a function of cone-of-confusion elevation, -45° to 225°, and frequency, 0.4–12.8 kHz.)
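To make the y_left(t) = h_left(t) * x(t) relation concrete, here is a minimal binaural-rendering sketch (not from the tutorial; the function name, file handling and HRIR variables are assumptions). Convolving a mono signal with a measured left/right HRIR pair for one direction yields the two ear signals for headphone playback:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(x, hrir_left, hrir_right):
    """Render a mono signal to two ear signals with one HRIR pair.

    x          : 1-D mono signal
    hrir_left  : HRIR measured at the left ear for the desired direction
    hrir_right : HRIR measured at the right ear for the same direction
    Returns an (N, 2) array holding y_left(t) and y_right(t).
    """
    y_left = fftconvolve(x, hrir_left)    # y_l(t) = h_l(t) * x(t)
    y_right = fftconvolve(x, hrir_right)  # y_r(t) = h_r(t) * x(t)
    y = np.stack([y_left, y_right], axis=1)
    return y / np.max(np.abs(y))          # normalise to avoid clipping

# Hypothetical usage (file names and the HRIR container are assumptions):
# x, fs = soundfile.read("speech.wav")                  # mono source signal
# y = render_binaural(x, hrir[("left", 60, 0)],         # azimuth 60, elev. 0
#                        hrir[("right", 60, 0)])
# soundfile.write("speech_binaural.wav", y, fs)
```

The frequency-domain form Y_left = H_left X describes the same operation; the convolution is simply a multiplication of the spectra.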
HRIR in the median plane
- (Figure: left-ear HRIR as a function of cone-of-confusion elevation, -45° to 225°, and time, 0–3.5 ms.)

Localization cues
- HRTFs impose some spatial information onto the ear-canal signals. What information do we dig out of it?
- Binaural localization cues: interaural time difference (ITD) and interaural level difference (ILD).
- Monaural localization cues: spectral cues.
- Dynamic cues.

Interaural time difference (ITD)
- Sound arrives a bit earlier at one ear than at the other.
- The ITD varies between -0.7 ms and +0.7 ms.
- The JND of the ITD is of the order of 20 µs.
- A spherical-head model gives τ = D/(2c) · (ϕ + sin ϕ), where D is the head diameter, c the speed of sound and ϕ the azimuth angle (a numerical sketch follows after these slides).
- (Figure: ITD computed from the equation compared with measured values, as a function of azimuth angle 0–90°.)

Interaural time difference (ITD), continued
- We are sensitive to phase differences between the carriers at low frequencies (about 200 Hz to 1600 Hz) and to time differences between the envelopes at higher frequencies (above about 1600 Hz).
- (Figure: left- and right-ear waveforms illustrating carrier ITD at low frequencies and envelope ITD at high frequencies.)

Interaural time difference, measured
- HRTFs measured from a human; the ITD of a noise source in the free field computed with an auditory model.
- (Figure: ITD in ms as a function of azimuth, -90° to 90°, and frequency, 0.2–18.2 kHz.)

Interaural level difference (ILD)
- The head shadows the incoming sound.
- The effect depends on the wavelength (no shadowing at low frequencies) and on the distance (larger ILD at very short distances, below 1 m).
- The ILD varies between -20 dB and +20 dB.
- The JND is of the order of 1 dB for sources near the median plane.

Interaural level difference, measured
- HRTFs measured from a human; the ILD of a sound source in the free field computed with an auditory model (a simplified band-wise estimate is sketched below).
- (Figure: ILD as a function of azimuth, -90° to 90°, and frequency, 0.2–18.2 kHz.)

Basic lateralization results
- Subjects report the position of the internalized auditory source on the interaural axis.
- (Figure: perceived lateralization as a function of the interaural delay and of the interaural level difference applied over headphones.)
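The azimuth dependence of the spherical-head ITD formula above is easy to tabulate. A small sketch (illustrative only; the head diameter of 0.18 m and c = 343 m/s are assumed values, not given on the slide):

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_diameter=0.18, c=343.0):
    """Spherical-head ITD estimate: tau = D/(2c) * (phi + sin(phi)).

    azimuth_deg   : source azimuth in degrees (0 = front, 90 = side)
    head_diameter : assumed head diameter in metres (not given on the slide)
    c             : speed of sound in m/s
    """
    phi = np.radians(azimuth_deg)
    return head_diameter / (2.0 * c) * (phi + np.sin(phi))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD {woodworth_itd(az) * 1e6:4.0f} us")
# azimuth  0 deg -> ITD    0 us
# azimuth 30 deg -> ITD  269 us
# azimuth 60 deg -> ITD  502 us
# azimuth 90 deg -> ITD  675 us
```

The value at 90° lands near the ±0.7 ms extreme quoted above, and the steps between neighbouring azimuths are large compared with the roughly 20 µs JND. Since d(ϕ + sin ϕ)/dϕ = 1 + cos ϕ is largest at ϕ = 0, the ITD changes fastest, and azimuth discrimination is finest, near the front.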
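The measured ILD surface mentioned above comes from an auditory model; the following is a much simpler stand-in (illustrative only, not the model used in the tutorial): the ILD is estimated as the band-wise level difference between the left- and right-ear HRTF magnitudes, here over plain octave bands.

```python
import numpy as np

def ild_per_band(hrir_left, hrir_right, fs, bands_hz, n_fft=4096):
    """Band-wise ILD estimate from one left/right HRIR pair.

    A crude stand-in for the auditory-model analysis on the slides:
    ILD(band) = 10*log10( sum|H_left|^2 / sum|H_right|^2 ) in each band.
    """
    f = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    P_left = np.abs(np.fft.rfft(hrir_left, n_fft)) ** 2
    P_right = np.abs(np.fft.rfft(hrir_right, n_fft)) ** 2
    ild = []
    for lo, hi in bands_hz:
        sel = (f >= lo) & (f < hi)
        ild.append(10.0 * np.log10(P_left[sel].sum() / P_right[sel].sum()))
    return np.array(ild)

# Hypothetical usage: hrir_l and hrir_r are measured HRIRs for one direction.
# bands = [(125, 250), (250, 500), (500, 1000), (1000, 2000),
#          (2000, 4000), (4000, 8000), (8000, 16000)]
# print(ild_per_band(hrir_l, hrir_r, fs=48000, bands_hz=bands))
```

For lateral directions the low-frequency bands should stay close to 0 dB (no head shadowing), with the magnitude growing toward the ±20 dB extremes at higher frequencies.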