Dynamic Diffuse Signal Processing for Sound Reinforcement and Reproduction


Item Type: Article
Authors: Moore, Jonathan B.; Hill, Adam J.
Citation: Moore, J.B., and Hill, A.J. (2018) 'Dynamic diffuse signal processing for sound reinforcement and reproduction', Journal of the Audio Engineering Society, 66 (11), pp. 953-965. doi: 10.17743/jaes.2018.0054
DOI: 10.17743/jaes.2018.0054
Publisher: Audio Engineering Society
Journal: Journal of the Audio Engineering Society
Download date: 28/09/2021 22:19:18
Link to Item: http://hdl.handle.net/10545/623149

PAPERS

Dynamic diffuse signal processing for sound reinforcement and reproduction

Jonathan B. Moore, AES Member, Adam J. Hill, AES Member
[email protected], [email protected]
Department of Electronics, Computing and Mathematics, University of Derby, Derby, DE22 1GB, United Kingdom

High inter-channel coherence between signals emitted from multiple loudspeakers can cause undesirable acoustic and psychoacoustic effects. Examples include position-dependent low-frequency magnitude response variation, where comb-filtering leads to the attenuation of certain frequencies dependent on path-length differences between multiple coherent sources; lack of apparent source width in multi-channel reproduction; and lack of externalization in headphone reproduction. This work examines a time-variant, real-time decorrelation algorithm for the reduction of coherence between sources, as well as between direct sound and early reflections, with a focus on minimization of low-frequency magnitude response variation. The algorithm is applicable to a wide range of sound reinforcement and reproduction applications, including those requiring full-band decorrelation. Key variables that control the balance between decorrelation and processing artifacts such as transient smearing are described and evaluated using a MUSHRA test.
Variable values which render the processing transparent whilst still providing decorrelation are discussed. Additionally, the benefit of transient preservation is investigated and is shown to increase transparency.

0 INTRODUCTION

In many sound reinforcement and reproduction scenarios, the desired audience sound coverage may only be achieved by using multiple electro-acoustic transducers emitting coherent signals at equal or nearly equal sound power levels. Where transducers are not arrayed in such a way that leads to acoustical coupling over their operational frequency range, any difference in path length from a listening position or acoustic measurement point to two or more loudspeakers will give a relative phase difference between the emitted signals [1]. The summation of such signals will result in a frequency response that is dependent on the path-length differences, with cancellation at frequencies where the phase difference equates to 180 degrees.

For a two-transducer system, the fundamental frequency of cancellation is given by Eq. 0.1:

    f_0 = \frac{1}{2t}    (0.1)

where f_0 is the fundamental frequency of cancellation (Hz) and t is the time difference of arrival (TDOA) between the two transducers (s).

Therefore, f_0 is inversely proportional to TDOA, meaning that a greater TDOA causes a lower fundamental frequency of cancellation. Additionally, any odd integer multiple of f_0 will also be subject to similar cancellation. This gives rise to the well-known comb-filtering effect. When TDOA is small, comb-filtering is limited to mid and high frequencies. Subjectively, this is experienced as timbral anomalies between the received and source signals. For large-scale sound reinforcement systems, path-length differences are regularly on the order of several meters, leading to comb-filtering commencing at low frequencies. In this case, there will exist frequency-dependent amplitude nulls spanning several meters. The overall subjective implication is that audience members will receive a magnitude response that both differs from the source material and is position-dependent.

Spatial variance (SV) quantifies the magnitude response variation over a pre-determined frequency range and audience area [2,3,4], given by Eq. 0.2:

    SV = \frac{1}{N_f} \sum_{i=f_{lo}}^{f_{hi}} \sqrt{\frac{1}{N_p - 1} \sum_{p=1}^{N_p} \left( L_p(p,i) - \bar{L}_p(i) \right)^2}    (0.2)

where SV is the spatial variance (dB), calculated from the number of frequency bins (N_f), the number of measurement points (N_p), the frequency range of interest (f_lo to f_hi, in Hz), the sound pressure level (dB) at measurement point p and frequency bin i, L_p(p,i), and the mean sound pressure level (dB) over all measurement points at frequency bin i, \bar{L}_p(i).

As described by Eq. 0.2, SV is the standard deviation of sound pressure level across all measurement points, averaged over the frequency bins of interest. An SV of 0 dB implies no deviation in magnitude response across a listening area, disregarding propagation loss if the responses are normalized.

A potential solution to high SV across an audience is to reduce inter-channel drive-signal coherence using decorrelation. In this case, the acoustic signals will sum by the powers of their amplitudes, since phase is randomized [5]. Should sufficient signal decorrelation be achieved, interference effects will be minimized and the resulting sound field will be approximately diffuse, characterized by a consistent magnitude response across an audience.

A signal decorrelation algorithm termed diffuse signal processing (DiSP), first described in [22], has been investigated in prior work by the authors [23, 24, 25]. It was found to be a useful tool for the decorrelation of multiple sources in sound reinforcement and reproduction applications. However, simulations in [24] showed that DiSP performance is reduced in closed acoustic spaces when decorrelation filters remain fixed. This is because direct sources maintain coherence with their early reflections, leading to comb-filtering.

Therefore, a time-varying DiSP algorithm, termed dynamic DiSP, was introduced in [24]. This work advances dynamic DiSP with the introduction of two key user-definable variables that may be used to balance dynamic decorrelation performance against processing perceptibility. A MUSHRA-style subjective test is presented to suggest suitable limits for these variables and to assess the transparency of the algorithm in comparison with unprocessed musical samples. Further to this, transient extraction prior to dynamic DiSP processing is utilized, and its impact on decorrelation versus perceptual transparency is investigated.

After a brief review of existing signal decorrelation methods in Section 1, DiSP is reviewed in Section 2, followed by a justification for the need for a time-varying, dynamic variant of DiSP capable of direct signal and early reflection decorrelation. Transient detection for the preservation of the input signal's sharp transient content is also investigated (Section 3). The algorithm is objectively analyzed in Section 4 using image-source modeling. This is followed [...]

[...] width [13], increased headphone externalization [29] and synthetic reverb [19]. Early algorithms were primarily used to produce a pseudo-stereophonic signal from a monophonic source. These methods rely on the generation of complementary comb-filters by use of either delay lines or all-pass filters [6, 7, 8]. Using this method, two independent signals may be generated whose summed magnitude response is proportional to the magnitude response of the input signal. However, these methods are not suitable for the applications discussed in this work, since only a limited number of sources may be decorrelated using this technique and perfect summation is only achieved in a limited sweet-spot [9, 10].

Other decorrelation methods have been developed for use in stereophonic echo-cancellation for voice conferencing [11, 12]. Unfortunately, these are also unsuitable for the specific sound reinforcement and reproduction applications in this research due to the level of distortion introduced and the limited number of decorrelated signals generated [13].

Kendall [14] describes a method of decorrelation filter generation whereby filter coefficients are obtained via an inverse Fourier transform of a frequency-domain specification of unity magnitude and random phase. This method allows for the generation of a large number of decorrelation filters which display low correlation with each other, allowing many discrete sources to be decorrelated. However, it was found that while unity magnitude and random phase are specified at each frequency bin, the resulting magnitude spectrum from the inverse Fourier transform is not uniform between these points, leading to timbral coloration [14, 15, 16].

Decorrelation has also been achieved with frequency-domain processing, where the signal is passed through a filter bank to divide it into critical frequency bands, with a random time shift applied to each band [15, 17, 18]. Depending on the magnitude of the random time shift calculated per frequency band, this method may result in frequency cancellation at band edges when the signal is reconstructed [15, 16]. This occurs when the time shifts equate to approximately 180 degrees phase difference between edge frequencies. This may be alleviated by constraining the time shift per band to multiples of 360 degrees phase shift for the edge frequency of each band [15]. However, this then limits the number of discrete sources which may be decorrelated. There will be a limited number of time shift values for each band that meet the criteria of being sufficient for decorrelation, not exceeding [...]
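The comb-filtering behavior behind Eq. 0.1 can be checked numerically. A minimal sketch, assuming two equal-amplitude coherent sources and an illustrative 1 ms TDOA (not a value taken from the paper):

```python
import numpy as np

def summed_magnitude(f, tdoa):
    """Magnitude of two equal-amplitude coherent sources summed at a
    listening point whose path lengths differ by `tdoa` seconds."""
    return np.abs(1.0 + np.exp(-2j * np.pi * f * tdoa))

tdoa = 0.001              # 1 ms TDOA (illustrative)
f0 = 1.0 / (2.0 * tdoa)   # Eq. 0.1: fundamental cancellation at 500 Hz

print(summed_magnitude(f0, tdoa))       # ~0: full cancellation at f0
print(summed_magnitude(3 * f0, tdoa))   # ~0: odd multiples of f0 also cancel
print(summed_magnitude(2 * f0, tdoa))   # ~2: even multiples sum constructively
```

Sweeping f over the audible range traces out the familiar comb: nulls spaced 1/t Hz apart starting at f0, which for metre-scale path differences sit well down in the low-frequency range.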
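Eq. 0.2 maps directly onto a few lines of NumPy. A sketch; the array-shape convention and the test levels are assumptions for illustration:

```python
import numpy as np

def spatial_variance(L):
    """Spatial variance (Eq. 0.2). `L` is an (Np, Nf) array of sound
    pressure levels in dB: Np measurement points by Nf frequency bins,
    already restricted to the f_lo..f_hi range of interest."""
    # Sample standard deviation over measurement points for each bin
    # (ddof=1 gives the Np - 1 denominator), averaged over frequency.
    return np.mean(np.std(L, axis=0, ddof=1))

# Identical responses at every point -> 0 dB SV (no spatial variation).
flat = np.tile(np.array([80.0, 77.0, 74.0]), (5, 1))  # 5 points, 3 bins
print(spatial_variance(flat))  # 0.0

# Position-dependent comb-filtering raises SV above 0 dB.
varied = flat + np.random.default_rng(0).normal(0, 3, flat.shape)
print(spatial_variance(varied) > 0)  # True
```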
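Kendall's filter-generation method [14], an inverse Fourier transform of unity magnitude and random phase, can be sketched as follows; the filter length and the rfft/irfft formulation here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def decorrelation_filter(n_taps, rng):
    """Time-domain decorrelation filter built from a frequency-domain
    specification of unity magnitude and random phase (after Kendall [14])."""
    n_bins = n_taps // 2 + 1
    phase = rng.uniform(-np.pi, np.pi, n_bins)
    phase[0] = 0.0    # DC bin must be real
    phase[-1] = 0.0   # Nyquist bin must be real for even n_taps
    spectrum = np.exp(1j * phase)  # unity magnitude everywhere
    return np.fft.irfft(spectrum, n_taps)

rng = np.random.default_rng(1)
h1 = decorrelation_filter(1024, rng)
h2 = decorrelation_filter(1024, rng)

# Each filter is all-pass at the bin centres...
print(np.allclose(np.abs(np.fft.rfft(h1)), 1.0))  # True

# ...and two independently generated filters are mutually near-uncorrelated.
print(abs(np.corrcoef(h1, h2)[0, 1]) < 0.1)
```

Note the caveat from the text: unity magnitude holds only at the bin centres, so the magnitude response between bins deviates from flat, which is the source of the timbral coloration reported in [14, 15, 16].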
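The 360-degree constraint on per-band time shifts [15] reduces to delays that are whole periods of each band's edge frequency, which makes the scarcity of usable shift values easy to see. A sketch with an assumed band edge and shift ceiling:

```python
def allowed_shifts(f_edge_hz, max_shift_s):
    """Time shifts equal to integer multiples of one period of the band
    edge frequency (i.e., 360-degree phase multiples), up to a ceiling."""
    period = 1.0 / f_edge_hz
    shifts = []
    k = 1
    while k * period <= max_shift_s:
        shifts.append(k * period)
        k += 1
    return shifts

# A band edge at 500 Hz with shifts capped at 9.5 ms leaves only four
# candidate values: 2, 4, 6 and 8 ms.
print(len(allowed_shifts(500.0, 0.0095)))  # 4
```

With so few admissible values per band, independently shifted channels quickly run out of distinct combinations, which is the limit on the number of decorrelated sources the text describes.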