Spatial Audio for Soundscape Design: Recording and Reproduction

Total Pages: 16

File Type: PDF, Size: 1020 KB

Spatial Audio for Soundscape Design: Recording and Reproduction (Applied Sciences, Review)

Joo Young Hong 1,*, Jianjun He 2, Bhan Lam 1, Rishabh Gupta 1 and Woon-Seng Gan 1

1 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore; [email protected] (B.L.); [email protected] (R.G.); [email protected] (W.-S.G.)
2 Maxim Integrated Products Inc., San Jose, CA 95129, USA; [email protected]
* Correspondence: [email protected]; Tel.: +65-8429-7512

Academic Editor: Gino Iannace
Received: 29 March 2017; Accepted: 9 June 2017; Published: 16 June 2017

Featured Application: This review introduces the concept of spatial audio from the perspective of soundscape practitioners. A selection guide based on the spatial fidelity and degree of perceptual accuracy of the spatial audio recording and reproduction techniques discussed is also provided.

Abstract: With the advancement of spatial audio technologies, in both recording and reproduction, we are seeing more applications that incorporate 3D sound to create an immersive aural experience. Soundscape design and evaluation for urban planning can now tap into the extensive spatial audio tools for sound capture and 3D sound rendering over headphones and speaker arrays. In this paper, we outline a list of available state-of-the-art spatial audio recording techniques and devices, spatial audio physical and perceptual reproduction techniques, and emerging spatial audio techniques for virtual and augmented reality, followed by a discussion on the degree of perceptual accuracy of recording and reproduction techniques in representing the acoustic environment.

Keywords: soundscape; spatial audio; recording; reproduction; virtual reality; augmented reality

1. Introduction

Urban acoustic environments consist of multiple types of sound sources (e.g., traffic sounds, biological sounds, geophysical sounds, human sounds) [1].
The different types of sound sources have different acoustical characteristics, meanings, and values [2–4]. Conventionally, environmental noise policies have been primarily centered on the energetic reduction of sound pressure levels (SPL). However, SPL indicators provide limited information on perceived acoustic comfort, as comfort involves higher cognitive processes. To address the limitations of traditional noise management, the notion of a soundscape has been applied as a new paradigm. Schafer, a Canadian composer and music educator, introduced the term soundscape to encompass a holistic acoustic environment as a macrocosmic musical composition [5]. In this context, soundscape considers sound as a resource rather than waste and focuses on people's contextual perception of the acoustic environment [6]. Due to the rising prominence of the soundscape approach, ISO TC43 SC1 WG54 was started with the aim of standardizing the perceptual assessment of soundscapes. As defined in ISO 12913-1 (definition and conceptual framework), the acoustic environment is "sound from all sound sources as modified by the environment", where the modification by the environment includes the effects of various physical factors (e.g., meteorological conditions, absorption, diffraction, reverberation, and reflection) on sound propagation. This implies that a soundscape is a perceptual construct related to the physical acoustic environment in a place.

Appl. Sci. 2017, 7, 627; doi:10.3390/app7060627; www.mdpi.com/journal/applsci

According to ISO 12913-1, the context plays a critical role in the perception of a soundscape, as it affects the auditory sensation, the interpretation of auditory sensation, and the responses to the acoustic environment [7]. The context includes all other non-acoustic components of the place (e.g., physical factors as well as the previous experience of the individual). Herranz-Pascual et al. [8] proposed a people-activity-place framework for contexts based on four clusters: person, place, person-place interaction, and activity. The suggested framework shows interrelationships between person, activity, and place, which may influence a person's experience of the acoustic environment. It is evident from [8] that the human auditory process is deeply entwined in the person-activity-place framework. Thus, sufficient insight into the human auditory process is imperative to record and reproduce an acoustic environment with sufficient perceptual accuracy for proper analysis. For simplicity, Zwicker and Fastl have grouped the human auditory processing system into two stages: (1) preprocessing in the peripheral system, and (2) information processing in the auditory system [9].

Current reproduction methods (e.g., loudspeakers, headphones, etc.) have sufficient fidelity for proper interpretation of frequency and temporal characteristics in the peripheral system (stage 1). As the spatial dimension of sound is only interpreted in stage 2, where there is significant exchange of information between both ears, the complexity of reproduction is greatly increased. For instance, to locate a sound source in space, the characteristics of intensity, phase, and latency must be presented accurately to both ears. Hence, spatial audio recording and reproduction techniques should be reviewed from the technological perspective with a focus on their relationship with soundscape perception—the main goal of this paper.

As recording and reproduction techniques are heavily employed in the soundscape design process, a brief discussion will shed light on the areas where such techniques are most commonly used. The soundscape design process can be summarized into three stages, as illustrated in Figure 1 [10]. Stage 1 aims to define and analyze existing soundscapes. In this stage, soundscape researchers and planners are required to evaluate and identify 'wanted sounds' to be preserved or added, and 'unwanted sounds' to be removed or reduced based on the context. In Stage 2, soundscape planning and design scenarios are proposed based on the analysis of the existing soundscapes. The soundscape design proposals will then be simulated to be objectively and subjectively evaluated by stakeholders to determine the final soundscape design. In Stage 3, the final soundscape design will be implemented in situ. After implementation, validation of the soundscape design will be performed, with iteration of Stage 1 for analysis of the implemented soundscape design.

Throughout the soundscape design process, soundscapes can be evaluated based on the real acoustic environment or a virtual (reproduced or synthesized) environment [6,11]. In Stage 1 (analysis of existing soundscape) and Stage 3 (implementation of soundscape design), soundscapes can be assessed in real or reproduced acoustic environments. The reproduced acoustic environment is constructed from audio recordings made in the real world. If the physical location for soundscape implementation is not available (i.e., not built), future soundscape design scenarios might also be evaluated using a synthesized (or simulated) acoustic environment in Stage 2 (soundscape planning
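The interaural cues mentioned above (differences in intensity, phase, and arrival time at the two ears) can be illustrated with a toy calculation. The sketch below uses Woodworth's classic spherical-head approximation for the interaural time difference (ITD); the head radius and azimuth values are illustrative assumptions, not numbers from the paper.

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a distant
    source, using Woodworth's spherical-head formula:
        ITD = (a / c) * (sin(theta) + theta)
    where a is the head radius, c the speed of sound, and theta the
    source azimuth in radians (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

# A source straight ahead produces no ITD; a source directly to one
# side (90 degrees) produces the maximum ITD, roughly 0.66 ms.
itd_front = woodworth_itd(0.0)
itd_side = woodworth_itd(90.0)
```

A reproduction system that distorts these sub-millisecond timing cues (or the matching level cues) degrades exactly the stage-2 spatial interpretation the excerpt describes.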
Recommended publications
  • Audio for Virtual and Augmented Reality
    2016 AES Conference on Audio for Virtual and Augmented Reality, Friday, Sept 30 through Saturday, Oct 1, Los Angeles Convention Center. Conference Program. Platinum Sponsors, Gold Sponsors, Sponsors. Message from the Conference Co-Chairs: Welcome to the first AES International Conference on Audio for Virtual and Augmented Reality! We are really proud to present this amazing technical program, which is the result of many months of extremely hard teamwork. We aimed for the best content we could possibly provide, and here you have it. We are extremely thankful to our great presenters, authors, keynote speakers and sponsors. Together, we made this possible, and we sincerely hope you will take away a lot of useful information. I'd also like to extend our special thanks to our delegates, coming from all over the world to attend this truly unique event, and to our really hard-working team of volunteers, who ultimately made it possible to pack this awesome quality and quantity of knowledge into two full days crammed with papers, workshops, tutorials and even a technical showcase. Welcome to the show! (Andres Mayo, Conference Co-chair.) I would like to extend a warm welcome to all of our delegates, authors, presenters and sponsors. This conference has been a dream of Andres' and mine since May of 2015. The world of VR/AR has grown so quickly, so fast, that we knew we had to bring a conference dedicated specifically to this topic to the audio community. We could not have done this without the hard work and dedication of an incredible conference committee.
  • TV in VR Changho Choi, Peter Langner, Praveen Reddy, Satender Saroha, Sunil Srinivasan, Naveen Suryavamsh
    TV in VR. Changho Choi, Peter Langner, Praveen Reddy, Satender Saroha, Sunil Srinivasan, Naveen Suryavamsh. Introduction: The evolution of storytelling has gone through various phases. The earliest known methods were through plain text. Plays and theatres were used to bring some of these stories to life, but for the most part, artists relied on their audience to imagine the fictional worlds they were describing. Illustrations were a nice addition to help visualize an artist's perception. With the advent of cinema in the early 1900s, starting with silent films through to the current summer blockbusters with their CGI, 3D and surround sound, viewers are transported into these imaginary worlds, to experience them just as the creators of the content envisioned. Virtual reality, with its ability to provide an immersive medium with a sense of presence and depth, is the next frontier of storytelling. Seminal events in history: Moon landing in 1969. When Neil Armstrong and Buzz Aldrin took the first steps on the moon, it captured the imagination of the world. The culmination of a grand vision and the accompanying technological breakthroughs brought about an event that transfixed generations to come. When it happened in 1969, the enabling technology for experiencing this event was the trusted radio, or grainy television broadcasts with anchors describing the events as they were described to them! Super Bowl 49: As the Seattle Seahawks stood a yard away from winning the Super Bowl in 2015, 115 million people watched on NBC in the United States alone. In front of their big-screen TVs, with every possible option explained to them by the commentators, the casual and the rabid football fan alike watched as the Seahawks lost due to a confluence of events.
  • Optimal Crosstalk Cancellation for Binaural Audio with Two Loudspeakers
    Optimal Crosstalk Cancellation for Binaural Audio with Two Loudspeakers. Edgar Y. Choueiri, Princeton University, [email protected]. Crosstalk cancellation (XTC) yields high-spatial-fidelity reproduction of binaural audio through loudspeakers, allowing a listener to perceive an accurate 3-D image of a recorded soundfield. Such accurate 3-D sound reproduction is useful in a wide range of applications in the medical, military and commercial audio sectors. However, XTC is known to add a severe spectral coloration to the sound, and that has been an impediment to the wide adoption of loudspeaker-based binaural audio. The nature of this coloration in two-loudspeaker XTC systems, and the fundamental aspects of the regularization methods that can be used to optimally control it, were studied analytically using a free-field two-point-source model. It was shown that constant-parameter regularization, while effective at decreasing coloration peaks, does not yield optimal XTC filters, and can lead to the formation of roll-offs and doublet peaks in the filter's frequency response. Frequency-dependent regularization was shown to be significantly better for XTC optimization, and was used to derive a prescription for designing optimal two-loudspeaker XTC filters, whereby the audio spectrum is divided into adjacent bands, each of which is associated with one of three XTC impulse responses, which were derived analytically. Aside from the sought fundamental insight, the analysis led to the formulation of band-assembled XTC filters, whose optimal properties favor their practical use for enhancing the spatial realism of two-loudspeaker playback of standard stereo recordings containing binaural cues.
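The constant-parameter regularization described in the abstract above can be sketched in a few lines. This is a minimal illustration under the same kind of free-field two-point-source model, not Choueiri's actual filter design; the geometry, the two-point "head", and the regularization constant beta are illustrative assumptions.

```python
import numpy as np

EARS = np.array([[-0.075, 0.0], [0.075, 0.0]])  # two-point "head", 15 cm apart (assumption)

def plant_matrix(f, span=0.3, dist=1.6, c=343.0):
    """Free-field 2x2 plant at frequency f: H[e, s] = exp(-j*k*d)/d
    for the path from loudspeaker s to ear e (point sources, no head)."""
    spks = np.array([[-span / 2.0, dist], [span / 2.0, dist]])
    d = np.linalg.norm(EARS[:, None, :] - spks[None, :, :], axis=2)
    k = 2.0 * np.pi * f / c
    return np.exp(-1j * k * d) / d

def xtc_filter(f, beta=0.005):
    """Constant-parameter (Tikhonov) regularized inverse of the plant:
    F = (H^H H + beta*I)^-1 H^H. beta = 0 gives exact inversion;
    beta > 0 bounds the filter gain at the cost of residual crosstalk."""
    H = plant_matrix(f)
    return np.linalg.solve(H.conj().T @ H + beta * np.eye(2), H.conj().T)

# With beta = 0 the cascade plant @ filter is the identity matrix, i.e.
# each program channel reaches only its own ear (perfect cancellation).
E = plant_matrix(1000.0) @ xtc_filter(1000.0, beta=0.0)
```

Raising beta tames the huge filter gains (and hence the coloration peaks) near frequencies where the plant is nearly singular, which is exactly the trade-off the abstract analyzes; making beta frequency-dependent is the refinement it advocates.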
  • Practises in Listener Envelopment with Controllable Spatial Audio
    DIGITAL AUDIO SYSTEMS, BRUNO MARION, WRITTEN REVIEW 2, 440566533. Practises in Listener Envelopment with Controllable Spatial Audio. Introduction: The envelopment of the listener in the sound field can be created in a number of ways. The two most important factors for the Binaural Quality Index of an acoustic space, be it real or virtual, are the Auditory Source Width (ASW) and the Listener Envelopment (LEV) factor (Cardenas, et al., 2012). Multiple solutions exist for increasing ASW and LEV; those discussed here are Natural Sounding Artificial Reverberation, Recirculating Delays, Higher-Order Ambisonics and finally Higher-Order Speakers. Reverberation: A computationally efficient artificial reverberation can be achieved by using an exponentially decaying impulse and convolving it with the original audio signal. Inverting a duplicate of this signal can also give a stereo effect, but when compared to reality, this single-impulse method does not come close to creating the complex interactions of plane waves in a given space, as each time a wave reflects from a surface or another wave, its properties are affected as a function of frequency, phase and amplitude. The two main problems cited by Manfred Schroeder with artificial reverberation are: 1. a non-flat amplitude frequency response, and 2. low echo density (Schroeder, 1961). Schroeder offers solutions to these problems by use of an allpass filter. An allpass filter is a method of filtering an audio signal without manipulating its other properties and can be constructed by combining a Finite Impulse Response (FIR) feedforward path with feedback. Figure 1: The allpass filter, where fc is the cutoff frequency, fcl the lower cutoff frequency and fch the higher cutoff frequency.
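The Schroeder allpass section described above takes only a few lines of code. This is a minimal sketch with illustrative delay and gain values; practical reverberators cascade several such sections with mutually prime delay lengths.

```python
def schroeder_allpass(x, delay, g):
    """Schroeder allpass section: y[n] = -g*x[n] + x[n-M] + g*y[n-M].
    It passes all frequencies at equal magnitude while smearing the
    time response, raising echo density without adding coloration."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

# Impulse response: a direct term -g, then echoes 1 - g^2, g*(1 - g^2), ...
# spaced `delay` samples apart and decaying geometrically.
h = schroeder_allpass([1.0] + [0.0] * 15, delay=4, g=0.5)
```

This directly addresses Schroeder's two cited problems: the structure is allpass by construction (flat amplitude response), and the recirculating feedback multiplies the echo count.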
  • Franz Zotter and Matthias Frank, Ambisonics: A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality
    Springer Topics in Signal Processing, Volume 19. Series Editors: Jacob Benesty, INRS-EMT, University of Quebec, Montreal, QC, Canada; Walter Kellermann, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany. The aim of the Springer Topics in Signal Processing series is to publish very high quality theoretical works, new developments, and advances in the field of signal processing research. Important applications of signal processing will be covered as well. Within the scope of the series are textbooks, monographs, and edited books. More information about this series at http://www.springer.com/series/8109. Franz Zotter and Matthias Frank, Ambisonics: A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality. Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, Austria. ISSN 1866-2609; ISSN 1866-2617 (electronic). ISBN 978-3-030-17206-0; ISBN 978-3-030-17207-7 (eBook). https://doi.org/10.1007/978-3-030-17207-7. © The Editor(s) (if applicable) and The Author(s) 2019. This book is an open access publication, licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
  • Accurate Reproduction of Binaural Recordings Through Individual Headphone Equalization and Time Domain Crosstalk Cancellation
    PROCEEDINGS of the 23rd International Congress on Acoustics, 9 to 13 September 2019, Aachen, Germany. Accurate reproduction of binaural recordings through individual headphone equalization and time domain crosstalk cancellation. David Griesinger, David Griesinger Acoustics, Cambridge, MA, USA. ABSTRACT: Assessing the acoustic quality of spaces of all sizes depends on methods that can instantly and precisely compare the sound of different seats and spaces. Complex systems using many loudspeakers have been developed that hopefully achieve this goal. Binaural technology offers a simpler solution. The sound pressure at a listener's eardrums can be measured with probe microphones, and reproduced with headphones or speakers calibrated at the eardrums. When carefully done, the scene is precisely reproduced. But due to the variability of ear canal resonances, such recordings and playbacks are highly individual. In this paper we present methods that are non-individual. Our recordings from a dummy head or from the eardrums are equalized to be essentially frequency linear to sound sources in front, giving the recording head the frontal frequency response of studio microphones. The recordings are then played back either through headphones equalized at the eardrum to match the response of a frontal source, or with a simple, non-individual crosstalk cancelling system. We equalize headphones at an individual's eardrums using equal loudness measurements, and generate crosstalk cancellation with a non-individual algorithm in the time domain. Software apps and plug-ins that enable headphone equalization and crosstalk cancellation on computers and cellphones are now available. Keywords: Binaural, Headphones, Crosstalk. 1. INTRODUCTION: This paper concerns methods for binaurally recording and later precisely reproducing the sound in a hall or room.
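The equalization idea described above (flattening a measured response so playback matches a frontal source) can be sketched as a regularized spectral inversion. This is not Griesinger's actual procedure; the toy measured response and the regularization constant are illustrative assumptions.

```python
import numpy as np

def inverse_eq_filter(measured_ir, beta=1e-3):
    """Linear-phase equalizer whose magnitude is the regularized inverse
    of the measured magnitude response:
        |EQ(f)| = |H(f)| / (|H(f)|^2 + beta),
    so that deep notches are not boosted without bound."""
    n = len(measured_ir)
    mag = np.abs(np.fft.rfft(measured_ir))
    inv_mag = mag / (mag ** 2 + beta)
    # A real, nonnegative spectrum inverts to a zero-phase (symmetric)
    # impulse response; rolling by n/2 makes it a causal linear-phase FIR.
    return np.roll(np.fft.irfft(inv_mag, n), n // 2)

# Toy "headphone" response: direct sound plus one strong reflection.
ir = np.zeros(256)
ir[0] = 1.0
ir[20] = 0.5
eq = inverse_eq_filter(ir)
```

Cascading `ir` with `eq` leaves a nearly flat magnitude response for this toy example; in practice the measured response would come from probe-microphone or equal-loudness measurements as the abstract describes.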
  • Scene-Based Audio and Higher Order Ambisonics
    Scene-Based Audio and Higher Order Ambisonics: A technology overview and application to Next-Generation Audio, VR and 360° Video. November 2019. Ferdinando Olivieri, Nils Peters, Deep Sen, Qualcomm Technologies Inc., San Diego, California, USA. The main purpose of an EBU Technical Review is to critically examine new technologies or developments in media production or distribution. All Technical Reviews are reviewed by one (or more) technical experts at the EBU or externally, and by the EBU Technical Editions Manager. Responsibility for the views expressed in this article rests solely with the author(s). To access the full collection of our Technical Reviews, please see: https://tech.ebu.ch/publications. If you are interested in submitting a topic for an EBU Technical Review, please contact: [email protected]. 1. Introduction: Scene-Based Audio (SBA) is a set of technologies for 3D audio that is based on Higher Order Ambisonics (HOA). HOA is a technology that allows for accurate capturing, efficient delivery, and compelling reproduction of 3D audio sound fields on any device, such as headphones, arbitrary loudspeaker configurations, or soundbars. We introduce SBA and we describe the workflows for production, transport and reproduction of 3D audio using HOA. The efficient transport of HOA is made possible by state-of-the-art compression technologies contained in the MPEG-H Audio standard. We discuss how SBA and HOA can be used to successfully implement Next Generation Audio systems, and to deliver any combination of TV, VR, and 360° video experiences using a single audio workflow. 1.1 List of abbreviations & acronyms: CBA, Channel-Based Audio; SBA, Scene-Based Audio; HOA, Higher Order Ambisonics; OBA, Object-Based Audio; HMD, Head-Mounted Display; MPEG, Motion Picture Experts Group (also the name of various compression formats); ITU, International Telecommunications Union; ETSI, European Telecommunications Standards Institute.
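The layout independence at the heart of Scene-Based Audio is easy to show at first order: a mono source is encoded into spherical-harmonic channels that carry its direction, independent of any loudspeaker setup. The sketch below uses the traditional first-order B-format (FuMa) convention; the angles are illustrative.

```python
import math

def encode_foa(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order B-format (FuMa):
    W is the omnidirectional channel (with the conventional -3 dB
    weight); X, Y, Z are figure-of-eight channels pointing front,
    left, and up respectively."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (
        sample / math.sqrt(2.0),               # W
        sample * math.cos(az) * math.cos(el),  # X
        sample * math.sin(az) * math.cos(el),  # Y
        sample * math.sin(el),                 # Z
    )

# A source straight ahead excites only W and X; rotating the whole
# scene later is just a matrix applied to (X, Y, Z), with no need to
# know the playback layout at encoding time.
w, x, y, z = encode_foa(1.0, 0.0, 0.0)
```

Higher orders add more spherical-harmonic channels in the same spirit, sharpening the spatial resolution that the decoder can reconstruct.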
  • People & Planet Report 2016
    People & Planet Report 2016. 1.0 About this report. We would like to thank you for reading the Nokia People & Planet Report 2016. The report presents and discusses the key ethical, environmental, and socio-economic issues most material to our business and stakeholders during the 2016 fiscal year. The scope of this report is Nokia Group, including Nokia's Networks business groups, Nokia Technologies, and Group Common and Other Functions, in 2016. However, the numeric data regarding our facilities' energy use, waste, and water include the whole Nokia Group. The chapters 'Improving people's lives with technology' and 'Making change happen together' include references to activities that took place in early 2017, but this is indicated in the text separately. For an explanation of how we chose what to include in this year's report, please refer to the section Materiality: Identifying our key priorities. We have published annual corporate responsibility reports since 1999, and the reports are available in digital format on our website, from as far back as 2003, at nokia.com/sustainability. We also discuss sustainability and corporate responsibility topics in our official annual reports, including the annual report on Form 20-F that is filed with the U.S. We welcome your views and questions on our activities and our performance. If you would like to share your opinions, please contact us at the address given in the report. At the end of 2015, our shareholders voted overwhelmingly to approve the Alcatel-Lucent
  • Spatialized Audio Rendering for Immersive Virtual Environments
    Spatialized Audio Rendering for Immersive Virtual Environments. Martin Naef (Computer Graphics Laboratory, ETH Zurich, Switzerland), Oliver Staadt (Computer Science Department, University of California, Davis, USA), Markus Gross (Computer Graphics Laboratory, ETH Zurich, Switzerland); +41-1-632-7114, +1-530-752-4821; {naef, grossm}@inf.ethz.ch, [email protected]. ABSTRACT: We present a spatialized audio rendering system for use in immersive virtual environments. The system is optimized for rendering a sufficient number of dynamically moving sound sources in multi-speaker environments using off-the-shelf audio hardware. Based on simplified physics-based models, we achieve a good trade-off between audio quality, spatial precision, and performance. Convincing acoustic room simulation is accomplished by integrating standard hardware reverberation devices as used in the professional audio and broadcast community. We elaborate on important design principles for audio rendering as well as on practical implementation issues. Moreover, we describe the integration of the audio rendering pipeline into a scene graph-based virtual reality toolkit. We are currently developing a novel networked SID, the blue-c, which combines immersive projection with multi-stream video acquisition and advanced multimedia communication. Multiple interconnected portals will allow remotely located users to meet, to communicate, and to collaborate in a shared virtual environment. We have identified spatial audio as an important component for this system. In this paper, we present an audio rendering pipeline and system suitable for spatially immersive displays. The pipeline is targeted at systems which require a high degree of flexibility while using inexpensive consumer-level audio hardware. For successful deployment in immersive virtual environments, the audio rendering pipeline has to meet several important requirements:
  • Localization of 3D Ambisonic Recordings and Ambisonic Virtual Sources
    Localization of 3D Ambisonic Recordings and Ambisonic Virtual Sources. Sebastian Braun and Matthias Frank, Universität für Musik und darstellende Kunst Graz, Austria, Institut für Elektronische Musik und Akustik. Email: [email protected]. Abstract: The accurate recording and reproduction of real 3D sound environments is still a challenging task. In this paper, the two most established 3D microphones are evaluated: the Soundfield SPS200 and MH Acoustics' Eigenmike EM32, according to 1st and 4th order Ambisonics. They are compared to virtually encoded sound sources of the same orders in a localization test. For the reproduction, an Ambisonics system with 24 (12+8+4) hemispherically arranged loudspeakers is used. In order to compare to existing results from the literature, this paper focuses on sound sources in the horizontal plane. As expected, the 4th order sources yield better localization than the 1st order sources. Within each order, the real recordings and the virtual encoded sources show a good correspondence. Introduction: Surround sound systems have entered movie theaters and our living rooms during the last decades, but these systems are still limited to horizontal sound reproduction. Recently, further developments, such as 22.2 [1] and AURO 3D [2], expanded to the third dimension by adding height channels. The third dimension involves new challenges in panning and recording [3]. VBAP [4] is the simplest method for panning and at the same time the most flexible one, as it works with arbitrary loudspeaker arrangements. Coordinate system: Fig. 1 shows the coordinate system used in this paper. The directions θ are vectors of unit length that depend on the azimuth angle φ and the zenith angle ϑ (ϑ = 90° in the horizontal plane): θ(φ, ϑ) = (cos φ sin ϑ, sin φ sin ϑ, cos ϑ)ᵀ. (1)
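Equation (1) above translates directly into code; the sketch below returns the unit direction vector for a given azimuth and zenith angle (degrees are used for convenience, which is an added detail rather than part of the original notation).

```python
import math

def direction(azimuth_deg, zenith_deg):
    """Unit direction vector of Eq. (1):
    theta(phi, zenith) = (cos(phi)sin(zenith), sin(phi)sin(zenith), cos(zenith)).
    A zenith angle of 90 degrees lies in the horizontal plane."""
    phi = math.radians(azimuth_deg)
    zen = math.radians(zenith_deg)
    return (
        math.cos(phi) * math.sin(zen),
        math.sin(phi) * math.sin(zen),
        math.cos(zen),
    )

# Horizontal-plane directions have zero z-component; every direction
# has unit length regardless of the angles.
v = direction(30.0, 90.0)
```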
  • Live Delivery of Neurosurgical Operating Theater Experience in Virtual Reality
    Live delivery of neurosurgical operating theater experience in virtual reality. Marja Salmimaa (SID Senior Member), Jyrki Kimmel (SID Senior Member), Tero Jokela, Peter Eskolin, Toni Järvenpää (SID Member), Petri Piippo, Kiti Müller, Jarno Satopää. Abstract: A system for assisting in microneurosurgical training and for delivering interactive mixed reality surgical experience live was developed and experimented with on hospital premises. An interactive experience from the neurosurgical operating theater was presented, together with associated medical content, on virtual reality eyewear of remote users. Details of the stereoscopic 360-degree capture, surgery imaging equipment, signal delivery, and display systems are presented, and the presence experience and the visual quality questionnaire results are discussed. The users reported positive scores on the questionnaire on topics related to the user experience achieved in the trial. Keywords: virtual reality, 360-degree camera, stereoscopic VR, neurosurgery. DOI # 10.1002/jsid.636. 1 Introduction: Virtual reality (VR) imaging systems have been developed in the last few years with great professional and consumer interest.1 These capture devices have either two or a few more camera modules, providing only a monoscopic view, or, for instance, eight or more cameras to image the surrounding environment in stereoscopic, three-dimensional (3-D) fashion. In many cases, delivering a sufficient-quality VR experience is important yet challenging. In general, perceptual dimensions affecting the quality of experience can be categorized into primary dimensions such as picture quality, depth quality, and visual comfort, and into additional ones, which include, for example, naturalness and sense of presence.5 VR systems as such introduce various features potentially affecting the visual experience and the perceived quality of
  • (Virtual) Reality: The OZO Allows the Director and Crew to View Their Shots, Adjust Exposure and Mark Locations, All Live – Stitch for Video Europe
    Virtual reality/360º. One early lesson I learned was the difference between 360 video and VR. Frankly, I was a bit disappointed to discover that, from a VR purist's point of view, what I'd been shooting was not actually VR. It was merely 360 video. VR, you see, has to be as good as its name: virtual reality. And reality, as we all know, is 3D. Hence, true VR cameras capture video in stereo, with multiple left and right lenses capturing the landscape twice, just as our eyes do, to be recombined in the headset later to create a sense of depth. Without this vital element of immersion, it might still be a fascinating wrap-around experience, but it isn't VR. "Stitch lines are like fault lines in the Earth's crust, liable to erupt at a moment's notice with a double image, or a ghost, or a horrible jutting chin where someone's face has been imperfectly duplicated in two lenses." The essential stitch: Nevertheless, whether it's VR or 360 video, the stitching process is the same and there are some universal truths that will probably remain valid for a few more years. One hard truth is that any algorithm that automatically stitches images together will not give perfect results every time. Lining these up, the computer is able to stitch two separate images together, slowly building up a 360-degree panorama. The annoying thing is that the control points which work for one frame may not work for the next. From moment to moment, the relationship between objects changes – people walk past, trees sway in the wind, inquisitive little kids approach the camera and
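The control-point idea described above (finding where two overlapping camera views line up, then stitching at that offset) can be illustrated in one dimension with a cross-correlation search. The toy strips and the helper below are a hypothetical sketch, not taken from any real stitching software.

```python
import numpy as np

def best_offset(a, b):
    """Return the shift of strip b relative to strip a that maximizes
    their cross-correlation: a crude 1-D stand-in for matching control
    points between two camera views before stitching."""
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Two "cameras" see overlapping 1-D slices of the same scene.
rng = np.random.default_rng(7)
scene = rng.random(200)
left = scene[:160]     # first view: samples 0..159
right = scene[40:]     # second view: samples 40..199

offset = best_offset(left, right)   # the second view starts 40 samples in
```

The frame-to-frame fragility the article complains about shows up here too: if the overlapping content changes between frames (people walking, trees swaying), the correlation peak found for one frame no longer fits the next.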