Audiovisual Spatial Congruence, and Applications to 3D Sound and Stereoscopic Video


Audiovisual spatial congruence, and applications to 3D sound and stereoscopic video

Cédric R. André
Department of Electrical Engineering and Computer Science
Faculty of Applied Sciences
University of Liège, Belgium

Thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy (PhD) in Engineering Sciences

December 2013

© University of Liège, Belgium

Abstract

While 3D cinema is becoming increasingly established, little effort has focused on the general problem of producing a 3D sound scene spatially coherent with the visual content of a stereoscopic-3D (s-3D) movie. The perceptual relevance of such spatial audiovisual coherence is of significant interest. In this thesis, we investigate the possibility of adding spatially accurate sound rendering to regular s-3D cinema. Our goal is to provide a perceptually matched sound source at the position of every object producing sound in the visual scene. We examine and contribute to the understanding of the usefulness and the feasibility of this combination.

By usefulness, we mean that the technology should positively contribute to the experience, and in particular to the storytelling. In order to carry out experiments proving the usefulness, it is necessary to have an appropriate s-3D movie and its corresponding 3D audio soundtrack. We first present the procedure followed to obtain this joint 3D video and audio content from an existing animated s-3D movie, the problems encountered, and some of the solutions employed. Second, as s-3D cinema aims at providing the spectator with a strong impression of being part of the movie (sense of presence), we investigate the impact of the spatial rendering quality of the soundtrack on the reported sense of presence. The short 3D audiovisual content is presented with three different soundtracks. These soundtracks differ in their spatial rendering quality, from stereo (low spatial coherence) to Wave Field Synthesis (WFS, high spatial coherence). The original stereo version serves as a reference. Results show that the sound condition does not affect the reported sense of presence of all participants. However, participants can be classified according to three different levels of presence sensitivity, with the sound condition having an effect only at the highest level (12 out of 33 participants). Within this group, the spatially coherent soundtrack provides a lower reported sense of presence than the other custom soundtrack. The analysis of the participants' heart rate variability (HRV) shows that the frequency-domain parameters correlate with the reported presence scores.

By feasibility, we mean that a large portion of the spectators in the audience should benefit from this new technology. In this thesis, we explain why the combination of accurate sound positioning and stereoscopic-3D images can lead to an incongruence between the sound and the image for multiple spectators. We then adapt to s-3D viewing a method originally proposed in the literature for 2D images in order to reduce this error. Finally, a subjective experiment is carried out to prove the efficiency of the method. In this experiment, an angular error between an s-3D video and a spatially accurate sound reproduced through WFS is simulated. The psychometric curve is measured with the method of constant stimuli, and the threshold for bimodal integration is estimated.
The impact of the presence of background noise is also investigated. A comparison is made between the case without any background noise and the case with an SNR of 4 dBA. Estimates of the thresholds and the slopes, as well as their confidence intervals, are obtained for each level of background noise. When background noise is present, the point of subjective equality (PSE) is higher (19.4° instead of 18.3°) and the slope is steeper (−0.077 instead of −0.062 per degree). Because of the overlap between the confidence intervals, however, it is not possible to statistically differentiate between the two levels of noise. The implications for sound reproduction in a cinema theater are discussed.

Acknowledgements

I am truly thankful to my two supervisors, Jacques G. Verly and Jean-Jacques Embrechts at the University of Liège, for letting me pursue this research on such a timely topic.

I would like to show my deepest gratitude to Brian F.G. Katz, at LIMSI-CNRS, for welcoming me in his lab and providing me with his constant support. Credit for many aspects of the results in this work goes to him, as well as to Marc Rébillat, formerly at LIMSI-CNRS, now at Université Paris Descartes, and Étienne Corteel, at sonic emotion labs, for their fruitful collaboration on the experiments in this thesis and their work on the SMART-I2 platform. I would also like to thank Matthieu Courgeon, at LIMSI-CNRS, for his work on the software tool MARC and its integration in the SMART-I2.

The open source movie "Elephants Dream" was a very important part of this doctoral work. I would like to acknowledge Wolfgang Draxinger, who provided me with the sources of the stereoscopic version of the movie, and Marc Évrard for his work on the creation of a 3D soundtrack for the movie.

This dissertation would not have been possible without the financial support of the University of Liège, in the form of a salary for a teaching assistant position and two financial aids for my research stays at LIMSI-CNRS, in France.

Finally, I also wish to thank all the colleagues, the friends, and my family who supported me along the way. In particular, I thank Pierre T. and Géraldine, with whom I shared an office, the whole Audio & Acoustics team at LIMSI-CNRS for the very good time I spent there, Grégoire, Pierre P., François, Thérèse, and all the others with whom I enjoyed spending numerous lunchtimes, and all the PhD students I met during the various events of the PhD Student Network I had the chance to organize and manage. A special thanks goes to my mother, whose countless sacrifices led me to where I am today.

Contents

Abstract
Acknowledgements
Contents
Nomenclature

1 Introduction
1.1 Motivation for the thesis
1.2 Objectives of the thesis
1.3 Hardware and software considerations
1.4 Review of the key concepts
1.5 Contributions of the thesis
1.6 Organization of the manuscript

I State of the art

2 Overview of auditory, visual, and auditory-visual perception
2.1 Overview of auditory spatial perception
2.2 Overview of visual spatial perception
2.3 Overview of auditory-visual perception

3 Spatial sound reproduction
3.1 Two-channel sound reproduction
3.2 A brief history of sound reproduction in movie theaters
3.3 3D audio technologies

4 Principles and limitations of stereoscopic-3D visualization
4.1 Principles of stereoscopic-3D imaging
4.2 Limitations of stereoscopic-3D imaging
5 Combining 3D sound with stereoscopic-3D images on a single platform
5.1 New challenges of stereoscopic-3D cinema
5.2 The SMART-I2

II Contributions of the thesis

6 The making of a 3D audiovisual content
6.1 Introduction
6.2 Selecting a movie
6.3 Precise sound positioning in the SMART-I2
6.4 Experimental setup
6.5 Creation of the audio track
6.6 Discussion
6.7 Conclusion

7 Impact of 3D sound on the reported sense of presence
7.1 Introduction
7.2 Soundtracks
7.3 Method
7.4 Results from post-session questionnaires
7.5 Results from Heart Rate Variability
7.6 Results from the participants' feedback
7.7 Conclusions

8 Subjective Evaluation of the Audiovisual Spatial Congruence in the Case of Stereoscopic-3D Video and Wave Field Synthesis
8.1 Introduction
8.2 Method
8.3 Results
8.4 Discussion
8.5 Conclusion

9 Conclusions and perspectives
9.1 Conclusions
9.2 Perspectives

III Appendices

A List of utilized software tools
A.1 Open source software tools
A.2 Proprietary software tools

B Other limitations of s-3D imaging
B.1 Crosstalk
B.2 Flicker

C An overview of the OSC protocol
C.1 Basics
C.2 Implementations

D Affine geometry, homogeneous coordinates, and rigid transformations
D.1 A brief introduction to affine geometry
D.2 Representation and manipulation of 3D coordinates

E Post-session presence questionnaire
E.1 Presentation of the questions
E.2 Temple Presence Inventory
E.3 Swedish viewer-user presence questionnaire
E.4 Bouvier's PhD thesis
E.5 Additional questions

F Analysis of variance and multiple imputation
F.1 Point and variance estimates
F.2 Analysis of variance

G The chi-squared test for differences between proportions

List of Figures
List of Tables

Nomenclature

Roman Symbols
c: Speed of sound (340 m/s)
A, B, X, ...: A number in ℝ (possibly a coordinate)
f: Frequency (temporal) (Hz)
k: Wavenumber, k = ω/c (m⁻¹)
M: Order of a spherical (or cylindrical) wave expansion
a, b, x, ...: A point in ℝ³
^A x: The point x expressed in the coordinate system (A)
^B_A R: The rotation matrix describing the rotation from the coordinate system (A) to the coordinate system (B)
u, v, ...: A vector in ℝ³, or a point or a vector in homogeneous coordinates
→ab, ab, or b − a: The vector linking a and b in ℝ³

Greek Symbols
ω: Angular frequency
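As an illustration of the analysis summarized in the abstract, the following is a minimal sketch, in Python, of how a psychometric curve obtained with the method of constant stimuli could be fitted to estimate the point of subjective equality (PSE) and the slope. It is not the thesis's own analysis code: the tested angles, response counts, logistic parametrisation, and variable names are illustrative assumptions, and the fit simply uses scipy.optimize.curve_fit.

```python
# Minimal sketch (assumed data, not from the thesis): fit a logistic
# psychometric function to constant-stimuli data and report PSE and slope.
import numpy as np
from scipy.optimize import curve_fit

def logistic(theta, pse, b):
    # Probability of judging sound and image as spatially congruent at an
    # angular error theta (degrees); b < 0 gives a curve that falls with theta.
    return 1.0 / (1.0 + np.exp(-b * (theta - pse)))

# Hypothetical constant-stimuli data: tested angular errors (degrees),
# number of "congruent" responses, and number of trials per level.
angles    = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
congruent = np.array([ 29,  28,   25,   19,   12,    6,    2])
trials    = np.full_like(congruent, 30)

p_obs = congruent / trials                       # observed proportions
(pse, b), cov = curve_fit(logistic, angles, p_obs, p0=[18.0, -0.1])
se_pse, se_b = np.sqrt(np.diag(cov))             # rough standard errors

print(f"PSE   = {pse:5.1f} deg (SE {se_pse:.2f})")
print(f"slope = {b:6.3f} per deg (SE {se_b:.3f})")
```

Confidence intervals for the PSE and slope, of the kind compared between the two background-noise conditions in the abstract, could be derived from the parameter covariance matrix or estimated by bootstrapping the trial data.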