Stereoscopic Filmmaking Whitepaper the Business and Technology of Stereoscopic Filmmaking
16 pages · PDF, 1020 KB
Recommended publications
-
Stereo Capture and Display At
Creation of a Complete Stereoscopic 3D Workflow for SoFA. Allison Hettinger and Ian Krassner. 1. Introduction. 1.1 3D Trend. Stereoscopic 3D motion pictures have recently risen to popularity once again following the success of films such as James Cameron's Avatar. More and more films are being converted to 3D, but few films are being shot in 3D: currently available technology, knowledge of that technology, and cost prevent most films from being shot natively. Shooting in 3D is an advantage because two slightly different images are produced that mimic the two images the eyes see in normal vision. Many productions take the cheaper route of shooting in 2D and converting to 3D. This results in a 3D image, but usually one nowhere near the quality of a film originally shot in 3D, because a computer has to create the second image, which can introduce errors. It is also important to note that a 3D image does not necessarily mean a stereo image. "3D" can describe images that merely have an appearance of depth, such as 3D animations. Stereo images are images that make use of retinal disparity to create the illusion of objects coming out of and going into the screen plane. Stereo images are optical illusions that exploit several cues the brain uses to perceive a scene. Examples of monocular cues are relative size and position, texture gradient, perspective and occlusion; these cues help us determine the relative depth positions of objects in an image. Binocular cues such as retinal disparity and convergence are what give the illusion of depth. -
The Stereographer
3D Television & Film, Version 2.0
Basic Principles — Placing objects in a 3D space: In 3D, two images are projected onto the display. By wearing a special pair of glasses, the two images are split so that each eye only sees one of the two images. When comparing the left- and right-eye images, every object in the scene is horizontally displaced by a small amount. The brain assumes these two displaced objects are actually one object and tries to fuse them together; the only way it can do this is to assume the object is either in front of or behind the screen plane. The direction and amount of displacement define where each object is in the 3D space (positive, zero or negative parallax).
Depth Cues — The eight depth cues: Humans have eight depth cues that are used by the brain to estimate the relative distance of the objects in every scene we look at. These are focus, perspective, occlusion, light and shading, colour intensity and contrast, relative movement, vergence and stereopsis.
The 3D Camera Rig — Rig configurations: A few different ideas have been devised for shooting 3D material over the years, including some interesting cameras using arrangements of lenses and prisms to make a more portable, easy-to-use, single-bodied camera. However, to date, the most effective way of shooting 3D material in a professional environment is the dual-camera 3D rig. There are several configurations of 3D camera rig — the parallel rig (the most compact dual-camera arrangement), the opposing rig, and the mirror rig — each with advantages and disadvantages.
Displaying & Viewing 3D: At present there are five methods of displaying and viewing 3D material, anaglyph among them. -
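The brochure's central point — that the direction and amount of horizontal displacement place an object in front of, at, or behind the screen plane — can be sketched numerically. A minimal illustration; the formula assumes a parallel dual-camera rig converged on a plane by a horizontal image shift, and the rig values are illustrative, not taken from the brochure:

```python
def screen_parallax(interaxial_mm, focal_mm, convergence_m, object_m):
    """Approximate on-sensor parallax for a parallel dual-camera rig
    converged (by horizontal image shift) on a plane at convergence_m.
    Positive -> object appears behind the screen plane,
    negative -> in front of it, zero -> on the screen plane."""
    return interaxial_mm * focal_mm * (
        1.0 / (convergence_m * 1000) - 1.0 / (object_m * 1000))

# 65 mm interaxial, 35 mm lens, converged at 3 m:
for z in (1.5, 3.0, 12.0):
    p = screen_parallax(65, 35, 3.0, z)
    label = "negative" if p < -1e-9 else "zero" if abs(p) < 1e-9 else "positive"
    print(f"object at {z:4.1f} m -> parallax {p:+.3f} mm ({label})")
```

Objects nearer than the convergence distance get negative parallax (in front of the screen), objects beyond it positive parallax, matching the fusion behaviour described above.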
This Guide Will Help Producers As They Plan to Create Programming for 3D Television
This guide will help producers as they plan to create programming for 3D television. Authors: Bert Collins, Josh Derby, Bruce Dobrin, Don Eklund, Buzz Hays, Jim Houston, George Joblove, Spencer Stephens. Editors: Bert Collins, Josh Derby.
3D Production Guide — Table of Contents: Introduction 3; Planning Workflows for 3D 4; Pre-production 4; Format planning 4; Frame rate and resolution 4; Choosing camera systems and rigs 4; Recording codec and file type 5; Recording methodology 6; Workflow and media management 6; Metadata Planning 7; Understanding metadata options 7; Why use metadata? 7; Operator Training 8; Communication of the metadata plan 8; Role of the stereographer 8; Rig and camera set up, alignment, and use 8; Stereo Planning 9; Depth budgets 9; Post production 14; Media Offload, Prep, and Ingest 14; Virus scanning 14; Media ingest 14; Media sorting 15; Editing 16; Offline editing 16; Different approaches to 3D offline 16; Online editing 18; Stereo sweetening and adjustment 18; Output 19; Audio 19; Creating masters 20; Delivery 20; 2D to 3D Conversion 22; 3D Production: Cameras and Tools 24; Cameras 24; Minimum requirements for camera quality 24; Choosing the correct lens system 25; Choosing 3D Production Tools 26; Rigs vs.; Viewer Comfort Issues 34; View Alignment 34; Colorimetric Alignment 36; Defining Convergence, Parallax, and Interaxial 36; Parallax and Comfort 37; Stereo Window Violations 37; Vergence 38; Vergence and parallax shifting across cuts 38; Occlusion and Text Placement 39; Text and graphic object placement 39; Depth placement 39; Dynamic depth placement 39; The color and contrast of text and graphics 39; 3D Post 40; 3D Shot Correction 40; Geometric alignment errors 40; Colorimetric alignment errors 40; Window violations 40; Standards and Frame Rate Conversion 42. -
Fast-Response Switchable Lens for 3D and Wearable Displays
Fast-response switchable lens for 3D and wearable displays. Yun-Han Lee, Fenglin Peng, and Shin-Tson Wu* (CREOL, The College of Optics and Photonics, University of Central Florida, Orlando, Florida 32816, USA; *[email protected]). Abstract: We report a switchable lens in which a twisted nematic (TN) liquid crystal cell is utilized to control the input polarization. Different polarization states lead to different path lengths in the proposed optical system, which in turn result in different focal lengths. This type of switchable lens has the advantages of fast response time, low operating voltage, and inherently lower chromatic aberration. Using a pixelated TN panel, we can assign depth information to selected pixels and thus add depth to a 2D image. By cascading three such device structures together, we can generate 8 different focuses for 3D displays, wearable virtual/augmented reality, and other head-mounted display devices. ©2016 Optical Society of America. OCIS codes: (230.3720) Liquid-crystal devices; (080.3620) Lens system design. -
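The cascading arithmetic in the abstract can be made concrete: each TN cell switches the polarization state and thereby selects one of two optical path lengths, so n cascaded stages yield 2^n distinct total paths (hence focal states), and three stages give the 8 focuses mentioned. A sketch with hypothetical path lengths (the values are illustrative, not from the paper):

```python
from itertools import product

def focal_states(path_pairs_mm):
    """Each cascaded stage contributes one of two optical path lengths,
    chosen by its TN cell's polarization switching; n stages therefore
    yield up to 2**n distinct total path lengths (focal states)."""
    return sorted(set(sum(combo) for combo in product(*path_pairs_mm)))

# Three stages with hypothetical short/long path lengths (mm):
stages = [(1.0, 2.0), (4.0, 8.0), (16.0, 32.0)]
states = focal_states(stages)
print(len(states), states)  # 8 distinct path lengths
```

Choosing the per-stage path lengths so that no two combinations sum to the same total keeps all 2^3 = 8 states distinct.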
Audiovisual Spatial Congruence, and Applications to 3D Sound and Stereoscopic Video
Audiovisual spatial congruence, and applications to 3D sound and stereoscopic video. Cédric R. André, Department of Electrical Engineering and Computer Science, Faculty of Applied Sciences, University of Liège, Belgium. Thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy (PhD) in Engineering Sciences, December 2013. ©University of Liège, Belgium. Abstract: While 3D cinema is becoming increasingly established, little effort has focused on the general problem of producing a 3D sound scene spatially coherent with the visual content of a stereoscopic-3D (s-3D) movie. The perceptual relevance of such spatial audiovisual coherence is of significant interest. In this thesis, we investigate the possibility of adding spatially accurate sound rendering to regular s-3D cinema. Our goal is to provide a perceptually matched sound source at the position of every object producing sound in the visual scene. We examine and contribute to the understanding of the usefulness and the feasibility of this combination. By usefulness, we mean that the technology should positively contribute to the experience, and in particular to the storytelling. In order to carry out experiments proving the usefulness, it is necessary to have an appropriate s-3D movie and its corresponding 3D audio soundtrack. We first present the procedure followed to obtain this joint 3D video and audio content from an existing animated s-3D movie, the problems encountered, and some of the solutions employed. Second, as s-3D cinema aims at providing the spectator with a strong impression of being part of the movie (sense of presence), we investigate the impact of the spatial rendering quality of the soundtrack on the reported sense of presence. -
Digital 3DTV
Digital 3DTV. Alexey Polyakov. The advent of the digital 3DTV era is a fait accompli. The question is: how is it going to develop in the future? Currently, standard-definition (SD) TV is being replaced by high-definition (HD) TV. As you know, quantitative changes tend to transform into qualitative ones. Many observers believe that the next quantum leap will be the emergence of 3D TV, and they predict that such TV will appear within a decade. In this article I will explain how it is possible to create stereoscopic video systems using commercial devices. Brief historical retrospective. The following important stages can be singled out in the history of TV and digital video development: 1 – black-and-white TV: image brightness is transmitted. 2 – color TV: image brightness and color components are transmitted; from the data-volume perspective, adding color is a quantitative change, while from the viewer's perspective it is a qualitative change. 3 – the emergence of digital video (Video CD, DVD): a qualitative change from the data-format perspective. 4 – HD digital video and TV (Blu-ray, HDTV): from the data-volume perspective it is a quantitative change, yet exactly the same components are transmitted: brightness and color. Specialists and viewers have been anticipating the change that sci-fi writers predicted long ago: the emergence of 3D TV. For a long time, data volume was a bottleneck preventing the demonstration of stereoscopic video, as the existing media could not carry it. Digital TV made it possible to transmit enough data and became the basis for a number of devices that help realize 3D visualization. -
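The article's point that data volume gated each transition can be illustrated with back-of-the-envelope uncompressed bitrates. The frame sizes and rates below are common broadcast values chosen for illustration, not figures from the article:

```python
def raw_mbps(width, height, fps, bits_per_pixel=24, views=1):
    """Uncompressed video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel * views / 1e6

sd        = raw_mbps(720, 576, 25)            # SD, single view
hd        = raw_mbps(1920, 1080, 25)          # HD, single view
stereo_hd = raw_mbps(1920, 1080, 25, views=2) # full-resolution stereo pair
print(f"SD: {sd:.0f} Mb/s, HD: {hd:.0f} Mb/s, stereo HD: {stereo_hd:.0f} Mb/s")
```

Stereo doubles the raw payload of its 2D counterpart, which is why the step to 3D had to wait for transmission capacity, as the article argues.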
The Role of Focus Cues in Stereopsis, and the Development of a Novel Volumetric Display
The role of focus cues in stereopsis, and the development of a novel volumetric display, by David Morris Hoffman. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Vision Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Martin S. Banks, Chair; Professor Austin Roorda; Professor John Canny. Spring 2010. Abstract: Typical stereoscopic displays produce a vivid impression of depth by presenting each eye with its own image on a flat display screen. This technique produces many depth signals (including disparity) that are consistent with a 3-dimensional (3D) scene; however, by using a flat display panel, focus information will be incorrect. The accommodation distance to the simulated objects will be at the screen distance, and blur will be inconsistent with the simulated depth. In this thesis I describe several studies investigating the importance of focus cues for binocular vision. These studies reveal that there are a number of benefits to presenting correct focus information in a stereoscopic display, such as making it easier to fuse a binocular image, reducing visual fatigue, mitigating disparity scaling errors, and helping to resolve the binocular correspondence problem. For each of these problems, I discuss the theory for how focus cues could be an important factor, and then present psychophysical data showing that focus cues do indeed make a difference. -
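The mismatch the abstract describes — accommodation pinned at the screen while vergence follows the simulated depth — is conventionally quantified in diopters (reciprocal metres). A minimal sketch; the viewing distances are illustrative, not taken from the dissertation:

```python
def accommodation_conflict_diopters(screen_m, simulated_m):
    """Vergence-accommodation mismatch for a flat stereoscopic display:
    the eyes focus at the screen distance while converging at the
    simulated depth; the difference is expressed in diopters (1/m)."""
    return abs(1.0 / screen_m - 1.0 / simulated_m)

# Viewer 0.6 m from a desktop display:
for sim in (0.4, 0.6, 2.0):
    d = accommodation_conflict_diopters(0.6, sim)
    print(f"simulated depth {sim:3.1f} m -> conflict {d:.2f} D")
```

Content at the screen plane produces zero conflict; depth placed well in front of or behind the screen drives the conflict up, which is one motivation for the volumetric display developed in the thesis.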
Development of a Real 3D Display System
2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C). Development of a Real 3D Display System. Chong Zeng, Weihua Li, Hualong Guo (College of Mathematics and Information Engineering, Longyan University, Longyan, China); Tung-lung Wu (School of Mechanical and Automotive Engineering, Zhaoqing University, Zhaoqing, China); Dennis Bumsoo Kim (College of Mathematics and Information Engineering, Longyan University, Longyan, China). Abstract: This paper introduces a three-dimensional light-field display system composed of a high-speed projector, a directional scattering mirror, a circular stainless-steel bearing plate, a rotating shaft and a high-speed micro motor. The system reduces information redundancy and computational complexity by reconstructing the light intensity distribution of the observed object, thus generating a real three-dimensional suspended image. The experimental results show that the suspended three-dimensional image can be generated by properly adjusting the projection rate of the image and the rotation speed of the rotating mirror (i.e. … [Such systems] achieve high fidelity of the reproduced objects by recording total object information on the phase, the amplitude, and the intensity at each point in the light wave [4]. Recently, Cambridge Enterprise invented a novel 360° 3D light-field capturing and display system which can have applications in gaming experiences, enhanced reality, autonomous vehicle infotainment and so on [5]. Motohiro et al. presented an interactive 360-degree tabletop display system for collaborative work around a round table so that users … -
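The paper's finding that the floating image depends on matching the projection rate to the mirror's rotation speed reduces to a simple ratio: the projector must deliver one full set of directional views per revolution. A sketch with illustrative numbers (the frame rate and rotation speed below are assumptions, not values from the paper):

```python
def views_per_revolution(projector_fps, revolutions_per_sec):
    """Number of distinct angular views the rotating mirror can present
    in one revolution, given the projector's frame rate."""
    return projector_fps / revolutions_per_sec

# A hypothetical 2880 fps high-speed projector and a 20 rev/s mirror:
n = views_per_revolution(2880, 20)
print(f"{n:.0f} views per revolution -> {360 / n:.1f} degrees between views")
```

Raising the rotation speed without raising the frame rate coarsens the angular spacing between views, which is why the two rates must be tuned together.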
Dynamic Stereoscopic Previz Sergi Pujades, Laurent Boiron, Rémi Ronfard, Frédéric Devernay
Dynamic Stereoscopic Previz Sergi Pujades, Laurent Boiron, Rémi Ronfard, Frédéric Devernay To cite this version: Sergi Pujades, Laurent Boiron, Rémi Ronfard, Frédéric Devernay. Dynamic Stereoscopic Previz. International Conference on 3D Imaging, Professeur Jacques G. Verly, Dec 2014, Liege, Belgium. 10.1109/IC3D.2014.7032600. hal-01083848 HAL Id: hal-01083848 https://hal.inria.fr/hal-01083848 Submitted on 10 Jun 2015 HAL is a multi-disciplinary open access L’archive ouverte pluridisciplinaire HAL, est archive for the deposit and dissemination of sci- destinée au dépôt et à la diffusion de documents entific research documents, whether they are pub- scientifiques de niveau recherche, publiés ou non, lished or not. The documents may come from émanant des établissements d’enseignement et de teaching and research institutions in France or recherche français ou étrangers, des laboratoires abroad, or from public or private research centers. publics ou privés. Distributed under a Creative Commons Attribution| 4.0 International License DYNAMIC STEREOSCOPIC PREVIZ Sergi Pujades 1, Laurent Boiron 2,Remi´ Ronfard 2, Fred´ eric´ Devernay 1 1 Laboratoire d’Informatique de Grenoble, Univ. Grenoble Alpes & Inria, France. 2 Laboratoire Jean Kuntzmann, Univ. Grenoble Alpes & Inria, France. ABSTRACT come these difficulties, by containing the acquisition param- eters in a ”3D safe zone”. For example the 1/30th rule states The pre-production stage in a film workflow is important to that ”the interaxial distance should be 1/30th of the distance save time during production. To be useful in stereoscopic from the camera to the first foreground object” [8]. This rule 3-D movie-making, storyboards and previz tools need to be is very handy for safe filming, but very limiting in terms of adapted in at least two ways. -
3D TV: a Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes
3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes. Citation: Matusik, Wojciech, and Hanspeter Pfister. 2004. 3D TV: A scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. Proceedings International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2004 Papers: August 08-12, 2004, Los Angeles, California, 814-824. New York, NY: ACM. Also published in ACM Transactions on Graphics 23(3): 814-824. Published version: doi:10.1145/1186562.1015805; doi:10.1145/1015706.1015805. Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:4726196. Wojciech Matusik, Hanspeter Pfister (Mitsubishi Electric Research Laboratories, Cambridge, MA). Figure 1: 3D TV system. Left (top to bottom): Array of 16 cameras and projectors. Middle: Rear-projection 3D display with double-lenticular screen. Right: Front-projection 3D display with single-lenticular screen. Abstract: Three-dimensional TV is expected to be the next revolution in the history of television. We implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes. 1 Introduction: Humans gain three-dimensional information from a variety of cues. Two of the most important ones are binocular parallax, scientifically studied by Wheatstone in 1838, and motion parallax … -
4. 1. Research Output Iv) Screenings
4. 1. Research Output iv) Screenings Table 4.1.9. is summary for film production done from AY July 2007-till present. I have introduced animated content into my works since 2007 and since 2010 HD DCP/Digital Cinema Projection and Two channel Stereoscopic HD 3D animation for passive/active 3D stereo DCP/Digital Cinema. In addition films were adapted for various immersive stereoscopic visualization platforms such as the following: 320 The Immesrive Room in the NTU/Institute for Media Innovation Immersive Room (5 channels)1 Ars Electronica, presetigious stereo venue Deep Space for the ocasion of the most reputable festival of digital arts in the world Ars Electronica Festival (2011, 2013).2 Glass-free stereoscopic version of the film for Alioscopy 3D HD 55" LV TV Platform British Academy Awards (BAFTA) qualifying festival, renowned Brooklyn Film Festival, New York (2011) The resulting 3D stereoscopic animated films won 15 awards (14 International) and were selected in 47 festivals/theatrical screenings. Film was submitted in more than 70 international juried events such as: The 14th International Beverly Hills Film Festival(2014), SPARK [FWD] 3D, VFX & Advanced Imaging Conference Vancouver by prestigious Emily Carr/S3D Centre and the Canadian 3D and Advance Imaging Society (2013), 67th Edinburgh International Film Festival, UK (2013)3 The 15th Annual Washington DC Independent Film Festival(2013),Canada International Film Festival (2013), Acclaimed Festival Anima Mundi Brazil (2011 and 2013)4 ,6th Annual, Los Angeles, the world's largest all digital 3D stereoscopic Film, Music & Interactive Festival (2013), Dimension 3 Festival, Seine-Saint–Denis, France (2010) The 27th Southeast Asian Games (SEAG) (2013). -
Virtual Stereo Display Techniques for Three-Dimensional Geographic Data
Virtual Stereo Display Techniques for Three-Dimensional Geographic Data
Abstract: Techniques have been developed which allow display of volumetric geographic data in three-dimensional form using concepts of parallactic displacement. Image stereo pairs and synthetic stereo of digitized photographs, SPOT, Landsat TM, shaded relief, GIS layers, and DLG vector data are implemented. The synthetic stereo pair is generated from a single image through computation of a parallax shift proportional to the volumetric values at each location in the image. A volumetric dataset, such as elevation or population density, provides the single requirement for generation of the stereo image from an original digital dataset. To view the resulting stereo images, softcopy methods, including traditional anaglyph techniques requiring complementary colored lenses and an autostereoscopic procedure requiring no special …
… (Beaton et al., 1987; Moellering, 1989; Robinson, 1990; Hodges, 1992). While these technological innovations are likely to contribute to broader use of stereo images, new concepts of vision have established that this left/right separation is not necessary to establish a virtual three-dimensional image (Julesz, 1971; Marr, 1982; Jones et al., 1984). Stereovision is not a reflex action but is in fact an intellectual activity in which one depth map, for example, the left image of a stereo pair, is received by the eye-brain combination and stored, and later merged with a second depth map, the right image of the stereo pair (Marr, 1982). This concept of vision opens new approaches to the display of three-dimensional geographic information and, with the use of current computer graphics technology, can be implemented at low cost.
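The synthetic-stereo technique the abstract describes — a per-pixel horizontal parallax shift proportional to a co-registered volumetric value such as elevation — can be sketched as follows. The normalization, maximum shift, and hole-filling strategy below are assumptions for illustration; the paper's exact method may differ:

```python
import numpy as np

def synthetic_stereo(image, volume, max_shift_px=8):
    """Generate a synthetic right-eye view by shifting each pixel of a
    single image horizontally, in proportion to a co-registered
    volumetric value (e.g. elevation); the original image serves as the
    left-eye view. Gaps left by the shift are filled from the nearest
    previously written column (a crude hole-filling choice)."""
    h, w = image.shape
    norm = (volume - volume.min()) / max(float(np.ptp(volume)), 1e-9)
    shifts = np.round(norm * max_shift_px).astype(int)
    right = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x + shifts[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
                filled[y, nx] = True
        for x in range(1, w):          # fill holes from the left neighbour
            if not filled[y, x]:
                right[y, x] = right[y, x - 1]
    return image, right

# Tiny example: a 4x4 image with an elevation ramp as the volumetric layer.
img = np.arange(16, dtype=float).reshape(4, 4)
dem = np.tile(np.arange(4, dtype=float), (4, 1))
left, right = synthetic_stereo(img, dem, max_shift_px=2)
print(right)
```

Viewed through anaglyph glasses (left view in red, right in cyan), the larger shifts produced by higher volumetric values would read as greater parallactic displacement, which is the effect the abstract describes.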