The Stereoscopic Cinema: from Film to Digital Projection
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
Breaking Down the “Cosine Fourth Power Law”
By Ronian Siew, inopticalsolutions.com

Why are the corners of the field of view in the image captured by a camera lens usually darker than the center? For one thing, camera lenses by design often introduce "vignetting" into the image, which is the deliberate clipping of rays at the corners of the field of view in order to cut away excessive lens aberrations. But it is also known that corner areas in an image can get dark even without vignetting, due in part to the so-called "cosine fourth power law" [1]. According to this "law," when a lens projects the image of a uniform source onto a screen, in the absence of vignetting, the illumination flux density (i.e., the optical power per unit area) across the screen from the center to the edge varies according to the fourth power of the cosine of the angle between the optic axis and the oblique ray striking the screen.

Actually, optical designers know this "law" does not apply generally to all lens conditions [2-10]. Fundamental principles of optical radiative flux transfer in lens systems allow one to tune the illumination distribution across the image by varying lens design characteristics. In this article, we take a tour into the fascinating physics governing the illumination of images in lens systems.

Relative Illumination in Lens Systems

In lens design, one characterizes the illumination distribution across the screen where the image resides in terms of a quantity known as the lens' relative illumination — the ratio of the irradiance (i.e., the power per unit area) at any off-axis position of the image to the irradiance at the center of the image.
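The falloff the article describes can be sketched numerically. The function below and its name are illustrative, not from the article, and it models only the idealized cosine-fourth case (no vignetting, a lens to which the classic law applies):

```python
import math

def cos4_relative_illumination(theta_deg):
    """Relative illumination E(theta)/E(0) predicted by the classic
    cosine fourth power law for a field angle theta in degrees,
    assuming no vignetting and a lens obeying the law."""
    return math.cos(math.radians(theta_deg)) ** 4

# Falloff from the image center toward the corner of the field:
for angle in (0, 15, 30, 45):
    print(f"{angle:2d} deg -> relative illumination {cos4_relative_illumination(angle):.3f}")
```

At a 45-degree half-field angle the law predicts the corner receives only a quarter of the axial irradiance, which is why wide-angle images darken so visibly toward the edges — and why designs that "beat" the law, as the article goes on to discuss, are of practical interest.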
Microdisplays - Market, Industry and Technology Trends 2020 (Market and Technology Report, 2020)
From Technologies to Markets. Sample, © 2020.

TABLE OF CONTENTS
• Glossary and definition
• Table of contents
• Report objectives
• Report scope
• Report methodology
• About the authors
• Companies cited in this report
• Who should be interested by this report
• Yole Group related reports
• Executive Summary (009)
• Context (048)
• Market forecasts (063)
  o End-systems (088)
  o AR headsets (104)
  o Automotive HUDs (110)
  o Others (127)
• Market trends (077)
  o Focus on AR headsets (088)
  o A word about VR (104)
  o Focus on Auto HUDs (110)
  o Focus on 3D Displays (127)
  o Summary of other small SLM applications (139)
• Industry trends (154)
  o Established technologies players (156)
  o Emerging technologies players (158)
  o Ecosystem analysis (160)
  o Noticeable collaborations and partnerships (170)
  o Company profiles (174)
• Technology trends (187)
  o Competition benchmarking (189)
  o Technology description (191)
  o Technology roadmaps (209)
  o Examples of products and future launches (225)
• Outlooks (236)
• About Yole Group of Companies (238)

Microdisplays - Market, Industry and Technology Trends 2020 | Sample | www.yole.fr | © 2020

ACRONYMS: AMOLED: Active Matrix OLED; AR: Augmented Reality; BLU: Back Lighting Unit; CF LCOS: Color Filter LCOS; CG: Computer Generated; HMD: Head Mounted Device/Display; HOE: Holographic Optical Element; HRI: High Refractive Index; HVS: Human Vision System; IMU: Inertial Measurement Unit; PPI: Pixel Per Inch; PWM: Pulse Width Modulation; QD: Quantum Dot; RGB: Red-Green-Blue; RMLCM: Reactive Monomer …
Moving Pictures: The History of Early Cinema by Brian Manley
Discovery Guides

Introduction

The history of film cannot be credited to one individual, as an oversimplification of any history often tries to do. Each inventor added to the progress of other inventors, culminating in progress for the entire art and industry. Often masked in mystery and fable, the beginnings of film and the silent era of motion pictures are usually marked by a stigma of crudeness and naiveté, both on the audience's and filmmakers' parts. However, with its landmark depiction of a train hurtling toward and past the camera, the Lumière Brothers' 1895 picture "La Sortie de l'Usine Lumière à Lyon" ("Workers Leaving the Lumière Factory") was only one of a series of simultaneous artistic and technological breakthroughs that began to culminate at the end of the nineteenth century. These triumphs that began with the creation of a machine that captured moving images led to one of the most celebrated and distinctive art forms at the start of the 20th century.

[Figure: Magic Lantern, 1818, Musée des Arts et Métiers; http://en.wikipedia.org/wiki/File:Magic-lantern.jpg]

Audiences had already reveled in motion pictures through clever uses of slides and mechanisms creating "moving photographs" with such 16th-century inventions as magic lanterns. These basic concepts, combined with trial and error and the desire of audiences across the world to see entertainment projected onto a large screen in front of them, birthed the movies. From the "actualities" of penny arcades, the idea of telling a story in order to draw larger crowds through the use of differing scenes began to formulate in the minds of early pioneers such as Georges Méliès and Edwin S. …
American Scientist the Magazine of Sigma Xi, the Scientific Research Society
A reprint from American Scientist the magazine of Sigma Xi, The Scientific Research Society This reprint is provided for personal and noncommercial use. For any other use, please send a request to Permissions, American Scientist, P.O. Box 13975, Research Triangle Park, NC, 27709, U.S.A., or by electronic mail to [email protected]. ©Sigma Xi, The Scientific Research Society and other rightsholders Engineering Next Slide, Please Henry Petroski n the course of preparing lectures years—against strong opposition from Ibased on the material in my books As the Kodak some in the artistic community—that and columns, I developed during the simple projection devices were used by closing decades of the 20th century a the masters to trace in near exactness good-sized library of 35-millimeter Carousel begins its intricate images, including portraits, that slides. These show structures large and the free hand could not do with fidelity. small, ranging from bridges and build- slide into history, ings to pencils and paperclips. As re- The Magic Lantern cently as about five years ago, when I it joins a series of The most immediate antecedent of the indicated to a host that I would need modern slide projector was the magic the use of a projector during a talk, just previous devices used lantern, a device that might be thought about everyone understood that to mean of as a camera obscura in reverse. Instead a Kodak 35-mm slide projector (or its to add images to talks of squeezing a life-size image through a equivalent), and just about every venue pinhole to produce an inverted minia- had one readily available. -
Motion-In-Depth Perception and Prey Capture in the Praying Mantis Sphodromantis Lineola
© 2019. Published by The Company of Biologists Ltd | Journal of Experimental Biology (2019) 222, jeb198614. doi:10.1242/jeb.198614

RESEARCH ARTICLE

Vivek Nityananda, Coline Joubier, Jerry Tan, Ghaith Tarawneh and Jenny C. A. Read

ABSTRACT

Perceiving motion-in-depth is essential to detecting approaching or receding objects, predators and prey. This can be achieved using several cues, including binocular stereoscopic cues such as changing disparity and interocular velocity differences, and monocular cues such as looming. Although these have been studied in detail in humans, only looming responses have been well characterized in insects and we know nothing about the role of stereoscopic cues and how they might interact with looming cues. We used our 3D insect cinema in a series of experiments to investigate the role of the stereoscopic cues mentioned above, as well as…

…prey as they come near. Just as with depth perception, several cues could contribute to the perception of motion-in-depth. Two of the motion-in-depth cues that have received the most attention in humans are binocular: changing disparity and interocular velocity differences (IOVDs) (Cormack et al., 2017). Stereoscopic disparity refers to the difference in the position of an object as seen by the two eyes. This disparity reflects the distance to an object. Thus as an object approaches, the disparity between the two views changes. This changing disparity cue suffices to create a perception of motion-in-depth for human observers, even in the absence of other cues (Cumming and Parker, 1994).
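A toy calculation illustrates why an approaching object produces a changing-disparity signal. The symmetric-fixation geometry, the interocular distance, and the function name below are illustrative assumptions, not values or methods from the paper:

```python
import math

def vergence_angle_deg(ipd_m, distance_m):
    """Angle subtended at the two eyes by a point straight ahead at
    distance_m; the disparity of a point relative to fixation is the
    difference between two such angles (simplified geometry)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

ipd = 0.063  # illustrative human interocular distance, in metres
far = vergence_angle_deg(ipd, 1.0)   # object at 1.0 m
near = vergence_angle_deg(ipd, 0.5)  # same object at 0.5 m
print(f"vergence angle grows from {far:.2f} to {near:.2f} deg as the object approaches")
```

The growth of this angle over time is the changing-disparity cue; the smaller interocular separation of a mantis would yield proportionally smaller angles for the same approach.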
Lecture 4A: Cameras
CS6670: Computer Vision, Noah Snavely. Lecture 4a: Cameras (source: S. Lazebnik)

Reading: Szeliski, chapters 2.2.3 and 2.3

Image formation
• Let's design a camera
 – Idea 1: put a piece of film in front of an object
 – Do we get a reasonable image?

Pinhole camera
• Add a barrier to block off most of the rays
 – This reduces blurring
 – The opening is known as the aperture
 – How does this transform the image?

Camera obscura
• Basic principle known to Mozi (470-390 BC) and Aristotle (384-322 BC)
• Drawing aid for artists: described by Leonardo da Vinci (1452-1519)
• Gemma Frisius, 1558 (source: A. Efros)

Home-made pinhole camera
• Why so blurry? (slide by A. Efros; http://www.debevec.org/Pinhole/)

Shrinking the aperture
• Why not make the aperture as small as possible?
 – Less light gets through
 – Diffraction effects...

Adding a lens
• A lens focuses light onto the film
 – There is a specific distance at which objects are "in focus"; other points project to a "circle of confusion" in the image
 – Changing the shape of the lens changes this distance

Lenses
• A lens focuses parallel rays onto a single focal point F
 – The focal point lies at a distance f beyond the plane of the lens (the focal length)
 – f is a function of the shape and index of refraction of the lens
• The aperture restricts the range of rays (the aperture may be on either side of the lens)
• Lenses are typically spherical (easier to produce)

Thin lenses
• Thin lens equation: 1/d_o + 1/d_i = 1/f
 – Any object point satisfying this equation is in focus
 – What is the shape …
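The thin-lens relation from the final slide can be exercised numerically. A minimal sketch; the function name and the example focal length are mine, not from the lecture:

```python
def image_distance(f_mm, object_distance_mm):
    """Solve the thin-lens equation 1/d_o + 1/d_i = 1/f for the
    image distance d_i, with all quantities in millimetres."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

# A 50 mm lens focused on an object 2 m away: the film/sensor must
# sit slightly farther than one focal length behind the lens.
print(f"{image_distance(50.0, 2000.0):.2f} mm")  # -> 51.28 mm
```

Points at other depths satisfy the equation for a different d_i, so on the fixed sensor plane they spread into the "circle of confusion" the slides mention.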
Digital Light Processing™: A New MEMS-Based Display Technology
Larry J. Hornbeck, Texas Instruments

1.0 Introduction

Sights and sounds in our world are analog, yet when we electronically acquire, store, and communicate these analog phenomena, there are significant advantages in using digital technology. This was first evident with audio as it was transformed from a technology of analog tape and vinyl records to digital audio CDs. Video is now making the same conversion to digital technology for acquisition, storage, and communication. Witness the development of digital CCD cameras for image acquisition, digital transmission of TV signals (DBS), and video compression techniques for more efficient transmission, higher-density storage on a video CD, or for video conference calls.

The natural interface to digital video would be a digital display. But until recently, this possibility seemed as remote as developing a digital loudspeaker to interface with digital audio. Now there is a new MEMS-based projection display technology called Digital Light Processing (DLP) that accepts digital video and transmits to the eye a burst of digital light pulses that the eye interprets as a color analog image (Figure 1).

[Figure 1: DLP Processing System]

DLP is based on a microelectromechanical systems (MEMS) device known as the Digital Micromirror Device (DMD). Invented in 1987 at Texas Instruments, the DMD microchip [1,2] is a fast, reflective digital light switch. It can be combined with image processing, memory, a light source, and optics to form a DLP system capable of projecting large, bright, seamless, high-contrast color images with better color fidelity and consistency than current displays [3-24].
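The "burst of digital light pulses" comes from pulse-width modulating each on/off mirror so the eye integrates gray levels over the frame. The toy model below, including the 8-bit depth and the function names, is an illustrative assumption, not TI's actual modulation scheme:

```python
def bit_plane_schedule(gray_level, bit_depth=8):
    """On/off state of one mirror for each binary-weighted time
    slice, most significant bit first; the eye sums the slices."""
    return [(gray_level >> b) & 1 for b in reversed(range(bit_depth))]

def mirror_on_time(gray_level, bit_depth=8):
    """Fraction of the frame time the mirror spends 'on', which is
    the intensity perceived in this simplified model."""
    return gray_level / (2 ** bit_depth - 1)

print(bit_plane_schedule(200))       # -> [1, 1, 0, 0, 1, 0, 0, 0]
print(f"{mirror_on_time(200):.3f}")  # -> 0.784
```

The key point matches the article: the switch itself is purely digital, and analog-looking intensity emerges only from the timing of the pulses.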
Device-Independent Representation of Photometric Properties of a Camera
CVPR 2010 Submission #****. Confidential review copy; do not distribute.

Device-Independent Representation of Photometric Properties of a Camera

Anonymous CVPR submission. Paper ID ****

Abstract

In this paper we propose a compact device-independent representation of the photometric properties of a camera, especially the vignetting effect. This representation can be computed by recovering the photometric parameters at sparsely sampled device configurations (defined by aperture, focal length and so on). More importantly, this can then be used for accurate prediction of the photometric parameters at other new device configurations where they have not been measured.

1. Introduction

…era change smoothly across different device configurations. The primary advantages of this representation are the following: (a) generation of an accurate representation using the measured photometric properties at a few device configurations obtained by sparsely sampling the device settings space; (b) accurate prediction of the photometric parameters from this representation at unsampled device configurations. To the best of our knowledge, we present the first device-independent representation of photometric properties of cameras which has the capability of predicting device parameters at configurations where it was not measured. Our device-independent representation opens up the opportunity of factory calibration of the camera and the data then being stored in the camera itself.
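The prediction idea can be sketched as interpolation between sparsely sampled configurations. Plain piecewise-linear interpolation and the numbers below are stand-ins for whatever representation the paper actually proposes, which this excerpt does not specify; only the smooth-variation premise is taken from the text:

```python
def interpolate_parameter(samples, setting):
    """Predict a photometric parameter at an unsampled device setting
    by piecewise-linear interpolation over measured (setting, value)
    pairs, relying on the assumed smooth variation across settings."""
    pts = sorted(samples)
    if setting <= pts[0][0]:
        return pts[0][1]
    if setting >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= setting <= x1:
            t = (setting - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical vignetting-strength measurements at three apertures:
measured = [(2.0, 0.45), (4.0, 0.30), (8.0, 0.18)]
print(interpolate_parameter(measured, 6.0))  # predicted at unmeasured f/6 (~0.24)
```

This captures the workflow the abstract describes: calibrate at a few settings at the factory, store the compact representation, and predict everywhere else.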
Epson Projector Fact Sheet
EPSON PROJECTOR FACT SHEET

3LCD – A CLEAR DIFFERENCE

As the number one projector manufacturer globally [1], Epson leads the market in the development of projector technology. Epson projectors use a 3LCD projection engine to deliver bright, clear images that are rich in detail and colour.

Many of Epson's competitors use 1-chip DLP™ projector systems, which create thousands of pulses of coloured light per second. They do this by shining lamp light through red, green, blue and white parts of a rotating colour wheel. These light pulses are then reflected by a DMD™ device, which is on a hinge and has a tiny mirror for each pixel of the image. The series of rapid colour bursts is then projected onto the screen. The viewer's brain can't pick out the individual flickers; it mixes the basic colours that appear in succession in each pixel to come up with the final colour the viewer sees.

Epson's 3LCD system works differently, using a combination of dichroic mirrors to separate the white light from the projector lamp into red, green and blue light. Each of the three light colours is then passed through its own LCD panel and recombined using a prism before being projected onto the screen.

With 1-chip DLP technology, colour break-up or the 'rainbow effect' can sometimes be seen. This occurs when the eye perceives the individual colours, and is a result of the colours being projected sequentially by the colour wheel. Epson's 3LCD technology avoids this by including all three basic colours in each pixel of the projection, delivering superior Colour Light Output that's easier on the eyes.
Review of Display Technologies Focusing on Power Consumption
Sustainability 2015, 7, 10854-10875; doi:10.3390/su70810854. Open access, ISSN 2071-1050, www.mdpi.com/journal/sustainability

Review: Review of Display Technologies Focusing on Power Consumption

María Rodríguez Fernández 1,†, Eduardo Zalama Casanova 2,* and Ignacio González Alonso 3,†

1 Department of Systems Engineering and Automatic Control, University of Valladolid, Paseo del Cauce S/N, 47011 Valladolid, Spain; E-Mail: [email protected]
2 Instituto de las Tecnologías Avanzadas de la Producción, University of Valladolid, Paseo del Cauce S/N, 47011 Valladolid, Spain
3 Department of Computer Science, University of Oviedo, C/González Gutiérrez Quirós, 33600 Mieres, Spain; E-Mail: [email protected]
† These authors contributed equally to this work.
* Author to whom correspondence should be addressed; E-Mail: [email protected]; Tel.: +34-659-782-534.

Academic Editor: Marc A. Rosen
Received: 16 June 2015 / Accepted: 4 August 2015 / Published: 11 August 2015

Abstract: This paper provides an overview of the main manufacturing technologies of displays, focusing on those with low and ultra-low levels of power consumption, which make them suitable for current societal needs. Considering the typified value obtained from the manufacturer's specifications, four technologies (Liquid Crystal Displays, electronic paper, Organic Light-Emitting Displays and Electroluminescent Displays) were selected in a first iteration. For each of them, several features, including size and brightness, were assessed in order to ascertain possible proportional relationships with the rate of consumption. To normalize the comparison between different display types, relative units such as the surface power density and the display frontal intensity efficiency were proposed.
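The "surface power density" unit the abstract mentions can be illustrated with a small helper, assuming it is defined as consumption divided by screen area; the figures below are hypothetical, not measurements from the review:

```python
def surface_power_density(power_w, width_m, height_m):
    """Power consumption normalised by display area, in W/m^2, so
    displays of different sizes can be compared on equal footing."""
    return power_w / (width_m * height_m)

# Hypothetical 50-inch LCD TV versus a small e-paper reader:
lcd = surface_power_density(90.0, 1.10, 0.62)
epaper = surface_power_density(0.4, 0.12, 0.16)
print(f"LCD ~{lcd:.0f} W/m^2, e-paper ~{epaper:.0f} W/m^2")
```

Normalising by area is what lets a review like this rank a wall-sized panel against a pocket reader without the comparison being dominated by sheer size.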
Stereoscopic Viewing Enhances Visually Induced Motion Sickness
Behrang Keshavarz and Heiko Hecht, Department of General Experimental Psychology, Johannes Gutenberg-University, 55099 Mainz, Germany

Abstract

Optic flow in visual displays or virtual environments often induces motion sickness (MS). We conducted two studies to analyze the effects of stereopsis, background sound, and realism (video vs. simulation) on the severity of MS and related feelings of immersion and vection. In Experiment 1, 79 participants watched either a 15-min-long video clip taken during a real roller coaster ride, or a precise simulation of the same ride. Additionally, half of the participants watched the movie in 2D, and the other half in 3D. MS was measured using the Simulator Sickness Questionnaire (SSQ) and the Fast Motion Sickness Scale (FMS). Results showed a significant interaction for both variables, indicating highest sickness scores for the real roller coaster video presented in 3D, while all other videos provoked less MS and did not differ among one another. In Experiment 2, 69 subjects were exposed to a video captured during a bicycle ride. Viewing mode (3D vs. 2D) and sound (on vs. off) were varied between subjects. Response measures were the same as in Experiment 1. Results showed a significant effect of stereopsis; MS was more severe for 3D presentation. Sound did not have a significant effect. Taken together, stereoscopic viewing played a crucial role in MS in both experiments. Our findings imply that stereoscopic videos can amplify visual discomfort and should be handled with care.

1 Introduction

Over the past decades, the technological progress in entertainment systems has grown rapidly.
Single-Image Vignetting Correction Using Radial Gradient Symmetry
Yuanjie Zheng¹, Jingyi Yu¹, Sing Bing Kang², Stephen Lin³, Chandra Kambhamettu¹
¹ University of Delaware, Newark, DE, USA; {zheng,yu,chandra}@eecis.udel.edu
² Microsoft Research, Redmond, WA, USA; [email protected]
³ Microsoft Research Asia, Beijing, P.R. China; [email protected]

Abstract

In this paper, we present a novel single-image vignetting method based on the symmetric distribution of the radial gradient (RG). The radial gradient is the image gradient along the radial direction with respect to the image center. We show that the RG distribution for natural images without vignetting is generally symmetric. However, this distribution is skewed by vignetting. We develop two variants of this technique, both of which remove vignetting by minimizing asymmetry of the RG distribution. Compared with prior approaches to single-image vignetting correction, our method does not require segmentation and the results are generally better.

…lenge is to differentiate the global intensity variation of vignetting from those caused by local texture and lighting. Zheng et al. [25] treat intensity variation caused by texture as "noise"; as such, they require some form of robust outlier rejection in fitting the vignetting function. They also require segmentation and must explicitly account for local shading. All these are susceptible to errors.

We are also interested in vignetting correction using a single image. Our proposed approach is fundamentally different from Zheng et al.'s—we rely on the property of symmetry of the radial gradient distribution. (By radial gradient, we mean the gradient along the radial direction with respect to the image center.) We show that the radial gradient dis…
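The asymmetry being minimized can be made concrete with a rough sketch: compute radial gradients by finite differences and measure the skewness of their distribution. This is a simplified illustration (horizontal differences only, plain sample skewness), not the authors' implementation:

```python
import math

def radial_gradient_skewness(image, cx, cy):
    """Skewness of the radial-gradient distribution of a grayscale
    image given as a list of rows. Vignetting darkens pixels toward
    the edges, biasing radial gradients negative and skewing the
    distribution; a vignetting-free natural image should come out
    roughly symmetric (skewness near zero)."""
    grads = []
    for y in range(len(image)):
        for x in range(len(image[0]) - 1):
            dx = image[y][x + 1] - image[y][x]  # horizontal difference
            rx = (x + 0.5) - cx                 # radial x-component at midpoint
            r = math.hypot(rx, y - cy)
            if r > 0:
                grads.append(dx * rx / r)       # projection onto radial direction
    n = len(grads)
    mean = sum(grads) / n
    var = sum((g - mean) ** 2 for g in grads) / n
    if var == 0:
        return 0.0
    return sum((g - mean) ** 3 for g in grads) / n / var ** 1.5

# Synthetic 8x8 image with a quadratic, vignetting-like falloff:
vignetted = [[1.0 - 0.01 * ((x - 3.5) ** 2 + (y - 3.5) ** 2)
              for x in range(8)] for y in range(8)]
print(radial_gradient_skewness(vignetted, 3.5, 3.5))
```

A correction method in this spirit would search over a parametric vignetting model for the correction that drives such an asymmetry measure toward zero, which is the intuition behind minimizing asymmetry of the RG distribution.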