Advancements in 3D Display Technologies for Single and Multi-User Applications
By: Alex Lyubarsky
Recommended publications
Stereo Capture and Display At
Creation of a Complete Stereoscopic 3D Workflow for SoFA
Allison Hettinger and Ian Krassner
1. Introduction
1.1 3D Trend
Stereoscopic 3D motion pictures have recently risen to popularity once again following the success of films such as James Cameron's Avatar. More and more films are being converted to 3D, but few are being shot in 3D: the cost of current technology, and of the expertise needed to use it, prevents most productions from shooting natively. Shooting in 3D is an advantage because it produces two slightly different images that mimic the two views the eyes receive in normal vision. Many productions take the cheaper route of shooting in 2D and converting to 3D. This yields a 3D image, but usually one of far lower quality than if the film had originally been shot in 3D, because a computer must synthesize the second image, which can introduce errors. It is also important to note that a 3D image is not necessarily a stereo image. "3D" can describe any image with an appearance of depth, such as a 3D animation, whereas stereo images exploit retinal disparity to create the illusion of objects extending out of and into the screen plane. Stereo images are optical illusions built on the cues the brain uses to perceive a scene. Monocular cues such as relative size and position, texture gradient, perspective, and occlusion help us determine the relative depth of objects in an image; binocular cues such as retinal disparity and convergence are what give the illusion of depth.
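The conversion route described above — a computer synthesizing the second eye's view from a single 2D frame — can be sketched as a depth-image-based rendering pass. The following is a minimal illustration, not code from the workflow itself; the function name, parameters, and shifting scheme are our own assumptions. The unfilled holes it leaves behind are exactly the kind of errors that make converted 3D inferior to natively shot stereo:

```python
import numpy as np

def synthesize_second_view(image, depth, max_disparity=16):
    """Approximate a right-eye view from a 2D frame plus a depth map:
    near pixels get large horizontal shifts, far pixels small ones.
    Holes (zeros) remain where no source pixel lands -- the artifacts
    that make converted 3D inferior to natively shot stereo."""
    h, w = depth.shape
    right = np.zeros_like(image)
    # Map depth to integer disparity: the nearest point -> max_disparity.
    span = np.ptp(depth) + 1e-9
    disparity = np.rint(max_disparity * (depth.max() - depth) / span).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right
```

Real conversion pipelines fill the holes by inpainting from neighboring pixels, which is where visible errors creep in.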
Stereoscopic Vision, Stereoscope, Selection of Stereo Pair and Its Orientation
International Journal of Science and Research (IJSR), ISSN (Online): 2319-7064, Impact Factor (2012): 3.358
Stereoscopic Vision, Stereoscope, Selection of Stereo Pair and Its Orientation
Sunita Devi, Research Associate, Haryana Space Application Centre (HARSAC), Department of Science & Technology, Government of Haryana, CCS HAU Campus, Hisar – 125 004, India
Abstract: A stereoscope deflects two normally converging lines of sight so that each eye views a different image. To derive maximum benefit from photographs, they are normally studied stereoscopically, and instruments for three-dimensional study of aerial photographs remain in use today. If, instead of looking at the original scene, we observe photos of that scene taken from two different viewpoints, we can, under suitable conditions, obtain a three-dimensional impression from the two-dimensional photos. This impression may be very similar to the impression given by the original scene, though in practice this is rarely so. A pair of photographs taken from two camera stations but covering some common area constitutes a stereoscopic pair which, when viewed in a certain manner, gives the impression that a three-dimensional model of the common area is being seen.
Keywords: Remote Sensing (RS), Aerial Photograph, Pocket or Lens Stereoscope, Mirror Stereoscope, Stereo Pair, Stereo Pair's Orientation
1. Introduction
A stereoscope is a device for viewing a stereoscopic pair of separate images, depicting left-eye and right-eye views of the same scene, as a single three-dimensional image. Two eyes must see two images that differ only slightly in angle of view, orientation, colour, brightness, shape, and size (Figure 1). Human eyes, fixed on the same object, provide two such points of observation.
Scalable Multi-View Stereo Camera Array for Real World Real-Time Image Capture and Three-Dimensional Displays
Scalable Multi-view Stereo Camera Array for Real World Real-Time Image Capture and Three-Dimensional Displays
Samuel L. Hill
B.S. Imaging and Photographic Technology, Rochester Institute of Technology, 2000; M.S. Optical Sciences, University of Arizona, 2002
Submitted to the Program in Media Arts and Sciences, School of Architecture and Planning on May 7, 2004 in Partial Fulfillment of the Requirements for the Degree of Master of Science in Media Arts and Sciences at the Massachusetts Institute of Technology, June 2004.
© 2004 Massachusetts Institute of Technology. All Rights Reserved.
Signature of Author: Samuel L. Hill, Program in Media Arts and Sciences, May 2004
Certified by: Dr. V. Michael Bove Jr., Principal Research Scientist, Program in Media Arts and Sciences, Thesis Supervisor
Accepted by: Andrew Lippman, Chairperson, Department Committee on Graduate Students
Abstract
The number of three-dimensional displays available is escalating, and yet the capture devices for multiple-view content are limited either to single-camera precision rigs restricted to stationary objects or to synthetically created animations. In this work we use inexpensive digital CMOS cameras to explore a multi-image capture paradigm and the gathering of real-world, real-time data of active and static scenes.
Interaction Methods for Smart Glasses: a Survey
This article has been accepted for publication in a future issue of IEEE Access but has not been fully edited; content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2018.2831081, IEEE Access.
Interaction Methods for Smart Glasses: A Survey
Lik-Hang Lee1 and Pan Hui1,2 (Fellow, IEEE)
1The Hong Kong University of Science and Technology, Department of Computer Science and Engineering
2The University of Helsinki, Department of Computer Science
Corresponding author: Pan Hui (e-mail: [email protected]).
ABSTRACT Since the launch of Google Glass in 2014, smart glasses have mainly been designed to support micro-interactions. The ultimate goal of becoming an augmented reality interface has not yet been attained due to an encumbrance of controls. Augmented reality involves superimposing interactive computer graphics onto physical objects in the real world. This survey reviews current research issues in the area of human-computer interaction for smart glasses. It first studies the smart glasses available on the market and then investigates the interaction methods proposed in the wide body of literature. The interaction methods can be classified into hand-held, touch, and touchless input; this paper focuses mainly on touch and touchless input. Touch input can be further divided into on-device and on-body, while touchless input can be classified into hands-free and freehand. Finally, we summarize the existing research efforts and trends, in which touch and touchless input are evaluated against a total of eight interaction goals.
Pinhole microLED Array as Point Source Illumination for Miniaturized Lensless Cell Monitoring Systems †
Proceedings
Pinhole microLED Array as Point Source Illumination for Miniaturized Lensless Cell Monitoring Systems †
Shinta Mariana 1,*, Gregor Scholz 1, Feng Yu 1, Agus Budi Dharmawan 1,2, Iqbal Syamsu 1,3, Joan Daniel Prades 4, Andreas Waag 1 and Hutomo Suryo Wasisto 1,*
1 Institute of Semiconductor Technology (IHT) and Laboratory for Emerging Nanometrology (LENA), Technische Universität Braunschweig, 38106 Braunschweig, Germany; gregor.scholz@tu-braunschweig.de (G.S.); f.yu@tu-braunschweig.de (F.Y.); a.dharmawan@tu-braunschweig.de (A.B.D.); i.syamsu@tu-braunschweig.de (I.S.); a.waag@tu-braunschweig.de (A.W.)
2 Faculty of Information Technology, Universitas Tarumanagara, 11440 Jakarta, Indonesia
3 Research Center for Electronics and Telecommunication, Indonesian Institute of Sciences (LIPI), 40135 Bandung, Indonesia
4 MIND, Department of Electronic and Biomedical Engineering, Universitat de Barcelona, 08028 Barcelona, Spain; [email protected]
* Correspondence: s.mariana@tu-braunschweig.de (S.M.); h.wasisto@tu-braunschweig.de (H.S.W.)
† Presented at the Eurosensors 2018 Conference, Graz, Austria, 9–12 September 2018.
Published: 21 November 2018
Abstract: Pinhole-shaped light-emitting diode (LED) arrays with dimensions ranging from 100 μm down to 5 μm have been developed as point illumination sources. The proposed microLED arrays, which are based on gallium nitride (GaN) technology and emit in the blue spectral region (λ = 465 nm), are integrated into a compact lensless holographic microscope for non-invasive, label-free cell sensing and imaging. In experiments using single pinhole LEDs with a diameter of 90 μm, the reconstructed images show better resolution and enhanced image quality compared to those captured using a commercial surface-mount device (SMD) based LED.
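Lensless in-line holography of the kind the abstract describes records an interference pattern on the sensor and reconstructs the cell image numerically by back-propagating that pattern. A minimal sketch of the standard angular-spectrum propagation step — the function name and parameter values are our own assumptions, with λ = 465 nm taken from the paper:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, z):
    """Propagate a complex optical field a distance z (all units meters)
    using the angular spectrum method; with z < 0 this back-propagates a
    recorded hologram toward the object plane to refocus it."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are dropped.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(
        arg >= 0,
        np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
        0.0,
    )
    return np.fft.ifft2(np.fft.fft2(field) * H)

# e.g. refocus a hologram recorded ~2 mm from the object plane,
# illuminated by a 465 nm pinhole LED (hypothetical geometry):
# obj = angular_spectrum_propagate(hologram, 465e-9, 1.12e-6, -2e-3)
```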
An Augmented Reality Social Communication Aid for Children and Adults with Autism: User and Caregiver Report of Safety and Lack of Negative Effects
bioRxiv preprint doi: https://doi.org/10.1101/164335; this version posted July 19, 2017. The copyright holder for this preprint (which was not certified by peer review) is the author/funder. All rights reserved. No reuse allowed without permission.
An Augmented Reality Social Communication Aid for Children and Adults with Autism: User and Caregiver Report of Safety and Lack of Negative Effects
Ned T. Sahin1,2*, Neha U. Keshav1, Joseph P. Salisbury1, Arshya Vahabzadeh1,3
1Brain Power, 1 Broadway 14th Fl, Cambridge MA 02142, United States
2Department of Psychology, Harvard University, United States
3Department of Psychiatry, Massachusetts General Hospital, Boston
* Corresponding Author. Ned T. Sahin, PhD, Brain Power, 1 Broadway 14th Fl, Cambridge, MA 02142, USA. Email: [email protected].
Abstract
Background: Interest has been growing in the use of augmented reality (AR) based social communication interventions for autism spectrum disorder (ASD), yet little is known about their safety or negative effects, particularly for head-worn digital smartglasses. Research to understand the safety of smartglasses in people with ASD is crucial, given that these individuals may have altered sensory sensitivity, impaired verbal and non-verbal communication, and may experience extreme distress in response to changes in routine or environment.
Objective: The objective of this report was to assess the safety and negative effects of the Brain Power Autism System (BPAS), a novel AR smartglasses-based social communication aid for children and adults with ASD. BPAS uses emotion-based artificial intelligence and a smartglasses hardware platform that keeps users engaged in the social world by encouraging "heads-up" interaction, unlike tablet- or phone-based apps.
Virtual Reality and Visual Perception by Jared Bendis
Introduction
Goldstein (2002) defines perception as a "conscious sensory experience" (p. 6), and as scientists investigate how the human perceptual system works, they also find themselves investigating how it doesn't work, and how it can be fooled, exploited, and even circumvented. The pioneers in controlling the human perceptual system have been in the field of virtual reality. In Simulated and Virtual Realities – Elements of Perception, Carr (1995) defines virtual reality as "…the stimulation of human perceptual experience to create an impression of something which is not really there" (p. 5). Heilig (2001) refers to this form of "realism" as "experience"; in his 1955 article "The Cinema of the Future," he addresses the need to look carefully at perception and breaks down the precedence of perceptual attention as: sight 70%, hearing 20%, smell 5%, touch 4%, taste 1% (p. 247). Not surprisingly, sight is considered the most important of the senses, as Leonardo da Vinci observed: "The eye deludes itself less than any of the other senses, because it sees by none other than the straight lines which compose a pyramid, the base of which is the object, and the lines conduct the object to the eye… But the ear is strongly subject to delusions about the location and distance of its objects because the images [of sound] do not reach it in straight lines, like those of the eye, but by tortuous and reflexive lines. … The sense of smell is even less able to locate the source of an odour.
The Artistic & Scientific World in 8K Super Hi-Vision
17th International Display Workshops 2010 (IDW 2010)
Fukuoka, Japan, 1-3 December 2010
Volume 1 of 3
ISBN: 978-1-61782-701-3; ISSN: 1883-2490
Printed from e-media with permission by: Curran Associates, Inc., 57 Morehouse Lane, Red Hook, NY 12571. Some format issues inherent in the e-media version may also appear in this print version.
Copyright © (2010) by the Society for Information Display. All rights reserved. Printed by Curran Associates, Inc. (2012). For permission requests, please contact the Society for Information Display: 1475 S. Bascom Ave. Suite 114, Campbell, California 95008-4006; Phone: (408) 879-3901; Fax: (408) 879-3833 or (408) 516-8306; [email protected]
Additional copies of this publication are available from: Curran Associates, Inc., 57 Morehouse Lane, Red Hook, NY 12571 USA; Phone: 845-758-0400; Fax: 845-758-2634; Email: [email protected]; Web: www.proceedings.com
TABLE OF CONTENTS, VOLUME 1
KEYNOTE ADDRESS
The Artistic & Scientific World in 8K Super Hi-Vision (Yoichiro Kawaguchi) ... 1
INVITED ADDRESS
TAOS-TFTs: History and Perspective (Hideo Hosono) ... 5
LCT1: PHOTO ALIGNMENT TECHNOLOGY
The UV2A Technology for Large Size LCD-TV Panels ... 9
Holographic Optics for Thin and Lightweight Virtual Reality
Holographic Optics for Thin and Lightweight Virtual Reality
ANDREW MAIMONE, Facebook Reality Labs
JUNREN WANG, Facebook Reality Labs
Fig. 1. Left: Photo of full color holographic display in benchtop form factor. Center: Prototype VR display in sunglasses-like form factor with display thickness of 8.9 mm. Driving electronics and light sources are external. Right: Photo of content displayed on prototype in center image. Car scenes by komba/Shutterstock.
We present a class of display designs combining holographic optics, directional backlighting, laser illumination, and polarization-based optical folding to achieve thin, lightweight, and high performance near-eye displays for virtual reality. Several design alternatives are proposed, compared, and experimentally validated as prototypes. Using only thin, flat films as optical components, we demonstrate VR displays with thicknesses of less than 9 mm, fields of view of over 90° horizontally, and form factors approaching sunglasses. In a benchtop form factor, we also demonstrate a full color display using wavelength-multiplexed holographic lenses that uses laser illumination to provide a large gamut and highly saturated color.
…small text near the limit of human visual acuity. This use case also brings VR out of the home and into work and public spaces, where socially acceptable sunglasses and eyeglasses form factors prevail. VR has made good progress in the past few years, and entirely self-contained head-worn systems are now commercially available. However, current headsets still have box-like form factors and provide only a fraction of the resolution of the human eye. Emerging optical design techniques, such as polarization-based optical folding,
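The polarization-based optical folding this abstract relies on (the "pancake" principle) works because two passes through a quarter-wave plate rotate linear polarization by 90°, so a reflective polarizer that reflected the light on its first encounter transmits it on the second, folding the optical path into a thin package. A quick Jones-calculus check of that fact — an idealized sketch under our own conventions (the mirror's Jones matrix is taken as identity), not code from the paper:

```python
import numpy as np

def qwp(theta):
    """Jones matrix of a quarter-wave plate with fast axis at angle theta."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    retarder = np.diag([1.0, 1j])          # quarter-wave retardance
    return r @ retarder @ r.T

h = np.array([1.0, 0.0])                   # horizontal linear polarization

# One pass at 45 degrees: linear light becomes circular
# (equal amplitudes, 90-degree relative phase).
c = qwp(np.pi / 4) @ h

# Double pass (QWP, mirror, QWP) acts as a half-wave plate:
# horizontal flips to vertical, so a reflective polarizer that
# reflected the first pass transmits the second -- folding the path.
v = qwp(np.pi / 4) @ qwp(np.pi / 4) @ h
```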
Stereo 3D (Depth) from 2D
Stereo 3D (Depth) from 2D
• 3D information is lost by projection; how do we recover it? (Topics: viewing stereograms, autostereograms, depth from stereo, image to 3D model.)
• Depth cues: shadows, occlusions, shading, size constancy, perspective illusions, height in plane, texture gradient.
• 3D from 2D + accommodation: accommodation (focus) — change in lens curvature according to object depth, effective for depths of 20–300 cm; eye vergence — effective for depths up to 6 m; motion; stereo.
• Stereo vision: in a system with 2 cameras (eyes), 2 different images are captured. The "disparity" between the images is larger for closer objects (disp ∝ 1/depth), and "fusion" of the 2 images gives depth information.
• Image separation for stereo: special glasses — red/green images with red/green glasses (anaglyphs); orthogonal polarization; alternating shuttering. Viewers: parlor stereo viewer (1850), Viewmaster (1939), ViduTech (2011); active shutter systems.
• Orthogonal polarization: two polarized projectors are used (or alternating polarization), with linear or circular polarizers (left-handed and right-handed); circular polarizer glasses are the same as the polarizers, but with the light direction reversed.
• Glasses-free TVs and computer screens: the parallax stereogram (parallax barrier display) uses vertical slits to block part of the screen from each eye; the lenticular lens method uses lens arrays to send a different image to each eye.
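The disparity relation above (larger disparity for closer objects) comes from triangulating the two camera views: for a rectified pair with focal length f (in pixels) and baseline B between the cameras, depth is Z = f·B/d. A small sketch — the helper name and the numbers are illustrative, not from the slides:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair:
    Z = f * B / d, so disparity shrinks as objects recede."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means infinitely far)")
    return focal_px * baseline_m / disparity_px

# Eyes roughly 6.5 cm apart, an assumed focal length of 700 px:
near = depth_from_disparity(700, 0.065, 70)   # large disparity -> close object
far = depth_from_disparity(700, 0.065, 7)     # 10x smaller disparity -> 10x farther
```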
A Review and Selective Analysis of 3D Display Technologies for Anatomical Education
University of Central Florida
STARS — Electronic Theses and Dissertations, 2004-2019
STARS Citation: Hackett, Matthew, "A Review and Selective Analysis of 3D Display Technologies for Anatomical Education" (2018). Electronic Theses and Dissertations, 2004-2019. 6408. https://stars.library.ucf.edu/etd/6408
A REVIEW AND SELECTIVE ANALYSIS OF 3D DISPLAY TECHNOLOGIES FOR ANATOMICAL EDUCATION
by MATTHEW G. HACKETT
BSE University of Central Florida 2007; MSE University of Florida 2009; MS University of Central Florida 2012
A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Modeling and Simulation program in the College of Engineering and Computer Science at the University of Central Florida, Orlando, Florida, Summer Term 2018. Major Professor: Michael Proctor
©2018 Matthew Hackett
ABSTRACT
The study of anatomy is complex and difficult for students in both graduate and undergraduate education. Researchers have attempted to improve anatomical education with the inclusion of three-dimensional visualization, with the prevailing finding that 3D is beneficial to students. However, there is limited research on the relative efficacy of different 3D modalities, including monoscopic, stereoscopic, and autostereoscopic displays.
Light Engines for XR Smartglasses by Jonathan Waldern, Ph.D
August 28, 2020
Light Engines for XR Smartglasses
By Jonathan Waldern, Ph.D.
The near-term obstacle to an elegant form factor for Extended Reality (XR) glasses is the size of the light engine that projects an image into the waveguide, providing a daylight-bright, wide field-of-view mobile display.
For original equipment manufacturers (OEMs) developing XR smartglasses that employ diffractive waveguide lenses, there are several light engine architectures contending for the throne. The highly transmissive, daylight-bright glasses demanded by early-adopting customers call for a combination of display efficiency, 2k-by-2k-and-up resolution, and high contrast that simply does not exist today in the required sub-~2 cc (cubic centimeter) package size. This thought piece examines both laser and LED contenders. It becomes clear that even if microLED (µLED) solutions do emerge as forecast in the next five years, diffractive waveguides are fundamentally not well paired with broadband LED illumination, and so laser-based light engines are the only realistic option over the next 5+ years.
Bottom Line Up Front
• µLED, a new emissive panel technology causing considerable excitement in the XR community, does dispense with some bulky refractive illumination optics and beam splitters, but still requires a bulky projection lens. An even greater fundamental problem is that while µLEDs are bright compared with OLED, the technology falls short of the maximum, focused brightness needed for diffractive and holographic waveguides, due to the fundamental inefficiency of LED divergence.
• A laser diode (LD) based light engine emits a pencil-like beam of light, which permits higher efficiency at a higher F#.
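The divergence argument above can be made concrete with étendue, the conserved product of emitting area and solid angle: an optic can only accept light whose étendue fits within its own acceptance, so a wide-divergence Lambertian µLED wastes most of its output at a small, high-F# projection lens, while a narrow laser beam couples fully. A back-of-envelope sketch with illustrative numbers of our own, not figures from the article:

```python
import math

def etendue(area_mm2, half_angle_deg):
    """Étendue G = pi * A * sin^2(theta) of a uniform circular-cone emitter."""
    return math.pi * area_mm2 * math.sin(math.radians(half_angle_deg)) ** 2

g_led = etendue(0.01, 60.0)     # Lambertian-like microLED pixel, wide divergence
g_laser = etendue(0.01, 5.0)    # pencil-like laser diode beam
g_lens = etendue(0.01, 10.0)    # acceptance of a small, high-F# projection optic

# Upper bound on geometric coupling efficiency: étendue is conserved,
# so a source cannot be "compressed" to fit a smaller acceptance.
eff_led = min(1.0, g_lens / g_led)
eff_laser = min(1.0, g_lens / g_laser)
```

With these (assumed) numbers, the laser couples completely while the LED is limited to a few percent, which is the sense in which LED divergence is a fundamental rather than an engineering inefficiency.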