Holographic Near-Eye Displays Based on Overlap-Add Stereograms
16 pages · PDF · 1020 KB

Recommended publications
-
The Artistic & Scientific World in 8K Super Hi-Vision
17th International Display Workshops 2010 (IDW 2010), Fukuoka, Japan, 1-3 December 2010. Volume 1 of 3. ISBN: 978-1-61782-701-3, ISSN: 1883-2490. Copyright © 2010 by the Society for Information Display; printed from e-media with permission by Curran Associates, Inc. (2012).
Table of contents, Volume 1 (excerpt):
Keynote Address: The Artistic & Scientific World in 8K Super Hi-Vision, Yoichiro Kawaguchi
Invited Address: TAOS-TFTs: History and Perspective, Hideo Hosono
LCT1: Photo Alignment Technology: The UV2A Technology for Large Size LCD-TV Panels
-
Holographic Optics for Thin and Lightweight Virtual Reality
ANDREW MAIMONE, Facebook Reality Labs
JUNREN WANG, Facebook Reality Labs

Fig. 1. Left: Photo of full color holographic display in benchtop form factor. Center: Prototype VR display in sunglasses-like form factor with display thickness of 8.9 mm. Driving electronics and light sources are external. Right: Photo of content displayed on prototype in center image. Car scenes by komba/Shutterstock.

Abstract: We present a class of display designs combining holographic optics, directional backlighting, laser illumination, and polarization-based optical folding to achieve thin, lightweight, and high performance near-eye displays for virtual reality. Several design alternatives are proposed, compared, and experimentally validated as prototypes. Using only thin, flat films as optical components, we demonstrate VR displays with thicknesses of less than 9 mm, fields of view of over 90° horizontally, and form factors approaching sunglasses. In a benchtop form factor, we also demonstrate a full color display using wavelength-multiplexed holographic lenses that uses laser illumination to provide a large gamut and highly saturated color.

Introduction (excerpt): ... small text near the limit of human visual acuity. This use case also brings VR out of the home and into work and public spaces where socially acceptable sunglasses and eyeglasses form factors prevail. VR has made good progress in the past few years, and entirely self-contained head-worn systems are now commercially available. However, current headsets still have box-like form factors and provide only a fraction of the resolution of the human eye. Emerging optical design techniques, such as polarization-based optical folding, ...
-
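The entry above relies on polarization-based optical folding ("pancake" optics), in which light traverses the lens cavity multiple times before exiting, so the usable optical path exceeds the physical thickness. A minimal sketch of that trade-off; the three-pass factor is the commonly described pancake-cavity geometry, and the numeric example is illustrative, not a value from the paper:

```python
def folded_thickness(optical_path_mm, passes=3):
    """Physical cavity gap needed when light crosses the gap `passes` times.

    In a pancake cavity, light typically crosses the lens-to-reflector gap
    three times (forward, back, forward again) before exiting, so the
    effective optical path is ~3x the physical gap.
    """
    return optical_path_mm / passes

# Illustrative: a 27 mm optical path folds into a 9 mm physical cavity.
print(folded_thickness(27.0))  # 9.0
```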
A Review and Selective Analysis of 3D Display Technologies for Anatomical Education
University of Central Florida, STARS: Electronic Theses and Dissertations, 2004-2019 (2018). Doctoral dissertation (Open Access). Related works: https://stars.library.ucf.edu/etd
STARS Citation: Hackett, Matthew, "A Review and Selective Analysis of 3D Display Technologies for Anatomical Education" (2018). Electronic Theses and Dissertations, 2004-2019. 6408. https://stars.library.ucf.edu/etd/6408
A dissertation by Matthew G. Hackett (BSE University of Central Florida 2007, MSE University of Florida 2009, MS University of Central Florida 2012), submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Modeling and Simulation program, College of Engineering and Computer Science, University of Central Florida, Orlando, Florida, Summer Term 2018. Major Professor: Michael Proctor. © 2018 Matthew Hackett.
Abstract: The study of anatomy is complex and difficult for students in both graduate and undergraduate education. Researchers have attempted to improve anatomical education with the inclusion of three-dimensional visualization, with the prevailing finding that 3D is beneficial to students. However, there is limited research on the relative efficacy of different 3D modalities, including monoscopic, stereoscopic, and autostereoscopic displays.
-
Multi-Perspective Stereoscopy from Light Fields
The MIT Faculty has made this article openly available.
Citation: Changil Kim, Alexander Hornung, Simon Heinzle, Wojciech Matusik, and Markus Gross. 2011. Multi-perspective stereoscopy from light fields. ACM Trans. Graph. 30, 6, Article 190 (December 2011), 10 pages.
As Published: http://dx.doi.org/10.1145/2070781.2024224. Publisher: Association for Computing Machinery (ACM). Version: Author's final manuscript. Citable link: http://hdl.handle.net/1721.1/73503. Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike 3.0, http://creativecommons.org/licenses/by-nc-sa/3.0/
Authors: Changil Kim (ETH Zurich, Disney Research Zurich), Alexander Hornung (Disney Research Zurich), Simon Heinzle (Disney Research Zurich), Wojciech Matusik (Disney Research Zurich, MIT CSAIL), Markus Gross (ETH Zurich, Disney Research Zurich). © Disney Enterprises, Inc.
Figure 1 [panels: Input Images, 3D Light Field, Multi-perspective Cuts, Stereoscopic Output]: We propose a framework for flexible stereoscopic disparity manipulation and content post-production. Our method computes multi-perspective stereoscopic output images from a 3D light field that satisfy arbitrary prescribed disparity constraints. We achieve this by computing piecewise continuous cuts (shown in red) through the light field that enable per-pixel disparity control. In this particular example we employed gradient domain processing to emphasize the depth of the airplane while suppressing disparities in the rest of the scene.
Abstract: This paper addresses stereoscopic view generation from a light field. We present a framework that allows for the generation of stereoscopic image pairs with per-pixel control over disparity, ...
Introduction (excerpt): ...tions of autostereoscopic and multi-view autostereoscopic displays even glasses-free solutions become available to the consumer. However, the task of creating convincing yet perceptually pleasing stereoscopic content remains difficult.
-
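The "multi-perspective cut" idea in this entry, assembling each output pixel from a possibly different view of a 3D light field, can be sketched in a few lines. Here the light field is a stack of views of shape (S, H, W) and the cut is a per-pixel view-index map; this is a hypothetical minimal version, since the paper computes the cut by optimizing disparity constraints, which is omitted here:

```python
import numpy as np

def render_cut(light_field, cut):
    """Assemble a multi-perspective image from a 3D light field.

    light_field: array (S, H, W) -- S views of an H x W scene.
    cut: integer array (H, W) -- which view each output pixel samples,
         i.e. a piecewise cut through the view dimension.
    """
    S, H, W = light_field.shape
    rows = np.arange(H)[:, None]   # broadcast row indices
    cols = np.arange(W)[None, :]   # broadcast column indices
    return light_field[cut, rows, cols]

# Toy example: 4 constant-valued views; a cut that switches views mid-image.
lf = np.stack([np.full((2, 3), s, dtype=float) for s in range(4)])
cut = np.array([[0, 0, 1], [2, 3, 3]])
print(render_cut(lf, cut))
```

Because each view here is constant-valued, the rendered image simply echoes the cut, which makes the per-pixel view selection easy to verify.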
Advance Program 2018 Display Week International Symposium
ADVANCE PROGRAM: 2018 Display Week International Symposium
May 22-25, 2018 (Tuesday through Friday), Los Angeles Convention Center, Los Angeles, California, US

Session 1: Annual SID Business Meeting. Tuesday, May 22 / 8:00 - 8:20 am / Concourse Hall 151-153
Session 2: Opening Remarks / Keynote Addresses. Tuesday, May 22 / 8:20 - 10:20 am / Concourse Hall 151-153. Chair: Cheng Chen, Apple, Inc., Cupertino, CA, US
2.1: Keynote Address 1: Deqiang Zhang, Visionox
2.2: Keynote Address 2: Douglas Lanman, Oculus
2.3: Keynote Address 3: Hiroshi Amano, Nagoya University
Session 3: AR/VR I: Display Systems (AI and AR & VR / Display Systems / Emerging Technologies and Applications). Tuesday, May 22, 2018 / 11:10 am - 12:30 pm / Room 515A. Chair: David Eccles, Rockwell Collins. Co-Chair: Vincent Gu, Apple, Inc.
3.1: Invited Paper: VR Standards and Guidelines. Matthew Brennesholtz, Brennesholtz Consulting, Pleasantville, NY, US
3.2: Distinguished Paper: An 18-Mpixel 4.3-in. 1443-ppi 120-Hz OLED Display for Wide-Field-of-View High-Acuity Head-Mounted Displays. Carlin Vieri, Google LLC, Mountain View, CA, US
3.3: Distinguished Student Paper: Resolution-Enhanced Light-Field Near-to-Eye Display Using E-Shifting with a Birefringent Plate. Kuei-En Peng, National Chiao Tung University, Hsinchu, Taiwan, ROC
3.4: Doubling the Pixel Density of Near-to-Eye Displays. Tao Zhan, College of Optics and Photonics, University of Central Florida, Orlando, FL, US
3.5: RGB Superluminescent Diodes for AR Microdisplays. Marco Rossetti, Exalos AG, Schlieren, Switzerland
Session 4: Quantum-Dot ...
-
State-Of-The-Art in Holography and Auto-Stereoscopic Displays
Daniel Jönsson, 2019-05-13

Contents:
Introduction
Auto-stereoscopic displays
  Two-View Autostereoscopic Displays
  Multi-view Autostereoscopic Displays
  Light Field Displays
Market
  Display panels
  AR
Application Fields
Companies ...
-
Light Field Saliency Detection with Deep Convolutional Networks
Jun Zhang, Yamei Liu, Shengping Zhang, Ronald Poppe, and Meng Wang

Abstract: Light field imaging presents an attractive alternative to RGB imaging because of the recording of the direction of the incoming light. The detection of salient regions in a light field image benefits from the additional modeling of angular patterns. For RGB imaging, methods using CNNs have achieved excellent results on a range of tasks, including saliency detection. However, it is not trivial to use CNN-based methods for saliency detection on light field images because these methods are not specifically designed for processing light field inputs. In addition, current light field datasets are not sufficiently large to train CNNs. To overcome these issues, we present a new Lytro Illum dataset, which contains 640 light fields and their corresponding ground-truth saliency maps. Compared to current light field saliency datasets [1], [2], our new dataset is larger, of higher quality, contains more variation and more types of light field inputs. This makes our dataset suitable for training deeper networks and benchmarking. Furthermore, we propose a novel end-to-end CNN-based framework for light field saliency detection. Specifically, we propose three novel MAC (Model Angular Changes) blocks to process light field micro-lens images.

Fig. 1. Illustrations of light field representations. [Panels (a)-(d) show the main lens, micro-lens array, and photosensor: micro-lens images Lm(x, y, u, v), sampled behind each micro-lens, and sub-aperture images Ls(u, v, x, y), assembled from a fixed angular sample under every micro-lens.]
-
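The micro-lens and sub-aperture representations in Fig. 1 are two slicings of the same 4D array L(x, y, u, v): fixing the spatial index (x, y) gives a micro-lens image over angles (u, v), while fixing the angular index (u, v) gives a sub-aperture view over (x, y). A small numpy sketch; the array dimensions are illustrative stand-ins, not the Lytro Illum's actual sampling:

```python
import numpy as np

# 4D light field L[x, y, u, v]: spatial grid (X, Y), angular grid (U, V).
X, Y, U, V = 6, 8, 9, 9
L = np.random.rand(X, Y, U, V)

def micro_lens_image(L, x, y):
    """All angular samples recorded behind one micro-lens at position (x, y)."""
    return L[x, y, :, :]          # shape (U, V)

def sub_aperture_image(L, u, v):
    """One view: the same angular sample (u, v) taken under every micro-lens."""
    return L[:, :, u, v]          # shape (X, Y)

print(micro_lens_image(L, 0, 0).shape)    # (9, 9)
print(sub_aperture_image(L, 4, 4).shape)  # (6, 8)
```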
Towards Mobile Embodied 3D Avatar As Telepresence Vehicle
Yutaka Tokuda (1), Atsushi Hiyama (2), Takahiro Miura (2), Tomohiro Tanikawa (2), and Michitaka Hirose (2)
1 Graduate School of Engineering, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo 153-8904, Japan
2 Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
{ytokuda,atsushi,miu,tani,hirose}@cyber.t.u-tokyo.ac.jp

Abstract. In this paper, we present a mobile embodied 3D avatar to shift the rich experience of avatars from the virtual world to real life with a new style of telepresence. Conventional telepresence research has focused on the exact recreation of face-to-face communication at a fixed position in a specialized room, so there has been much less research on life-sized mobile telepresence systems, despite the many off-the-shelf mobile telepresence robots available. We propose various scalable holographic displays to visualize a life-sized avatar in real settings. In addition, we introduce an architecture to control the embodied avatar according to the user's intention by extending a popular architecture for multimodal virtual humans, namely SAIBA. Our primitive prototype system was tested with 5 simple avatar animations embodied with a wheeled platform robot and a life-sized transparent holographic display, and demonstrated realistic avatar movement complying with the user's intention and the situation at the avatar's remote location.

Keywords: interaction techniques, avatar, telepresence, telework, mobility, transparent display,
-
Holographic Three-Dimensional Virtual Reality and Augmented Reality Display Based on 4K-Spatial Light Modulators
applied sciences, Article
Holographic Three-Dimensional Virtual Reality and Augmented Reality Display Based on 4K-Spatial Light Modulators
Hongyue Gao †, Fan Xu †, Jicheng Liu *,†, Zehang Dai, Wen Zhou, Suna Li, Yingjie Yu, and Huadong Zheng
Ultra-precision Optoelectronic Metrology and Information Display Technologies Research Center, Department of Precision Mechanical Engineering, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200072, China
* Correspondence: [email protected]
† These authors contributed equally to this work.
Received: 11 January 2019; Accepted: 11 March 2019; Published: 20 March 2019

Abstract: In this paper, we propose a holographic three-dimensional (3D) head-mounted display based on 4K spatial light modulators (SLMs). This work aims to overcome the limitations of stereoscopic 3D virtual reality and augmented reality head-mounted displays. We build and compare two systems using 2K and 4K SLMs with pixel pitches of 8.1 µm and 3.74 µm, respectively. One is a monocular system for each eye, and the other is a binocular system using two tiled SLMs for two eyes. The viewing angle of the holographic head-mounted 3D display is enlarged from 3.8° to 16.4° by SLM tiling, which demonstrates potential applications of true 3D displays in virtual reality and augmented reality.

Keywords: holography; holographic display; digital holography; virtual reality

1. Introduction. Virtual reality (VR) and augmented reality (AR) have been hot topics recently, and 3D technology is an important part of VR technology [1].
-
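The viewing angles quoted in this abstract follow directly from the diffraction limit of the SLM: a pixel pitch p diffracts light of wavelength λ into a cone of full angle 2·arcsin(λ / 2p). A quick numeric check; the green wavelength of 532 nm is an assumption here, not a value stated in the excerpt:

```python
import math

def viewing_angle_deg(pixel_pitch_m, wavelength_m=532e-9):
    """Full diffraction viewing angle of an SLM hologram: 2*arcsin(lambda / (2*p))."""
    return 2.0 * math.degrees(math.asin(wavelength_m / (2.0 * pixel_pitch_m)))

print(round(viewing_angle_deg(8.10e-6), 1))  # 3.8 degrees for the 2K SLM (8.1 um pitch)
print(round(viewing_angle_deg(3.74e-6), 1))  # 8.2 degrees for the 4K SLM (3.74 um pitch)
```

Two tiled 4K SLMs roughly double the 8.2° single-panel angle, consistent with the 16.4° figure in the abstract.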
Convolutional Networks for Shape from Light Field
Stefan Heber (1), Thomas Pock (1,2)
1 Graz University of Technology; 2 Austrian Institute of Technology
[email protected], [email protected]

Abstract: Convolutional Neural Networks (CNNs) have recently been successfully applied to various Computer Vision (CV) applications. In this paper we utilize CNNs to predict depth information for given Light Field (LF) data. The proposed method learns an end-to-end mapping between the 4D light field and a representation of the corresponding 4D depth field in terms of 2D hyperplane orientations. The obtained prediction is then further refined in a post processing step by applying a higher-order regularization. Existing LF datasets are not sufficient for the purpose of the training scheme tackled in this paper. This is mainly due to the fact that the ground truth depth of existing datasets is inaccurate and/or the datasets are limited to a small number of LFs. This made it necessary to generate a new synthetic LF dataset, which is based on the raytracing software ...

Figure 1. Illustration of LF data. (a) shows a sub-aperture image with vertical and horizontal EPIs. The EPIs correspond to the positions indicated with dashed lines in the sub-aperture image. (b) shows the corresponding depth field, where red regions are close to the camera and blue regions are further away. In the EPIs a set of 2D hyperplanes is indicated with yellow lines, where corresponding scene points are highlighted with the same color in the sub-aperture representation in (b).
-
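The "2D hyperplane orientation" this paper predicts is the slope of lines in the EPI: a scene point at disparity d traces the line x = x0 + d·s across the view coordinate s, so slope encodes depth. A tiny synthetic sketch of that relation, using a noiseless least-squares line fit; this is illustrative only and is not the paper's CNN:

```python
import numpy as np

def epi_line_positions(x0, disparity, n_views):
    """Pixel position of one scene point in each view of a horizontal EPI:
    x(s) = x0 + disparity * s."""
    s = np.arange(n_views)
    return x0 + disparity * s

def estimate_disparity(positions):
    """Recover the EPI line slope (= disparity in pixels per view) by least squares."""
    s = np.arange(len(positions))
    slope, _intercept = np.polyfit(s, positions, 1)
    return slope

pos = epi_line_positions(x0=10.0, disparity=0.5, n_views=9)
print(estimate_disparity(pos))  # ~0.5
```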
3D Display System
3D Display System
[email protected], 5/7/2020

- 3D Display: Human Visual System; Stereo Disparity
- 3D Display Systems: stereoscopic display, autostereoscopic display, holographic display
- Classification of 3D Display:
  Glasses: stereoscopic display (polarization method, shutter glasses method)
  Non-glasses: autostereoscopic display (lenticular-type, barrier-type), floating, integral imaging, holography (real 3D)
- Reasons for not buying 3D TV (NPD Group, 2011)
- Multiview display: 15 views, 50 in., 1999, Cambridge Univ.; http://www.uv.es/imaging3/Structure.htm
- Super multi-view camera system: M x N CCD cameras [1024 x 768]; resolution: 1024 x 768 x M x N
- Ideal 3D display technique: based on wave optics; real images in space; full parallax; continuous viewing points; coherent illumination; color images
- Fig. 4-1. Chart of the principle of holographic display operation: 1 computer, 2 RF processor, 3 RF signal, 4 acousto-optical modulator, 5 laser, 6 lens, 7 vertical scanner, 8 beam splitter, 9 horizontally connected scanners, 10 output lens, 11 ...
-
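The stereo-disparity concept named in these slides reduces, for a rectified stereo pair, to the standard triangulation relation Z = f·B / d, with focal length f (in pixels), baseline B, and disparity d (in pixels). A hedged sketch; this relation is not spelled out in the slides, and the numbers are illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, human-like baseline B = 0.064 m, disparity 16 px.
print(round(depth_from_disparity(800, 0.064, 16), 2))  # 3.2 (meters)
```

Note the inverse relation: halving the disparity doubles the estimated depth, which is why distant points produce nearly zero disparity.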
Head Mounted Display Optics II
Gordon Wetzstein, Stanford University
EE 267 Virtual Reality, Lecture 8
stanford.edu/class/ee267/

Lecture overview:
- focus cues & the vergence-accommodation conflict
- advanced optics for VR with focus cues: gaze-contingent varifocal displays, volumetric and multi-plane displays, near-eye light field displays, Maxwellian-type displays, AR displays

Magnified display: the big challenge is that the virtual image appears at a fixed focal plane, so no focus cues are provided. Screen distance d, virtual image distance d', and lens focal length f are related by 1/d + 1/d' = 1/f.

Importance of focus cues decreases with age (presbyopia). [Figure: nearest focus distance, from 0 D (∞ cm) to 16 D (6 cm), versus age, 8 to 72 years; Duane, 1912] [Figure: relative importance of depth cues; Cutting & Vishton, 1995]

The vergence-accommodation conflict (VAC): in the real world, vergence and accommodation match; on current stereo displays, including VR HMDs, they mismatch. Accommodation is driven by retinal blur: the blur gradient across the retina drives the accommodation response. [Slide sequence: blur gradient driven accommodation; top views contrasting the real world with today's stereo displays]

Consequences of the vergence-accommodation conflict:
- visual discomfort (eye tiredness & eyestrain) after ~20 minutes of stereoscopic depth judgments (Hoffman et al. 2008; Shibata et al. 2011)
- degrades visual performance in terms of reaction times and acuity for stereoscopic vision
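The magnifier relation on the "Magnified Display" slide, 1/d + 1/d' = 1/f, can be checked numerically. With the screen placed just inside the focal length (d < f), the solved image distance d' comes out negative under this sign convention, i.e. a magnified virtual image on the same side as the screen, which is the HMD configuration the slide describes. The focal length and screen distance below are illustrative assumptions, not values from the lecture:

```python
def virtual_image_distance(d_m, f_m):
    """Solve 1/d + 1/d' = 1/f for the image distance d'.

    For d < f the result is negative under this convention: a virtual
    image located |d'| in front of the lens (the HMD magnifier case).
    """
    if d_m == f_m:
        return float("inf")  # screen at the focal plane: image at optical infinity
    return 1.0 / (1.0 / f_m - 1.0 / d_m)

# Illustrative numbers: f = 40 mm lens, screen at d = 38 mm.
d_prime = virtual_image_distance(0.038, 0.040)
print(round(d_prime, 3))  # -0.76: virtual image ~0.76 m away
```

This is exactly why the virtual image sits at one fixed focal plane: d, f are fixed by the headset's construction, so d' cannot track the depth of the rendered content.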