Demystifying the Future of the Screen
Recommended publications
UNIVERSITY OF CALGARY: Two-Sided Transparent Display as a Collaborative Medium
A thesis by Jiannan Li, submitted to the Faculty of Graduate Studies in partial fulfilment of the requirements for the degree of Master of Science, Graduate Program in Computer Science, University of Calgary, Calgary, Alberta, January 2015. Supervisor: Ehud Sharlin (Computer Science); co-supervisor: Saul Greenberg (Computer Science); examiner: Sonny Chan (Computer Science); external examiner: Joshua Taron (Faculty of Environmental Design). Abstract: Transparent displays are 'see-through' screens: a person can simultaneously view both the graphics on the screen and real-world content visible through the screen. Interactive transparent displays can serve as an important medium supporting face-to-face collaboration, where people interact with both sides of the display and work together. Such displays enhance workspace awareness, which smooths collaboration: when a person is working on one side of a transparent display, the person on the other side can see the other's hand gestures, gaze, and what he or she is currently manipulating on the shared screen. Even so, we argue that in order to provide effective support for collaboration, the design of such transparent displays must go beyond current offerings. We propose using two-sided transparent displays, which can present different content on each side.
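The proposal's key capability, different content per side while shared content stays spatially aligned, can be pictured with a small rendering sketch. This is purely illustrative and not from the thesis; the function names and the mirror-the-back-view approach are assumptions.

```python
# Illustrative sketch (not from the thesis): per-side view matrices for a
# two-sided transparent display. The back side renders shared content
# mirrored left-right so pointing gestures and object positions line up
# across the glass; each side composites its own private layers on top.
import numpy as np

def mirror_x() -> np.ndarray:
    """Reflection across the display's vertical axis (x -> -x)."""
    m = np.eye(4)
    m[0, 0] = -1.0
    return m

def side_views(front_view: np.ndarray) -> dict:
    # Note: in a real renderer, the reflection also reverses triangle
    # winding, so back-face culling must be flipped for the back pass.
    return {"front": front_view, "back": mirror_x() @ front_view}

front_cam = np.eye(4)            # hypothetical front-side view matrix
views = side_views(front_cam)
print(views["back"][:3, :3])     # rotation part with x negated
```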
Chapter 7. Immersive Journalism: The New Narrative, by Doron Friedman and Candice Kotzen
From Robot Journalism: Can Human Journalism Survive? Immersive journalism is a subcategory of journalism that uses virtual reality (VR) and similar technologies to give audiences a sense of being wholly engrossed in the news story, allowing them to form a direct impression of the story's ambience. This chapter is intended to serve as a primer on using VR for news storytelling for individuals with an interest or background in journalism. The first section presents essential background on VR and related technologies. Next, we present research findings on the impact of VR and review some of the early work in immersive journalism. We conclude by delineating a collection of thoughts and questions for journalists wishing to enter this exciting new field. The technology: more than 50 years after the first demonstration of virtual reality (VR) technologies [Sutherland, 1965], it is apparent that VR is on the brink of becoming a form of mass media, and VR documentary and journalism have been a central theme. Triggered by Facebook's acquisition of Oculus Rift in 2014, the technology industry launched a race to deliver compelling VR hardware, software, and content. In this chapter, we present the essential background for non-experts who are intrigued by immersive journalism. For a recent comprehensive review of VR research in general, we recommend Slater and Sanchez-Vives [2016]; relevant issues from this review are elaborated below.
Advanced Volumetric Capture and Processing
O. Schreer, I. Feldmann, T. Ebner, S. Renault, C. Weissig, D. Tatzelt, P. Kauff; Fraunhofer Heinrich Hertz Institute, Berlin, Germany. Abstract: Volumetric video is regarded worldwide as the next important development step in media production. Especially in the context of rapidly evolving Virtual and Augmented Reality markets, volumetric video is becoming a key technology. Fraunhofer HHI has developed a novel technology for volumetric video: 3D Human Body Reconstruction (3DHBR). It captures real persons with a novel volumetric capture system and creates naturally moving, dynamic 3D models that can then be observed from arbitrary viewpoints in a virtual or augmented reality scene. The capture system consists of an integrated multi-camera and lighting system for full 360-degree acquisition: a cylindrical studio, 6 m in diameter, with 32 cameras of 20 megapixels each and 120 LED panels that allow for arbitrarily lit backgrounds. Hence, diffuse lighting and automatic keying are supported. Avoiding a green screen and providing diffuse lighting offers the best possible conditions for relighting the dynamic 3D models later, at the design stage of the VR experience. In contrast to classical character animation, facial expressions and moving clothes are reconstructed at high geometric detail and texture quality. The complete workflow is fully automatic, requires about 12 hours of processing per minute of mesh sequence, and provides a level of quality suitable for immediate integration in virtual scenes. Meanwhile, a second, professional studio has been built on the film campus of Potsdam-Babelsberg. This studio is operated by VoluCap GmbH, a joint venture between Studio Babelsberg, ARRI, UFA, Interlake, and Fraunhofer HHI.
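The abstract fixes the rig's headline numbers (a 6 m cylinder, 32 cameras) but not the exact arrangement. The sketch below is a rough illustration under the assumption of evenly spaced cameras in two rings at made-up heights; the function ring_cameras and all parameter values are placeholders, not the studio's actual layout.

```python
# Illustrative camera layout for a cylindrical capture studio
# (arrangement and heights are assumptions, not from the paper).
import math

def ring_cameras(n_cams=32, diameter_m=6.0, rings=2, heights_m=(1.2, 2.2)):
    """Return (x, y, z, yaw_deg) poses for cameras aimed at the cylinder axis."""
    radius = diameter_m / 2.0
    per_ring = n_cams // rings
    poses = []
    for ring in range(rings):
        for i in range(per_ring):
            theta = 2.0 * math.pi * i / per_ring
            x, y = radius * math.cos(theta), radius * math.sin(theta)
            yaw = (math.degrees(theta) + 180.0) % 360.0   # face inward
            poses.append((x, y, heights_m[ring], yaw))
    return poses

for pose in ring_cameras()[:3]:       # first three of 32 poses
    print(pose)
```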
A Novel Walk-Through 3D Display
Stephen DiVerdi (a), Ismo Rakkolainen (a, b), Tobias Höllerer (a), Alex Olwal (a, c). (a) University of California at Santa Barbara, Santa Barbara, CA 93106, USA; (b) FogScreen Inc., Tekniikantie 12, 02150 Espoo, Finland; (c) Kungliga Tekniska Högskolan, 100 44 Stockholm, Sweden. Abstract: We present a novel walk-through 3D display based on the patented FogScreen, an 'immaterial' indoor 2D projection screen, which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup in three major ways. First, we use head tracking to provide correct perspective rendering for a single user. Second, we add support for multiple types of stereoscopic imagery. Third, we present the front and back views of the graphics content on the two sides of the FogScreen, so that the viewer can cross the screen to see the content from the back. The result is a wall-sized, immaterial display that creates an engaging 3D visual. Keywords: fog screen, display technology, walk-through, two-sided, 3D, stereoscopic, volumetric, tracking. Introduction: Stereoscopic images have captivated wide scientific, media, and public interest for well over 100 years. The principle of stereoscopic images was invented by Wheatstone in 1838 [1]. The general public has been excited about 3D imagery since the 19th century: 3D movies and View-Master images in the 1950s, holograms in the 1960s, and 3D computer graphics and virtual reality today. Science fiction movies and books have also featured many 3D displays, including the popular Star Wars and Star Trek series. In addition to entertainment opportunities, 3D displays also have numerous applications in scientific visualization, medical imaging, and telepresence.
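Head-tracked perspective rendering, the first extension, is commonly implemented with an off-axis (asymmetric) viewing frustum recomputed each frame from the tracked head position. The sketch below is a generic version of that computation, not the paper's code; it assumes a screen centered at the origin in the z = 0 plane and OpenGL-style frustum parameters.

```python
# Minimal off-axis frustum sketch for head-tracked rendering (assumed
# setup: screen centered at the origin in the z = 0 plane, eye at
# (ex, ey, ez) with ez > 0, glFrustum-style parameters at near plane n).
def off_axis_frustum(eye, screen_w, screen_h, n=0.1):
    ex, ey, ez = eye
    scale = n / ez                     # project screen edges onto near plane
    left   = (-screen_w / 2 - ex) * scale
    right  = ( screen_w / 2 - ex) * scale
    bottom = (-screen_h / 2 - ey) * scale
    top    = ( screen_h / 2 - ey) * scale
    return left, right, bottom, top, n

# Example: viewer 0.8 m from a 2 m x 1.5 m screen, head 0.3 m to the
# right; the frustum skews so the projected image stays perspectively
# correct for that eye position.
print(off_axis_frustum((0.3, 0.0, 0.8), 2.0, 1.5))
```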
State-of-the-Art in Holography and Auto-Stereoscopic Displays
State-of-the-art in holography and auto-stereoscopic displays. Daniel Jönsson, 2019-05-13. Contents: introduction; auto-stereoscopic displays (two-view autostereoscopic displays, multi-view autostereoscopic displays, light field displays); market (display panels, AR); application fields; companies.
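A core mechanism behind the multi-view displays this report surveys is view interleaving: each sub-pixel behind a slanted lenticular sheet is assigned to one of N rendered views. The sketch below is a generic illustration of the widely cited van Berkel-style assignment; the parameter values (pitch, slant, offset) and function names are made-up placeholders, not figures from the report.

```python
# Generic slanted-lenticular view assignment (after van Berkel's formula);
# all parameter values here are illustrative placeholders.
def view_index(x_sub, y, n_views=8, x_offset=0.0, pitch_sub=4.5, slant=1/3):
    """Map sub-pixel column x_sub on row y to a view index in [0, n_views)."""
    phase = (x_sub + x_offset - slant * y) % pitch_sub
    return int(phase / pitch_sub * n_views)

def interleave_pixel(x, y, views):
    # A pixel's R, G, B sub-pixels sit under different lenticule phases,
    # so each channel may sample a different view's image at (x, y).
    return tuple(views[view_index(3 * x + c, y)][y][x][c] for c in range(3))
```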
Sandia National Laboratories 2014 LDRD Annual Report
Laboratory Directed Research and Development, 2014 Annual Report. Sandia National Laboratories. Cover photos (clockwise): Randall Schunk and Rekha Rao discuss an image generated by Goma 6.0, an R&D 100 Award winner that has origins in several LDRD projects; a modeled simulation of heat flux in a granular material, showing what appear to be several preferential 'channels' for heat flow, developed in Project 171054; Melissa Finley examines a portable diagnostic device for Bacillus anthracis detection in ultra-low-resource environments, developed in Project 158813, which won an R&D 100 Award; the Gauss-Newton with Approximated Tensors (GNAT) reduced-order-models technique applied to a large-scale computational fluid dynamics problem, developed in Project 158789; Edward Jimenez, principal investigator for "High Performance Graphics Processor-Based Computed Tomography Reconstruction Algorithms for Nuclear and Other Large Scale Applications," Project 158182. Abstract: This report summarizes progress from the Laboratory Directed Research and Development (LDRD) program during fiscal year 2014. In addition to the programmatic and financial overview, the report includes progress reports from 419 individual R&D projects in 16 categories.
NETRA: Interactive Display for Estimating Refractive Errors and Focal Range
NETRA: Interactive Display for Estimating Refractive Errors and Focal Range. Vitor F. Pamplona (1, 2), Ankit Mohan (1), Manuel M. Oliveira (1, 2), Ramesh Raskar (1). (1) Camera Culture Group, MIT Media Lab; (2) Instituto de Informática, UFRGS. http://cameraculture.media.mit.edu/netra. Citation: ACM SIGGRAPH 2010 papers (SIGGRAPH '10), Article 77, 8 pages; author's final manuscript, http://hdl.handle.net/1721.1/80392; published version http://dx.doi.org/10.1145/1778765.1778814; CC BY-NC-SA 3.0. Abstract: We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to greatly simplify the mechanism by putting the human subject in the loop. Our solution is based on a high-resolution programmable display and combines inexpensive optical elements, an interactive GUI, and computational reconstruction. The key idea is to interface a lenticular view-dependent display with the human eye at close range, a few millimeters apart. Via this platform, we create a new range of interactivity that is extremely sensitive to parameters of the human eye, like refractive errors, focal range, focusing speed, lens opacity, etc.
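For readers new to the units involved: refractive error and focal range are reported in diopters, which follow directly from the eye's far and near points. The sketch below is standard optometric arithmetic, not NETRA's internal reconstruction; the function names are my own.

```python
# Standard diopter arithmetic (textbook optics, not NETRA's model):
# a diopter is 1 / focal distance in meters. The far point gives the
# spherical refractive error; the near-to-far span gives the focal range
# (accommodation amplitude).
def diopters(distance_m: float) -> float:
    return 1.0 / distance_m

def refractive_error(far_point_m: float) -> float:
    # Myopic eye: far point at a finite distance implies negative correction.
    return -diopters(far_point_m)

def focal_range(near_point_m: float, far_point_m: float) -> float:
    return diopters(near_point_m) - diopters(far_point_m)

# Example: far point at 0.5 m, near point at 0.1 m.
print(refractive_error(0.5))     # -2.0 D of myopia
print(focal_range(0.1, 0.5))     # 8.0 D of accommodation
```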
IDW '18: The 25th International Display Workshops
IDW '18, the 25th International Display Workshops. Special topics of interest: oxide-semiconductor TFT; quantum dot technologies; AR/VR and hyper reality; automotive displays; wide color gamut and color reproduction; haptics technologies. Topical session: User Experience and Cognitive Engineering (UXC). Workshops: LC Science and Technologies (LCT); Active Matrix Displays (AMD); FPD Manufacturing, Materials and Components (FMC); Inorganic Emissive Display and Phosphors (PH); OLED Displays and Related Technologies (OLED); 3D/Hyper-Realistic Displays and Systems (3D); Applied Vision and Human Factors (VHF); Projection and Large-Area Displays and Their Components (PRJ); Electronic Paper (EP); MEMS and Emerging Technologies for Future Displays and Devices (MEET); Display Electronic Systems (DES); Flexible Electronics (FLX); Touch Panels and Input Technologies (INP). Final program, Nagoya Congress Center, Nagoya, Japan, December 12-14, 2018. Contents include program highlights, general information, travel information, and the plenary sessions of Wednesday, December 12 (IDW '18 opening, IDW 25th ceremony for certificate of appreciation, and keynote addresses).
Recent 3D Display Technologies (Excluding Holography)
Recent 3D Display Technologies (excluding holography, which is covered in a later lecture). Byoungho Lee, School of Electrical Engineering, Seoul National University, Seoul, Korea ([email protected]). Contents: introduction to 3D display; present status of 3D display; hardware systems (stereoscopic display, autostereoscopic display, volumetric display, other recent techniques); software (3D information processing: depth extraction, depth-plane image reconstruction, view image reconstruction; a 3D correlator using 2D sub-images; 2D-to-3D conversion). The presentation spans both the image pickup side (high-resolution pickup devices, image processing such as 2D-to-3D conversion) and the display side, along with consumer-facing issues of depth perception and visual fatigue. Brief history of 3D display: stereoscope, Wheatstone (1838); lenticular stereoscope (prism), Brewster (1844); autostereoscopic imaging, Maxwell (1868); anaglyph, Du Hauron (1891); stereoscopic movie camera, Edison & Dickson (1891); 3D movie, L'Arrivée du train (1903); integral photography, Lippmann (1908); lenticular, Hess (1915); parallax barrier, Kanolt (1915); hologram, Gabor (1948); Integram, de Montebello (1970); electro-holography, Benton (1989); 3D movies from Star Wars (1977) to Minority Report (2002), Superman Returns (2006), and Avatar (2009). Cues for human depth perception: physiological cues (accommodation, convergence, binocular parallax, motion parallax) and psychological cues (linear perspective, overlapping/occlusion, shading and shadow).
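Binocular parallax, the cue stereoscopic displays reproduce, can be made concrete with the textbook screen-parallax relation. The following sketch is standard stereo geometry, not material from the lecture notes, and the viewing parameters are illustrative.

```python
# Textbook screen-parallax arithmetic: for a viewer at distance d from the
# screen with interpupillary distance e, a virtual point at distance z
# produces on-screen parallax p = e * (z - d) / z.
# p > 0: uncrossed disparity (point appears behind the screen);
# p < 0: crossed disparity (point appears in front of the screen).
def screen_parallax(e_m=0.065, d_m=0.6, z_m=1.2):
    return e_m * (z_m - d_m) / z_m

print(screen_parallax())          #  0.0325 m: point seen behind the screen
print(screen_parallax(z_m=0.4))   # -0.0325 m: point pops out of the screen
```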
The XRSI Definitions of Extended Reality (XR)
XRSI Standard Publication XR-001: The XRSI Definitions of Extended Reality (XR). XR Safety Initiative, California, USA; www.xrsi.org; CC BY-NC-SA 4.0. Prepared by the XR Data Classification Framework Public Working Group (XR-DCF-PWG); liaison organization: Open AR Cloud. Abstract: The Extended Reality Data Classification Framework Public Working Group (XR-DCF-PWG) at the XR Safety Initiative (XRSI) develops and promotes a fundamental understanding of the properties and classification of data in XR environments by providing technical leadership in the XR domain. XR-DCF-PWG develops tests, test methods, reference data, proof-of-concept implementations, and technical analysis to advance the development and productive use of immersive technology. XR-DCF-PWG's responsibilities include the development of technical, physical, administrative, and management standards and guidelines for the risk-based security and privacy of sensitive information in XR environments. This Special Publication XR-series reports on XR-DCF-PWG's research, guidance, and outreach efforts in XR safety and its collaborative activities with industry, government, and academic organizations. This specific report is an enumeration of terms for the purposes of consistency in communication. Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately.
Going Beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances (arXiv:2009.00922 [cs.CV], 2 September 2020)
IET Research Journals, Special Issue on Computer Vision for the Creative Industry (postprint, subject to IET copyright; ISSN 1751-8644). Going Beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances. Anna Hilsmann (1), Philipp Fechteler (1), Wieland Morgenstern (1), Wolfgang Paier (1), Ingo Feldmann (1), Oliver Schreer (1), Peter Eisert (1, 2). (1) Vision & Imaging Technologies, Fraunhofer HHI, Berlin, Germany; (2) Visual Computing, Humboldt University, Berlin, Germany. E-mail: [email protected]. Abstract: In this paper, we present an end-to-end pipeline for the creation of high-quality animatable volumetric video content of human performances. Going beyond the application of free-viewpoint volumetric video, we allow re-animation and alteration of an actor's performance through (i) the enrichment of the captured data with semantics and animation properties and (ii) applying hybrid geometry- and video-based animation methods that allow a direct animation of the high-quality data itself instead of creating an animatable model that resembles the captured data. Semantic enrichment and geometric animation ability are achieved by establishing temporal consistency in the 3D data, followed by an automatic rigging of each frame using a parametric shape-adaptive full human body model. Our hybrid geometry- and video-based animation approaches combine the flexibility of classical CG animation with the realism of real captured data. For pose editing, we exploit the captured data as much as possible and kinematically deform the captured frames to fit a desired pose.
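The pose-editing step, kinematically deforming captured frames to a desired pose, is typically realized with linear blend skinning driven by a fitted body model. The sketch below is a generic LBS illustration, not the authors' pipeline; the input shapes and the function name skin_vertices are assumptions.

```python
# Generic linear-blend-skinning sketch (illustrative; the paper's actual
# deformation method may differ). Assumed inputs: vertices (N, 3),
# per-joint 4x4 transforms mapping rest pose to target pose (J, 4, 4),
# and skinning weights (N, J) from the fitted parametric body model.
import numpy as np

def skin_vertices(vertices, joint_transforms, weights):
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])                   # (N, 4)
    # Blend each joint's transform per vertex, then apply it.
    blended = np.einsum("nj,jab->nab", weights, joint_transforms)   # (N, 4, 4)
    deformed = np.einsum("nab,nb->na", blended, homo)
    return deformed[:, :3]

# Tiny usage example with 2 joints and 2 vertices:
verts = np.array([[0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
T = np.stack([np.eye(4), np.eye(4)])
T[1, :3, 3] = [0.5, 0.0, 0.0]                  # joint 1 translates +0.5 in x
w = np.array([[1.0, 0.0], [0.3, 0.7]])
print(skin_vertices(verts, T, w))              # second vertex moves 0.35 in x
```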
Vision & Visuals
Design issues for VR visual displays; Kyoung Shin Park (박경신), Fall semester 2017 (9/20/2017). Overview: human visual system; visual perception; 3D depth cues; 3D stereo graphics terminology; stereo approximation; 3D displays and autostereoscopic displays. Human perception system: we obtain information about the environment through vision, audition, haptics/touch, olfaction, gustation, and the vestibular/kinesthetic senses, and human perceptual capabilities inform HCI design. Vision is one of the most important research areas in HCI/VR because designers should know what can be seen by users, what a user can see better, and what can attract a user's attention. Vision involves the physical reception of a stimulus and the processing and interpretation of that stimulus, with no clear boundary between the two. Human visual system: light is focused by the cornea and the lens onto the retina, and light passing through the center of the cornea and the lens hits the fovea (macula). The iris permits the eye to adapt to varying light levels by controlling the amount of light entering the eye. The retina is the optically receptive layer, like the film in a camera: it translates light into nerve signals and contains photoreceptors (rods and cones) and interneurons. Rods operate at lower illumination levels and sense luminance only; they are the most sensitive to light, and at night, when the cones cannot detect light, the rods provide a black-and-white view of the world. The fovea contains only cone receptors, at very high density, and no rods.