Coincident Display Using Haptics and Holographic Video


Published in Proceedings of the Conference on Human Factors in Computing Systems (CHI '98), ACM, April 1998.

Wendy Plesniak and Ravikanth Pappu
Spatial Imaging Group, MIT Media Laboratory
Cambridge, MA
{wjp,pappu}@media.mit.edu

ABSTRACT
In this paper, we describe the implementation of a novel system which enables a user to "carve" a simple free-standing electronic holographic image using a force-feedback device. The force-feedback (or haptic) device has a stylus which is held by the hand like an ordinary cutting tool. The 3D position of the stylus tip is reported by the device, and appropriate forces can be displayed to the hand as it interacts with 3D objects in the haptic workspace. The haptic workspace is spatially overlapped and registered with the holographic video display volume. Within the resulting coincident visuo-haptic workspace, a 3D synthetic cylinder is presented, spinning about its long axis, which a person can see, feel, and lathe with the stylus. This paper introduces the concept of coincident visuo-haptic display and describes the implementation of the lathe simulation. After situating the work in a research context, we present the details of system design and implementation, including the haptic and holographic modeling. Finally, we discuss the performance of this prototype system and future work.

KEYWORDS
Haptics, holography, electro-holography, autostereoscopic display, offset display, coincident display.

INTRODUCTION
To recognize the intimate dialog between materials and the skilled eyes, hands, and intuition of the craftsperson is to acknowledge the enormity of the technology and interaction design tasks which still lie ahead of us. Ideally, we would rally the full exploratory and manipulative dexterity of the hand, and the rich sensory capabilities of both hand and eye, to the tasks we engineer for.

Consider the domain of traditional craft, in which gaze and touch convene in the same location: vision directs the hand and tool; the hand senses, manipulates tools, and coaxes material to take an envisioned form. Such tight alliance of eye and hand has traditionally been fundamental to tasks in which material is artfully worked into form, and a similar condition may hold for other domains as well, like surgery, component assembly, or repair and maintenance training.

Yet, in most computer-assisted applications, the hands manipulate a pointing device while the gaze is turned to a screen. Such offset display configurations, which direct eyes displayward and hands to controllers elsewhere, comfortably and naturally facilitate some activities (like driving a car, playing a musical score, or moving a cursor with a mouse). In such familiar manual tasks, vision is useful for transporting the arm/hand to an object, but manipulation can often proceed quite well either in the absence of vision (after physical contact is made) or with the monitoring of visual feedback provided elsewhere. However, this may not be the best paradigm for all tasks—especially those which are harder to control and require constant, precise visual and haptic monitoring and near-constant manual response.

In this paper, we describe an early prototype system which spatially reunites the focus of eye and hand and also takes a step toward bringing materials-working pleasure to computer-assisted design. While there are several conventional kinds of visual display hardware suitable for coincident visuo-haptic display of 3D information—head-tracked LCD shutter glasses or head-mounted displays (HMDs) combined with stereo computer graphics, for instance—and while many of these visual display options currently offer adequate image quality and frame rate, they are cumbersome to wear and have attendant viewing problems. Instead, we are using a prototype glasses-free autostereoscopic display which allows untethered movement throughout the view zone.

This prototype display device, MIT's second-generation holographic video (holovideo) system, is capable of rendering moving, monochromatic, free-standing, three-dimensional holographic images. Currently, this device has its own shortcomings, but many will be addressed by future research and routine advances in technology. For position tracking and force display, we use the Phantom™ haptic interface, a three degree-of-freedom (d.o.f.) mechanical linkage with a three-d.o.f. passive gimbal that supports a simple thimble or stylus used by the hand. The haptic and visual workspaces are physically co-located so that a single, free-standing multimodal image of a cylinder to be "carved" is presented.

In the coincident workspace, a user can see the stylus interacting with the holographic image while feeling forces that result from contact with its force model (Figure 1). As the user pushes the tool into the simulated cylinder, it deforms in a non-volume-conserving way and an arbitrary surface of revolution can be fashioned. Ultimately, the finished computer model can be dispatched to a 3D printer, providing an actual hardcopy of the design (Figure 5b). In effect, a user "sculpts light" and produces a physical result.

With these combined apparati and supporting computation, we are beginning to investigate high-quality multimodal display and interaction that is more Newtonian than symbolic, which may be preferable for tasks which have traditionally been practiced in this fashion.
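Registering the haptic and visual workspaces so that they are "physically co-located" amounts to finding the rigid transform from the haptic device's coordinate frame into the display volume's frame. One common way to obtain such a transform — a sketch only, not the paper's calibration procedure — is to probe a few known points in the display volume with the stylus and solve the resulting point-correspondence problem (the Kabsch algorithm); all point values and frame names below are illustrative assumptions.

```python
import numpy as np

# Sketch: fit the rigid transform mapping device-frame stylus readings
# into display-frame coordinates from probed calibration points.
# The specific points and the 10 cm z-offset are made-up examples.

def fit_rigid_transform(device_pts, display_pts):
    """Least-squares R, t such that display ~= R @ device + t (Kabsch)."""
    P = np.asarray(device_pts, float)
    Q = np.asarray(display_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (determinant +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Example: pretend the display frame is the device frame shifted 10 cm in z.
device = [[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]]
display = [[x, y, z + 0.1] for x, y, z in device]
R, t = fit_rigid_transform(device, display)
```

Once R and t are known, every stylus position reported by the device can be mapped into display coordinates before collision testing, so the felt surface and the seen surface coincide.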
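The lathe interaction described above — a spinning cylinder that deforms non-volume-conservingly under the stylus while resisting it — can be sketched with a radius profile sampled along the cylinder's axis and a simple penalty (spring) contact force. Every name, constant, and the penalty-force model below are illustrative assumptions, not the authors' haptic implementation.

```python
import numpy as np

# Sketch of a haptic "lathe": the workpiece is a surface of revolution
# stored as one radius per axial slice. Because the cylinder spins about
# its long axis, carving one slice down to the stylus-tip radius removes
# a full ring of material (non-volume-conserving removal).
# All constants are assumed for illustration.

N_SLICES = 128
LENGTH = 0.10          # cylinder length along z, meters (assumed)
R0 = 0.025             # initial radius, meters (assumed)
K_SPRING = 800.0       # contact stiffness, N/m (assumed)

radius = np.full(N_SLICES, R0)   # current radius profile

def haptic_update(tip_xyz):
    """One servo-loop step: carve in place, return force on the stylus."""
    x, y, z = tip_xyz
    i = int(np.clip(z / LENGTH * N_SLICES, 0, N_SLICES - 1))
    tip_r = np.hypot(x, y)               # tip distance from the lathe axis
    penetration = radius[i] - tip_r
    if penetration <= 0.0:
        return np.zeros(3)               # free motion: no contact force
    radius[i] = tip_r                    # carve this slice down to the tip
    # Penalty force pushes the stylus radially outward, away from the axis.
    outward = np.array([x, y, 0.0]) / max(tip_r, 1e-9)
    return K_SPRING * penetration * outward

force = haptic_update((0.0, 0.020, 0.05))  # tip 5 mm below the surface
```

In a real haptic loop this update would run at around 1 kHz, and the carved profile would be re-sent to the holographic renderer so the seen cylinder tracks the felt one.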
Figure 1. Coincident display (overlapping holographic and force images).

BACKGROUND
One well-established approach to joining the eyes and hands in a coincident workspace is to employ manipulable "wired" physical objects as controllers for digital objects or processes. Several research efforts are investigating the use of physical handles to virtual objects by attaching interfacing sensors or other electronics to real objects. These tangible objects then act as physical controllers for virtual processes, providing whole-hand interaction and rich visuo-haptic feedback that seems both natural and obvious. In these applications, a participant perceives his or her own body interacting with physical interface objects, but usually also monitors the action-outcome on another separate display or in the ambient environment.

One such project, called Graspable User Interface: Bricks [12], employed basic physical objects called "bricks" which were physical instantiations of virtual objects or functions. Once a brick was attached to a virtual object, the computational model became itself functionally graspable. A brick might be used, for instance, to geometrically transform a virtual object to which it was attached, availing direct control through physical handles. Tactile and kinesthetic feedback are also present and exploitable with such an interface; thus the ability to operate quickly and efficiently, using two-handed input, is possible. Extending this work to incorporate a small set of differentiable geometries and material textures among the bricks could increase a person's ability to iden-

The Digital Desk project represents an attempt to render the computer desktop onto a real desk surface, and to merge common physical desk-objects with computational desktop functionality. The system employs a video projector situated above the desk for display of information, and a nearly co-located camera to monitor a person's movements in the workspace. Hand gestures are interpreted by a computational vision algorithm to be requests for various utilities that the system offers.

The metaDESK project attempts to physically instantiate many of the familiar GUI mechanisms (menus, windows, icons, widgets, etc.) in the form of tangible user interfaces (TUIs). The mapping between physical icons and virtual ones can be literally or poetically assigned; for instance, placing a small physical model of MIT's Great Dome on the desk surface might cause an elaborate map of MIT to be displayed. In addition to summoning the map to the display and indicating its position, the physical Great Dome icon can be moved or rotated to correspondingly transform the map.

The metaDESK system design includes a flat rear-projected desk surface, physical icons, and functional instruments for use on the surface. The state of these physical objects is sensed and used as application input. Not only can the state of virtual objects be changed by manual interaction with physical objects, but part of the display itself can be "hand-held" and likewise manipulated. The metaDESK project underscores the seemingly inexhaustible palette of ideas for instrumenting interactive space, harkening to the rich set of sensibilities and skills people develop from years of experience with real-world objects, tools, and their physics.

A wide variety of virtual reality (VR) and augmented reality (AR) application areas such as telesurgery, maintenance repair and training, computer modeling, and entertainment employ haptic interaction and high-quality computer graphics to study, interact with, or modify data. Here, many applications employ instrumented force-feedback, rather than physical objects and whole-hand interaction, and trade off sensory richness for flexibility in physical modeling and visual/force rendering.

Most existing applications offset the visual and manual workspaces, but several diverse efforts to conjoin eye and hand in interactive applications exist. An example themati-