Scaling Multi-User Virtual and Augmented Reality


by Sebastian Herscher

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computer Science, New York University, January 2020.

Advisor: Ken Perlin

© Sebastian Herscher. All rights reserved, 2020.

Dedication

To my puzzle piece.

Acknowledgments

I would like to thank my advisor, Prof. Ken Perlin, for his phenomenal and unwavering support throughout the PhD process. I would like to thank the members and alumni of the NYU Future Reality Lab, who are true pioneers in this fantastic digital age.

Abstract

The Virtual and Augmented Reality (XR) ecosystems have been gaining substantial momentum and traction within the gaming, entertainment, enterprise, and training markets in the past half-decade, but have been hampered by limitations in concurrent user count, throughput, and accessibility to mass audiences. Although a litany of XR devices have been made available for public purchase, most XR experiences have been developed for either a single user or a small set of users at a time. Few systems or experiments in co-located XR environments have expanded past a small set of users, leaving the paradigm of being part of a larger virtual audience relatively untested. This thesis presents a set of components, systems, and experiments that assist in the creation and scaling of multi-user virtual and augmented reality experiences, and outlines the strengths of techniques found in traditional co-located media for the design space of scaled co-located XR.

Contents

1 Introduction
  1.1 Collective Audience Perception Spaces
  1.2 Research Questions
  1.3 Research Contributions

2 CAVRN: Scaling Up Virtual Reality
  2.1 Related Work
  2.2 Design Constraints
  2.3 CAVRN
  2.4 CAVE
  2.5 Evaluation of CAVRN and CAVE
  2.6 Results
  2.7 Discussion

3 MARY: Experimenting Across Platforms
  3.1 Related Work
  3.2 Design Constraints
  3.3 Expanding CAVRN
  3.4 Mary and the Monster
  3.5 View-Enablement Tools
  3.6 Initial Evaluation of MARY
  3.7 Results
  3.8 Discussion

Conclusion

References

Listing of Figures

2.1 The software architecture relationship between the server and all clients. (1) All client applications report their position and orientation to the server. (2) The server sends experience and client state data to all client applications. (3) The server sends an event signal (e.g. seat assignment) to the client.

2.2 An example layout of the server. Note that the status of each client is visible, and that the server can assign a seat to a client and control the playback of the experience.

2.3 The representation of the audience in CAVE. Each audience avatar was driven by the position and orientation of the corresponding headset. Note that all audience members can see the position and orientation of the heads of all other audience members, which are calculated using the data received from all other users' VR headsets.

2.4 An example of how a piece of content in CAVE, the mammoth, is rendered in comparison to the virtual audience. Note that the front row of the audience avatars is rendered in between the virtual content and the back row of the audience avatars. This is vital in evoking a sense of audience presence in the virtual experience.

2.5 A rendering of the physical layout used for CAVE. The audience was split into five rows: front left, back left, front right, middle right, and back right.

2.6 An audience watching CAVE at the technical conference.

3.1 An example of mapping an Oculus Touch controller to an audience avatar. The images show that the hand mapping occurs using button states, instead of physical hand poses.

3.2 An overhead view of the VR audience of MARY.

3.3 An example avatar of MARY.

3.4 A view of the VR version of MARY through the Oculus Quest.

3.5 A view of the AR version of MARY through the Magic Leap One.

3.6 The telescopic interface tool implemented in MARY. The rendered style of the opera glasses was made to match that of the avatars. The third image shows a vignette rendered while the opera glasses are in use, in order to alleviate motion sickness.

3.7 The magnifier tool implemented in the AR version of MARY. The handle of the magnifier was removed, as the lens was positioned directly above the hand controller.

3.8 The viewing window tool implemented in the VR version of MARY.

3.9 An audience watching MARY at SIGGRAPH 2019 in Los Angeles.

List of Tables

2.1 Summary of interview participants.

2.2 Aggregate survey responses for reported professions.

2.3 Aggregate survey responses for prior experience with VR devices.

2.4 First part of the aggregate survey responses for what users believe can be replicated in traditional forms of media, and what elements of CAVE stood out. Note that the Scale category refers to the scale of the content in relation to the scale of the audience in the experience.

2.5 Second part of aggregate survey responses.

3.1 Summary of interview participants.

1 Introduction

Virtual and augmented reality (VR and AR, often combined as XR) have been around since the beginnings of 3D rendering, with Sutherland et al. developing the fabled Sword of Damocles VR/AR headset [1] a full four years before Catmull et al. released their seminal work A Computer Animated Hand [2]. A digital world either sitting on top of or replacing our physical world has been a core trope of science fiction for more than half a century, with a plethora of research being conducted to make said fiction a reality. Historically, these mediums have been locked behind both monetary and expertise barriers, with limited functionality due to a lack of processing power for real-time rendering. However, the most recent iteration of virtual and augmented reality hardware has piggybacked on vast increases in compute power, and has been successfully delivered to the world at both consumer and enterprise price points.
The wide availability of affordable XR hardware has occurred in tandem with an explosion of interest, investment, and development of awesome immersive experiences and games. These have taken the form of Academy Award-winning immersive experiences [30], phenomenally successful at-home VR games [52], and a new industry of "hyper reality" out-of-home, location-based experiences for small groups [25]. Companies have begun to empower their employees to no longer develop 3D content using 2D interfaces, instead building and creating with the full fidelity of 3D input [50]. It is clear that our future includes the use of XR for both out-of-home and in-home entertainment, as well as for everyday communication and collaboration in both professional and personal settings.

1.1 Collective Audience Perception Spaces

However, for this future to come to fruition, two fundamental issues with the current wave of XR must be addressed. First, development of real-time immersive 3D content must be unified and standardized, such that when an immersive experience is created, it can be ubiquitously distributed across both the litany of current XR devices and those that will be released in the future. Second, the design space of XR must be taken past its current limit of 4-6 concurrent users at a time.

The social pattern of congregating in large, collective groups is consistent across all layers of human social practice. In both professional and personal environments, we come together to experience, learn, and perceive en masse. In the immersive space, systems that let users join a large virtual congregation from a remote access point have been successfully built [56, 35], but their co-located counterparts have only brought together a smattering of people in the same physical space. We must endeavour to build and explore Collective Audience Perception Spaces, or CAPS: physically shared design spaces that include dozens if not hundreds of users who can see, hear, and share an immersive environment as a collective, through both virtual and augmented reality. Each audience member must have their own viewpoint and representation, while being synchronized with the aggregate. These environments mirror their non-immersive counterparts of lecture halls, cinema, and theater, with the added power and capability of spatial computing and immersive design. Whether using virtual reality to create a shared virtual space, or augmented reality to add to the physical world, CAPS must enable an audience to congregate in a synchronous immersive experience, without requiring any participation beyond contributing to the collective knowledge and focus of the whole.

1.2 Research Questions

Many research questions arise out of the design space of Collective Audience Perception Spaces. First and foremost, the presence of a representation of an audience in an immersive environment changes both the design and implementation of an experience, as well as the reception and enjoyment of it by each viewer. Furthermore, there is a tension between the amount of agency that can be provided to each individual audience member, and the synchronicity of the experience to the collective. Limitations on both the design of content and the systems required to manage the large-scale shared experience must also be investigated.
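To make the synchronization model behind CAPS concrete, the following is a minimal sketch of the server-authoritative state flow that Figure 2.1 describes for CAVRN: each client reports its headset pose, the server aggregates and rebroadcasts the audience state, and the server can push per-client events such as a seat assignment. All class and method names here are hypothetical illustrations, not the dissertation's actual implementation, and the transport layer is omitted.

    # A minimal sketch of the client/server flow in Figure 2.1.
    # Names are illustrative; CAVRN's real API is not shown in this excerpt.
    from dataclasses import dataclass, field


    @dataclass
    class Pose:
        """Position and orientation of one headset."""
        position: tuple[float, float, float]
        orientation: tuple[float, float, float, float]  # quaternion (x, y, z, w)


    @dataclass
    class AudienceServer:
        """Aggregates every client's pose and rebroadcasts the shared state."""
        poses: dict[int, Pose] = field(default_factory=dict)
        seats: dict[int, int] = field(default_factory=dict)

        def report_pose(self, client_id: int, pose: Pose) -> None:
            # Step (1): a client reports its position and orientation.
            self.poses[client_id] = pose

        def broadcast_state(self) -> dict[int, Pose]:
            # Step (2): the full audience state is sent to every client.
            return dict(self.poses)

        def assign_seat(self, client_id: int, seat: int) -> dict:
            # Step (3): a per-client event signal, e.g. a seat assignment.
            self.seats[client_id] = seat
            return {"event": "seat_assignment", "client": client_id, "seat": seat}


    if __name__ == "__main__":
        server = AudienceServer()
        server.report_pose(0, Pose((0.0, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0)))
        server.report_pose(1, Pose((0.5, 1.6, 0.0), (0.0, 0.0, 0.0, 1.0)))
        print(server.assign_seat(1, 12))
        print(server.broadcast_state())

In a deployment like CAVE's, a loop of this kind would run continuously, with each headset rendering every other audience member's avatar from the broadcast state, as the caption of Figure 2.3 describes.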