The Robot Null Space: New Uses for New Robotic Systems


TECHNICAL UNIVERSITY OF CATALONIA (BARCELONATECH)
Doctoral Programme in Automatic Control, Robotics and Computer Vision
PhD Dissertation
THE ROBOT NULL SPACE: NEW USES FOR NEW ROBOTIC SYSTEMS
Josep-Arnau Claret i Robert
Supervisor: Luis Basañez Villaluenga
May 2019

To my parents.

Abstract

With the new advances in the field of robotics, the kinematic complexity of robotic systems has increased dramatically, whether through the addition of more joints to standard robots or through the appearance of new types of robots, such as humanoid robots and Unmanned Aerial Vehicles (UAVs), and of new scenarios, such as the cooperative exploration of environments by groups of robots or the interaction between robots and humans, among others. This is especially sensitive in tasks shared between a user and a machine, as in a teleoperation scenario. In these situations, a good compromise between the user input and the degrees of freedom managed by the system is critical, and a failure to properly adjust the system's autonomy can lead to under-performance and hazardous situations. This dissertation proposes new approaches to address the autonomy of a robotic system through the robot redundancy, that is, the extra degrees of freedom available beyond those needed to solve the main task. These approaches make it possible to cope with the increasing complexity of robotic systems and to exploit that complexity to simultaneously solve a hierarchy of multiple tasks with different levels of priority. In particular, in this work the redundancy is addressed using the robot null space for two different robotic systems and goals. Envisioning a world where users and humanoid robots increasingly share space and tasks, in the first scenario the redundancy of a humanoid robot is exploited to execute a novel low-priority task: the conveyance of emotional states to humans.
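The redundancy-resolution mechanism the abstract refers to can be sketched with the textbook null-space projection formula: a secondary joint velocity is filtered through the null-space projector of the main-task Jacobian so that it cannot disturb the main task. This is a minimal illustrative sketch, not code from the thesis; all names, dimensions, and values are assumptions.

```python
import numpy as np

def redundancy_resolution(J, x_dot, q0_dot):
    """Return q_dot = J+ x_dot + (I - J+ J) q0_dot.

    J      : (m, n) main-task Jacobian, n > m (redundant robot)
    x_dot  : (m,)   desired main-task velocity
    q0_dot : (n,)   arbitrary secondary joint velocity
    """
    J_pinv = np.linalg.pinv(J)          # Moore-Penrose pseudoinverse
    n = J.shape[1]
    N = np.eye(n) - J_pinv @ J          # null-space projector of J
    return J_pinv @ x_dot + N @ q0_dot

# Hypothetical example: a 2-DOF task on a 4-DOF robot leaves two
# extra degrees of freedom for lower-priority motion.
rng = np.random.default_rng(0)
J = rng.standard_normal((2, 4))
x_dot = np.array([0.1, -0.2])
q0_dot = rng.standard_normal(4)         # secondary motion, e.g. posture

q_dot = redundancy_resolution(J, x_dot, q0_dot)
print(np.allclose(J @ q_dot, x_dot))    # True: main task is unaffected
```

Because J N = J - J J⁺ J = 0, any q0_dot whatsoever can be injected without degrading the main task; this is the property exploited throughout the dissertation.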
A model that transforms emotions, represented as points in a three-dimensional space, into kinematic features of the robot is developed, and the proposed approach is implemented on a Pepper robot. Further, the results of a user study to assess its validity are presented, together with its conclusions and future directions. In the second scenario, a novel robotic system for teleoperation is presented. This robotic system results from the integration of a mobile manipulator, a UAV and a haptic device. The user commands the end effector of the mobile manipulator using the haptic device while getting real-time feedback from a camera mounted on the UAV, which takes the role of a free-flying camera. A description of this robotic system and its task hierarchy is given in the second part of the thesis, along with algorithms to coordinate the camera and the mobile manipulator in such a way that the command of the camera is transparent to the operator. Also, the null space of the robot is exploited to improve the teleoperation experience in two directions: first, by proposing two enhancements to the continuous inverse; second, by using the null space of the mobile manipulator to prevent its TCP from being occluded from the camera by the robot itself, which, to the author's knowledge, is a novel use of the robot null space. Finally, the proposed algorithms are implemented on the Barcelona Mobile Manipulator and a Parrot AR.Drone. Overall, new contributions to the management of robot redundancy are presented in this dissertation, along with their implementation and validation, which expand the knowledge of this particular sub-field of robotics.

Acknowledgments

I thank Professor Luis Basañez for the opportunity to delve into the realm of robotics and accomplish this dissertation. This thesis would not have been possible without his knowledge, guidance, words of encouragement at difficult times, and amusing conversations. I would also like to thank Raúl Suárez and Jan Rosell.
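The "hierarchy of multiple tasks with different levels of priority" mentioned above is commonly realized with a two-level task-priority scheme: the secondary task (e.g. an emotion-conveying posture, or keeping the TCP visible to the camera) is tracked as well as possible inside the null space of the primary task. The sketch below uses the standard two-level formulation from the robotics literature; it is not the thesis implementation, and the task dimensions and labels are illustrative assumptions.

```python
import numpy as np

def two_level_priority(J1, x1_dot, J2, x2_dot):
    """q_dot tracking x1_dot exactly and x2_dot as well as possible.

    Standard formulation: q_dot = J1+ x1_dot
                                + (J2 N1)+ (x2_dot - J2 J1+ x1_dot)
    """
    J1_pinv = np.linalg.pinv(J1)
    n = J1.shape[1]
    N1 = np.eye(n) - J1_pinv @ J1       # null-space projector of task 1
    J2_bar = J2 @ N1                    # task-2 Jacobian restricted to N1
    q_dot = J1_pinv @ x1_dot
    q_dot = q_dot + np.linalg.pinv(J2_bar) @ (x2_dot - J2 @ q_dot)
    return q_dot

# Hypothetical 6-DOF robot with two 2-DOF tasks.
rng = np.random.default_rng(1)
J1 = rng.standard_normal((2, 6))   # primary: e.g. end-effector position
J2 = rng.standard_normal((2, 6))   # secondary: e.g. posture/visibility
x1_dot = np.array([0.05, 0.0])
x2_dot = np.array([0.1, -0.1])

q_dot = two_level_priority(J1, x1_dot, J2, x2_dot)
print(np.allclose(J1 @ q_dot, x1_dot))  # True: priority 1 is exact
```

The correction term for task 2 lies entirely in the null space of J1, so the primary task is satisfied exactly while the secondary task is met whenever enough redundancy remains.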
The first, for opening the doors of the IOC to me, many years ago, when I was still an undergraduate student full of futuristic dreams of robots and spaceships. The second, for his invaluable support during all these years, for the trip to Lausanne, and for giving me the opportunity to contribute to the Kautham Project. I thank Gentiane Venture for the warm welcome during my four-month internship in her lab, which resulted in the first part of this thesis. Also, thanks to the members of the GVLab for the translations into Japanese and for attending to the local participants during the user study. I cannot stress enough the impact those months in Tokyo had on me, among the temples and giant buildings of that amazing city, memories that I will treasure for the rest of my life. This dissertation would not have been possible without the technical discussions, chats and laughs at the IOC. I want to thank Leo for always being ready to set up the hardware and software that are at the base of this work. And, also, Orestes, Carlos Aldana, Isiah, Marcos, Andrés, Henry, Gema, Néstor, Diana, José, Nacho, Abiud, Niliana, Carlos Rosales, Sergi, Emmanuel, Fernando, Carlos Rodríguez, Alexander, Paolo, Ali, Muhayyuddin, Diab, and the many others I have met during these exciting years. Finally, thanks to my friends and my relatives, who will be pleased to know that I have successfully closed this period of my life, and who are eager to see, as I am, which ones will come next. And, last but not least, to my parents, for their patience, faith, and unconditional love. Among so many things, I owe them my interest in books since I was a child, which is at the root of my admiration and awe for scientists, science and its endless possibilities. Josep-Arnau Claret i Robert. Barcelona, Spain. May 2019.
This work has been partially supported by the Spanish MINECO projects DPI2013-40882-P, DPI2014-57757-R and DPI2016-80077-R, the Spanish predoctoral grant BES-2012-054899, and the Japanese challenging exploratory research grant 15K12124.

Contents

Abstract  v
Acknowledgments  vii
Notation and Acronyms  xvi
1 Introduction  1
  1.1 Context and motivation  1
  1.2 Objectives  4
  1.3 Outline of the Thesis  7
2 State of the Art  9
  2.1 Robotic Systems  9
    2.1.1 Teleoperation  10
  2.2 Exploiting the Redundancy  12
  2.3 Using a Robot to Convey Emotions  13
    2.3.1 Voice Intonation  13
    2.3.2 Head Motion and Facial Expression  14
    2.3.3 Expressing Emotions through Body Motions  15
3 Modeling  17
  3.1 The Jacobian Null Space and The Task Priority Formulations  17
  3.2 Modeling Emotions  19
    3.2.1 Measuring Emotions: The SAM Scale  22
  3.3 The Continuous Inverse  23
  3.4 The Pinhole Camera Model  26
I Conveying Emotions Using the Null Space  29
4 Emotion Mapping  31
  4.1 Introduction  31
  4.2 The Pepper Robot  33
  4.3 Emotion Conveyance  36
    4.3.1 Proposed approach  36
    4.3.2 From PAD to motion features: fJVG  38
    4.3.3 From JVG to the emotional configuration: fm  39
  4.4 The Multi-Priority Inverse Kinematic Algorithm  42
  4.5 Implementation  43
  4.6 Chapter Contributions  45
5 The User Study  47
  5.1 Description  48
  5.2 Results  50
  5.3 Discussion  55
  5.4 Chapter Contributions  57
II Teleoperating a Mobile Manipulator with a UAV as Visual Feedback  59
6 Workspace Mapping  61
  6.1 Introduction  61
  6.2 The Teleoperation System  62
    6.2.1 The frames  63
    6.2.2 Inverse kinematics  66
    6.2.3 The Position Mapping  67
    6.2.4 The Orientation Mapping  67
  6.3 The User Study  71
    6.3.1 Results  72
  6.4 Chapter Contributions  75
7 Enhancing the Continuous Inverse  77
  7.1 Introduction  77
  7.2 Limitations of the Continuous Inverse  79
    7.2.1 On the boundaries of the null space matrix  79
    7.2.2 On the lower priority levels  80
    7.2.3 Example  81
  7.3 Enhancements  86
    7.3.1 On the boundaries of the null space matrix  86
    7.3.2 On the lower priority levels  89
  7.4 Chapter Contributions  90
8 The Object Best View Function  93
  8.1 Introduction  93
  8.2 Proposed Solution  94
  8.3 Simulation Results  100
  8.4 Chapter Contributions  109
9 The Drone Track Function  111
  9.1 Introduction  111
  9.2 Proposed Solution  113
    9.2.1 Requirements  113
    9.2.2 Kinematic model  114
    9.2.3 Subtasks  115
    9.2.4 Algorithm Development  117
    9.2.5 Summary  124
  9.3 Simulations  124
  9.4 Chapter Contributions  134
10 Experimental Validation  135
  10.1 Introduction  135
  10.2 The robots  136
    10.2.1 The Barcelona Mobile Manipulator I  136
    10.2.2 The Parrot AR.Drone  138
    10.2.3 The Phantom Omni  138
  10.3 Integration  139
  10.4 Simulation environment: The Kautham Project  143
  10.5 Experimentation  144
    10.5.1 BMM-I Inverse Kinematics Test  144
    10.5.2 Object Best View Function Experiment 1  144
    10.5.3 Object Best View Function Experiment 2  146
    10.5.4 Drone Track Experiment  146
11 Conclusions  149
  11.1 Contributions  ..