Human-Robot Interaction and Mapping with a Service Robot: Human Augmented Mapping

Total pages: 16
File type: PDF, size: 1020 kB

Human-Robot Interaction and Mapping with a Service Robot: Human Augmented Mapping
ELIN ANNA TOPP
Doctoral Thesis, Stockholm, Sweden 2008
TRITA-CSC-A 2008:12 · ISSN-1653-5723 · ISRN-KTH/CSC/A--08/12--SE · ISBN 978-91-7415-118-3
KTH School of Computer Science and Communication, SE-100 44 Stockholm, Sweden
Academic dissertation which, with the permission of the Royal Institute of Technology (KTH), is submitted for public examination for the degree of Doctor of Technology in Computer Science on Monday, 13 October 2008, at 10:15 in hall F3, KTH, Lindstedtsvägen 26, Stockholm.
© Elin Anna Topp, September 2008. Printed by Universitetsservice US AB.

Abstract
An issue widely discussed in robotics research is the ageing society with its consequences for care-giving institutions and opportunities for developments in the area of service robots and robot companions. The general idea of using robotic systems in a personal or private context to support an independent way of living, not only for the elderly but also for the physically impaired, is pursued in different ways, ranging from socially oriented robotic pets to mobile assistants. Thus, the idea of the personalised general service robot is not too far-fetched. Crucial for such a service robot is the ability to navigate in its working environment, which has to be assumed to be an arbitrary domestic or office-like environment that is shared with human users and bystanders. With methods developed and investigated in the field of simultaneous localisation and mapping, it has become possible for mobile robots to explore and map an unknown environment while staying localised with respect to their starting point and the surroundings. These approaches, though, do not consider the representation of the environment that is used by humans to refer to particular places.
Robotic maps are often metric representations of features that can be obtained from sensory data. Humans have a more topological, in fact partially hierarchical, way of representing environments. Especially for the communication between a user and her personal robot, it is thus necessary to provide a link between the robotic map and the human understanding of the robot’s workspace. The term Human Augmented Mapping is used for a framework that allows a robotic map to be integrated with human concepts, so that communication about the environment can be facilitated. By assuming an interactive setting for the map acquisition process, it is possible for the user to influence the process significantly: personal preferences can be made part of the environment representation that is acquired by the robot. Advantages also become obvious for the mapping process itself, since in an interactive setting the robot can ask for information and resolve ambiguities with the help of the user. Thus, a scenario of a “guided tour”, in which a user asks a robot to follow her while she presents the surroundings, is assumed as the starting point for a system that integrates robotic mapping, interaction and human environment representations. A central point is the development of a generic, partially hierarchical environment model that is applied in a topological graph structure as part of an overall experimental Human Augmented Mapping system implementation. Different aspects regarding the representation of entities of the spatial concepts used in this hierarchical model, particularly regions, are investigated. The proposed representation is evaluated both as a description of delimited regions and for the detection of transitions between them. In three user studies, different aspects of the human-robot interaction issues of Human Augmented Mapping are investigated and discussed.
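As a rough illustration of such a partially hierarchical model held in a topological graph — regions as nodes, detected transitions as edges, user-named locations attached to their enclosing region — the following sketch may help. All class and attribute names here are my own invention for the example, not the thesis implementation.

```python
# Minimal sketch of a partially hierarchical environment model kept as a
# topological graph. Regions are nodes, transitions (e.g. doorways) are
# edges, and user-labelled locations hang off their enclosing region.
from dataclasses import dataclass, field

@dataclass
class Location:
    name: str        # label given by the user during the guided tour
    pose: tuple      # (x, y, theta) in the robot's metric map

@dataclass
class Region:
    name: str
    locations: list = field(default_factory=list)
    neighbours: set = field(default_factory=set)  # topological edges

class EnvironmentModel:
    def __init__(self):
        self.regions = {}

    def add_region(self, name):
        return self.regions.setdefault(name, Region(name))

    def add_transition(self, a, b):
        # a detected passage links two regions topologically
        self.add_region(a).neighbours.add(b)
        self.add_region(b).neighbours.add(a)

    def add_location(self, region, name, pose):
        self.add_region(region).locations.append(Location(name, pose))

model = EnvironmentModel()
model.add_transition("hallway", "kitchen")
model.add_location("kitchen", "coffee machine", (2.1, 0.4, 1.57))
```

The metric layer (poses) stays attached to the leaves, while communication with the user can work on the named topological layer.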
Results from the studies support the proposed model and representation approaches and can serve as a basis for further studies in this area.

Preface
The present doctoral thesis consolidates results of four years of work with a conceptual design for an approach to hierarchical, interactively controlled robotic mapping and localisation, conducted at the Centre for Autonomous Systems (CAS) hosted by the Computational Vision and Active Perception (CVAP) group of the School of Computer Science and Communication (CSC) at the Royal Institute of Technology (KTH) in Stockholm. “Human Augmented Mapping” (HAM) is a central term which is introduced and used to subsume the discussed aspects of robotic mapping and human-robot interaction. The general concept of HAM is described and discussed together with results that have been achieved with a respective experimental system implementation. Those results also include observations made in three different user studies that were conducted to test the assumptions underlying the models used for the work. The work contributed to a large extent to the integrated EU project “COGNIRON – The Cognitive Robot Companion”1, in particular to Key Experiment 1 – “The Home Tour”, which served as a demonstrator experiment to document the results achieved in the project’s research activities on “Models of space” and “Multimodal dialogue”. Additionally, the work was to a large extent funded through the project by the European Commission Division FP6-IST Future and Emerging Technologies under Contract FP6-002020. The funding is gratefully acknowledged. Within and related to the project work, a number of collaborations inside and outside the Royal Institute of Technology could be established that contributed to the results presented in this thesis. My own contributions within these collaborations are pointed out specifically in the respective chapters and sections.
Anders Green and Helge Hüttenrauch, at the time of writing both affiliated with the Human-Computer Interaction group of CSC at KTH, conducted a user study early in the project that inspired my thoughts about the environment model and the first user study setup described in this thesis. It was also possible for me to use data collected during this first user study to evaluate the tracking method that is part of my approach to a complete system for Human Augmented Mapping. At this point I want to thank Anders and Helge for these opportunities and the inspiring discussions, particularly with Anders on “spatial prompting”, a concept he developed and is about to publish in his doctoral thesis. Further, I designed and conducted all three user studies described in this thesis in cooperation with Helge Hüttenrauch. Thanks again to Helge, in this case particularly for having the somewhat “different” idea of taking the robot out of the lab and into people’s homes to conduct one of the studies; this was a great experience. One of the studies conducted in the laboratory was actually assigned as a master’s project to Farah Hassan Ibrahim, jointly supervised by Helge and myself, and at the time of writing a registered student in the computer science programme of CSC/KTH. Other master’s projects and one short-term undergraduate project (a German “Studienarbeit”) related to my work and supervised by myself were assigned to and conducted by Alvaro Canivell García de Paredes, Maryamossadat Nematollahi Mahani, and Stephan Platzek. Thanks for working with me and coping with the often vague and exploratory ideas for your tasks! The COGNIRON project’s aim to demonstrate results from different research activities as integrated demonstrators in different Key Experiments built the basis for a very fruitful collaboration with the Applied Computer Science group at the University of Bielefeld, Germany.
1 www.cogniron.org (URL verified August 19, 2008)
The efforts put into the transfer of a significant part of my experimental implementation to an integrated interactive framework, made by Marc Hanheide, Julia Peltason, Frederic Siepmann, Thorsten Spexard, and Sebastian Wrede (in alphabetical order), and myself, led to a prototypical, fully integrated interactive system for Human Augmented Mapping that could be demonstrated successfully in the context of the project. Thanks for helping to get all that code to work! A joint effort within the research activity on “Models of space” made me travel to the Intelligent Autonomous Systems group at the University of Amsterdam with a SICK laser range finder in my carry-on luggage – also this being an interesting experience in itself – to collect a data set that was made public to give research groups dealing with some form of semantic or interactive mapping a basis to compare their approaches. This cooperation with Olaf Booij, Bas Terwijn, and Zoran Zivkovic led to one of the publications listed as related to this thesis. Thanks! The cooperations already indicate that this thesis project involved a lot of travelling: a robot travelled through the greater Stockholm area, students travelled back and forth to pursue their international study programmes, a laser range finder got to fly to Amsterdam, and I myself travelled, well ... I was warned, right before I travelled back to Germany after my master’s project (which I had already travelled to Stockholm for), that I would be travelling a lot during my time as a PhD student, due to the European project. I thought “great, I love travelling”. After more than 30 take-offs and landings – both for professional and for private reasons – during my first year in the graduate programme, and after actually recognising members of the SAS in-flight staff on particular routes during the second, I started to revise that statement slightly. I was also warned that my adviser was a person difficult to meet, due to him travelling a lot more than I did anyway.
“If you want to talk to him, book the same flight”, I figured – I tried, it did not work, he would end up being seated somewhere completely different on the plane. Nevertheless, I managed to achieve during 48 months of doctoral studies spread over five years what I planned to, actually a bit more, this “bit” now being roughly 1½ years old.
Recommended publications
  • Neuron C Reference Guide (related: Introduction to the LONWORKS Platform, 078-0391-01A)
    Neuron C Reference Guide (078-0140-01G): Provides reference info for writing programs using the Neuron C programming language. Echelon, LONWORKS, LONMARK, NodeBuilder, LonTalk, Neuron, 3120, 3150, ShortStack, LonMaker, and the Echelon logo are trademarks of Echelon Corporation that may be registered in the United States and other countries. Other brand and product names are trademarks or registered trademarks of their respective holders. Neuron Chips and other OEM Products were not designed for use in equipment or systems which involve danger to human health or safety, or a risk of property damage, and Echelon assumes no responsibility or liability for use of the Neuron Chips in such applications. Parts manufactured by vendors other than Echelon and referenced in this document have been described for illustrative purposes only, and may not have been tested by Echelon. It is the responsibility of the customer to determine the suitability of these parts for each application. ECHELON MAKES AND YOU RECEIVE NO WARRANTIES OR CONDITIONS, EXPRESS, IMPLIED, STATUTORY OR IN ANY COMMUNICATION WITH YOU, AND ECHELON SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of Echelon Corporation. Printed in the United States of America. Copyright © 2006, 2014 Echelon Corporation. Echelon Corporation, www.echelon.com. Welcome — This manual describes the Neuron® C Version 2.3 programming language. It is a companion piece to the Neuron C Programmer's Guide.
  • Navigation Techniques in Augmented and Mixed Reality: Crossing the Virtuality Continuum
    Chapter 20 — NAVIGATION TECHNIQUES IN AUGMENTED AND MIXED REALITY: CROSSING THE VIRTUALITY CONTINUUM. Raphael Grasset 1,2, Alessandro Mulloni 2, Mark Billinghurst 1 and Dieter Schmalstieg 2. 1 HIT Lab NZ, University of Canterbury, New Zealand. 2 Institute for Computer Graphics and Vision, Graz University of Technology, Austria. 1. Introduction. Exploring and surveying the world has been an important goal of humankind for thousands of years. Entering the 21st century, the Earth has almost been fully digitally mapped. Widespread deployment of GIS (Geographic Information Systems) technology and a tremendous increase of both satellite and street-level mapping over the last decade enable the public to view large portions of the world using computer applications such as Bing Maps or Google Earth. Mobile context-aware applications further enhance the exploration of spatial information, as users now have access to it while on the move. These applications can present a view of the spatial information that is personalised to the user’s current context, such as their physical location and personal interests. For example, a person visiting an unknown city can open a map application on her smartphone to instantly obtain a view of the surrounding points of interest. Augmented Reality (AR) is one increasingly popular technology that supports the exploration of spatial information. AR merges virtual and real spaces and offers new tools for exploring and navigating through space [1]. AR navigation aims to enhance navigation in the real world or to provide techniques for viewpoint control for other tasks within an AR system. AR navigation can be naively thought to have a high degree of similarity with real world navigation.
  • Mapping Why Mapping?
    Mapping — Why Mapping? Learning maps is one of the fundamental problems in mobile robotics. Maps allow robots to efficiently carry out their tasks and allow localization; successful robot systems rely on maps for localization, path planning, activity planning, etc.
    The General Problem of Mapping: What does the environment look like? The problem of robotic mapping is that of acquiring a spatial model of a robot’s environment. Formally, mapping involves, given the sensor data d = {u_1, z_1, u_2, z_2, ..., u_n, z_n}, calculating the most likely map m* = argmax_m P(m | d).
    Mapping Challenges: A key challenge arises from the measurement errors, which are statistically dependent — errors accumulate over time and affect future measurements. Other challenges of mapping: the high dimensionality of the entities that are being mapped (mapping can be high-dimensional); the correspondence problem (determining whether sensor measurements taken at different points in time correspond to the same physical object in the world); the environment changes over time (slower changes: trees in different seasons; faster changes: people walking by); and robots must choose their way during mapping (the robotic exploration problem).
    Factors that influence mapping: size (the larger the environment, the more difficult); perceptual ambiguity (the more frequently different places look alike, the more difficult); cycles (cycles make robots return via different paths, and the accumulated odometric error can be huge). The following discussion assumes mapping with known poses.
    Mapping vs. Localization: Learning maps is a “chicken-and-egg” problem. First, there is a localization problem: errors can easily accumulate in odometry, making the robot less certain about where it is; methods exist to correct the error given a perfect map. Second, there is a mapping problem.
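Under the known-poses assumption made in the excerpt above, and with the common further assumption that grid cells are independent, the posterior m* = argmax_m P(m | d) factors per cell, and each cell can simply maintain a log-odds occupancy ratio. A minimal sketch (the inverse sensor model probabilities 0.7/0.3 are invented for the example):

```python
# Per-cell log-odds occupancy update under known poses: each observation of
# a cell adds fixed log-odds evidence for "occupied" or "free"; the MAP map
# thresholds the resulting posterior at 0.5.
import math

L_OCC = math.log(0.7 / 0.3)    # evidence added by a "hit" observation
L_FREE = math.log(0.3 / 0.7)   # evidence added by a "miss" observation

def update_cell(l, hit):
    """Fold one observation of one cell into its log-odds ratio."""
    return l + (L_OCC if hit else L_FREE)

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                        # prior: P(occupied) = 0.5
for hit in [True, True, False, True]:
    l = update_cell(l, hit)
p = probability(l)             # posterior occupancy after 3 hits, 1 miss
```

Because the evidence is additive in log-odds, the order of the measurements does not matter, which is what makes the per-cell factorisation convenient.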
  • United States Patent US 8,337,482 B2 (Wood, Jr.) — System for Perfusion Management
    United States Patent US 8,337,482 B2, Wood, Jr. Date of Patent: Dec. 25, 2012. (54) SYSTEM FOR PERFUSION MANAGEMENT. (75) Inventor: Lowell L. Wood, Jr., Livermore, CA (US). (73) Assignee: The Invention Science Fund I, LLC, Bellevue, WA (US). (*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 1417 days; this patent is subject to a terminal disclaimer. (21) Appl. No.: 10/827,576. (22) Filed: Apr. 19, 2004. (65) Prior Publication Data: US 2005/0234399 A1, Oct. (56) References Cited — U.S. patent documents include: 3,391,697 A 7/1968 Greatbatch; 3,821,469 A 6/1974 Whetstone et al.; 3,983,474 A 9/1976 Kuipers; 4,054,881 A 10/1977 Raab; 4,262,306 A 4/1981 Renner; 4,267,831 A 5/1981 Aguilar; 4,339,953 A 7/1982 Iwasaki; 4,367,741 A 1/1983 Michaels; 4,396,885 A 8/1983 Constant; 4,403,321 A 9/1983 Kriger; 4,418,422 A 11/1983 Richter et al.; 4,431,005 A 2/1984 McCormick; 4,585,652 A 4/1986 Miller et al.; 4,628,928 A 12/1986 Lowell; 4,638,798 A 1/1987 Shelden et al.
  • Simultaneous Localization and Mapping in Marine Environments
    Chapter 8 — Simultaneous Localization and Mapping in Marine Environments. Maurice F. Fallon, Hordur Johannsson, Michael Kaess, John Folkesson, Hunter McClelland, Brendan J. Englot, Franz S. Hover and John J. Leonard. Abstract: Accurate navigation is a fundamental requirement for robotic systems — marine and terrestrial. For an intelligent autonomous system to interact effectively and safely with its environment, it needs to accurately perceive its surroundings. While traditional dead-reckoning filtering can achieve extremely high performance, the localization accuracy decays monotonically with distance traveled. Other approaches (such as external beacons) can help; nonetheless, the typical prerogative is to remain at a safe distance and to avoid engaging with the environment. In this chapter we discuss alternative approaches which utilize onboard sensors so that the robot can estimate the location of sensed objects and use these observations to improve its own navigation as well as its perception of the environment. This approach allows for meaningful interaction and autonomy. Three motivating autonomous underwater vehicle (AUV) applications are outlined herein. The first fuses external range sensing with relative sonar measurements. The second application localizes relative to a prior map so as to revisit a specific feature, while the third builds an accurate model of an underwater structure which is consistent and complete. In particular we demonstrate that each approach can be abstracted to a core problem of incremental estimation within a sparse graph of the AUV’s trajectory and the locations of features of interest, which can be updated and optimized in real time on board the AUV.
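The "sparse graph of the trajectory and feature locations" the chapter abstracts to can be illustrated with a deliberately tiny, linear toy: 1-D poses connected by odometry constraints plus one loop-closure constraint back to the start, solved by least squares. Real systems solve the nonlinear version incrementally; this only shows the core estimation step, with numbers invented for the example.

```python
# Toy 1-D pose-graph least squares: each constraint (i, j, d, w) says
# x_j - x_i should equal d with weight w; one extra row fixes the gauge
# (x_0 = 0). The loop closure (2 -> 0) disagrees slightly with the odometry
# chain, and the solver spreads that error over the trajectory.
import numpy as np

constraints = [
    (0, 1, 1.0, 1.0),   # odometry: moved +1.0
    (1, 2, 1.1, 1.0),   # odometry: moved +1.1
    (2, 0, -2.0, 1.0),  # loop closure: start is 2.0 behind us
]
n = 3
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for row, (i, j, d, w) in enumerate(constraints):
    A[row, i], A[row, j], b[row] = -w, w, w * d
A[-1, 0], b[-1] = 1.0, 0.0          # gauge constraint: x_0 = 0

x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The normal-equations matrix A.T @ A is sparse (only adjacent poses interact), which is exactly the structure that incremental smoothers exploit to update the solution in real time.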
  • Parallel Range, Segment and Rectangle Queries with Augmented Maps
    Parallel Range, Segment and Rectangle Queries with Augmented Maps. Yihan Sun and Guy E. Blelloch, Carnegie Mellon University. [email protected], [email protected]. Abstract: The range, segment and rectangle query problems are fundamental problems in computational geometry and have extensive applications in many domains. Despite the significant theoretical work on these problems, efficient implementations can be complicated. We know of very few practical implementations of the algorithms in parallel, and most implementations do not have tight theoretical bounds. In this paper, we focus on simple and efficient parallel algorithms and implementations for range, segment and rectangle queries, which have tight worst-case bounds in theory and good parallel performance in practice. We propose to use a simple framework (the augmented map) to model the problem. Based on the augmented map interface, we develop both multi-level tree structures and sweepline algorithms supporting range, segment and rectangle queries in two dimensions. For the sweepline algorithms, we also propose a parallel paradigm and show corresponding cost bounds. All of our data structures are work-efficient to build in theory (O(n log n) sequential work) and achieve a low parallel depth (polylogarithmic for the multi-level tree structures, and O(n) for sweepline algorithms). The query time is almost linear in the output size. We have implemented all the data structures described in the paper using a parallel augmented map library. Based on the library, each data structure only requires about 100 lines of C++ code. We test their performance on large data sets (up to 10^8 elements) and a machine with 72 cores (144 hyperthreads).
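The augmented-map idea above is an ordered map that additionally maintains an aggregate of its values, so a range query reduces to combining aggregates over a key interval. A tiny flattened stand-in (sorted keys plus prefix sums, static data only) shows the interface for 1-D range-sum queries; the paper builds dynamic balanced trees and sweepline structures on the same abstraction, and the class here is purely illustrative.

```python
# Static 1-D range-sum "augmented map": keys are kept sorted and the value
# aggregate (here: sum) is precomputed as prefix sums, so each query is two
# binary searches, O(log n).
import bisect
import itertools

class RangeSumMap:
    def __init__(self, items):
        """items: iterable of (key, value) pairs."""
        items = sorted(items)
        self.keys = [k for k, _ in items]
        self.prefix = [0] + list(itertools.accumulate(v for _, v in items))

    def range_sum(self, lo, hi):
        """Sum of values whose key satisfies lo <= key <= hi."""
        i = bisect.bisect_left(self.keys, lo)
        j = bisect.bisect_right(self.keys, hi)
        return self.prefix[j] - self.prefix[i]

m = RangeSumMap([(3, 10), (1, 5), (7, 2), (5, 1)])
total_2_to_6 = m.range_sum(2, 6)   # sums values at keys 3 and 5
```

Swapping the aggregate (count, max, another augmented map) is what turns the same skeleton into range counting or the multi-level structures used for 2-D queries.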
  • 2012 IEEE International Conference on Robotics and Automation (ICRA 2012)
    2012 IEEE International Conference on Robotics and Automation (ICRA 2012), St. Paul, Minnesota, USA, 14–18 May 2012. Pages 1-798. IEEE Catalog Number: CFP12RAA-PRT. ISBN: 978-1-4673-1403-9. (1/7)
    Content List of the 2012 IEEE International Conference on Robotics and Automation — Technical Program for Tuesday May 15, 2012.
    TuA01, Meeting Room 1 (Mini-sota): Estimation and Control for UAVs (Regular Session). Chair: Spletzer, John (Lehigh Univ.). Co-Chair: Robuffo Giordano, Paolo (Max Planck Inst. for Biological Cybernetics).
    08:30-08:45 TuA01.1 State Estimation for Aggressive Flight in GPS-Denied Environments Using Onboard Sensing, pp. 1-8. Bry, Adam; Bachrach, Abraham; Roy, Nicholas (Massachusetts Inst. of Tech.).
    08:45-09:00 TuA01.2 Autonomous Indoor 3D Exploration with a Micro-Aerial Vehicle, pp. 9-15. Shen, Shaojie; Michael, Nathan; Kumar, Vijay (Univ. of Pennsylvania).
    09:00-09:15 TuA01.3 Wind Field Estimation for Autonomous Dynamic Soaring, pp. 16-22. Langelaan, Jack W. (Penn State Univ.); Spletzer, John; Montella, Corey; Grenestedt, Joachim (Lehigh Univ.).
    09:15-09:30 TuA01.4 Decentralized Formation Control with Variable Shapes for Aerial Robots, pp. 23-30. Turpin, Matthew; Michael, Nathan; Kumar, Vijay (Univ. of Pennsylvania).
    09:30-09:45 TuA01.5 Versatile Distributed Pose Estimation and Sensor Self-Calibration for an Autonomous MAV, pp. 31-38. Weiss, Stephan; Achtelik, Markus W.; Chli, Margarita; Siegwart, Roland (ETH Zurich).
    09:45-10:00 TuA01.6 Probabilistic Velocity Estimation for Autonomous Miniature Airships Using Thermal Air Flow Sensors, pp.
  • Distributed Robotic Mapping
    DISTRIBUTED COLLABORATIVE ROBOTIC MAPPING by DAVID BARNHARD (Under the Direction of Dr. Walter D. Potter). ABSTRACT: The utilization of multiple robots to map an unknown environment is a challenging problem within Artificial Intelligence. This thesis first presents previous efforts to develop robotic platforms that have demonstrated incremental progress in coordination for mapping and target acquisition tasks. Next, we present a rewards-based method that could increase the coordination ability of multiple robots in a distributed mapping task. The method that is presented is a reinforcement-based emergent behavior approach that rewards individual robots for performing desired tasks. It is expected that the use of a reward and taxation system will result in individual robots effectively coordinating their efforts to complete a distributed mapping task. INDEX WORDS: Robotics, Artificial Intelligence, Distributed Processing, Collaborative Robotics. B.A. Cognitive Science, University of Georgia, 2001. A Thesis Submitted to the Graduate Faculty of The University of Georgia in Partial Fulfillment of the Requirements for the Degree MASTER OF SCIENCE, Athens, Georgia, 2005. © 2005 David Barnhard. All Rights Reserved. Major Professor: Dr. Walter D. Potter. Committee: Dr. Khaled Rasheed, Dr. Suchendra Bhandarkar. Electronic Version Approved: Maureen Grasso, Dean of the Graduate School, The University of Georgia, August 2005. DEDICATION: I dedicate this thesis to my lovely betrothed, Rachel. Every day I wake to hope and wonder because of her. She is my strength and happiness every moment that I am alive. She has continually, but gently, kept pushing me to get this project done, and I know without her careful insistence this never would have been completed.
  • Handwritten Digit Classification Using 8-Bit Floating Point Based Convolutional Neural Networks
    Downloaded from orbit.dtu.dk on: Apr 10, 2018. Handwritten Digit Classification using 8-bit Floating Point based Convolutional Neural Networks. Gallus, Michal; Nannarelli, Alberto. Publication date: 2018. Document Version: Publisher's PDF, also known as Version of record. Link back to DTU Orbit. Citation (APA): Gallus, M., & Nannarelli, A. (2018). Handwritten Digit Classification using 8-bit Floating Point based Convolutional Neural Networks. DTU Compute. (DTU Compute Technical Report-2018, Vol. 01). General rights: Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. Users may download and print one copy of any publication from the public portal for the purpose of private study or research. You may not further distribute the material or use it for any profit-making activity or commercial gain. You may freely distribute the URL identifying the publication in the public portal. If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim. Handwritten Digit Classification using 8-bit Floating Point based Convolutional Neural Networks. Michal Gallus and Alberto Nannarelli (supervisor), Danmarks Tekniske Universitet, Lyngby, Denmark. [email protected]. Abstract—Training of deep neural networks is often constrained by the available memory and computational power. This often causes it to run for weeks even when the underlying platform is employed with multiple GPUs. In order to address this problem, this paper proposes the usage of 8-bit floating point instead of single-precision floating point, which allows saving 75% of the space for all trainable parameters.
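To make the 75% figure concrete: an 8-bit float packs a sign, a few exponent bits and a few mantissa bits into one byte instead of the four bytes of single precision. The sketch below uses a 1-4-3 layout with bias 7 as an illustration; the exact format used in the report may differ, and subnormals and special values are omitted for brevity.

```python
# Illustrative fp8 format: 1 sign bit, 4 exponent bits (bias 7), 3 mantissa
# bits. Decode reconstructs the value; encode finds the nearest of the 256
# representable codes by brute force (fine for a demonstration).
def fp8_decode(byte):
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    mant = byte & 0x07
    if exp == 0:                       # subnormals ignored: treat as zero
        return sign * 0.0
    return sign * (1 + mant / 8) * 2.0 ** (exp - 7)

def fp8_encode(x):
    """Nearest representable fp8 code for x (brute force over all codes)."""
    return min(range(256), key=lambda b: abs(fp8_decode(b) - x))

code = fp8_encode(3.3)
approx = fp8_decode(code)              # coarse: only 3 mantissa bits
```

With only 3 mantissa bits the relative quantisation step is about 1/16, which is the precision/memory trade-off such training schemes accept.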
  • High Dynamic Range Video
    High Dynamic Range Video. Karol Myszkowski, Rafał Mantiuk, Grzegorz Krawczyk.
    Contents
    1 Introduction 5
      1.1 Low vs. High Dynamic Range Imaging 5
      1.2 Device- and Scene-referred Image Representations 7
      1.3 HDR Revolution 9
      1.4 Organization of the Book 10
        1.4.1 Why HDR Video? 11
        1.4.2 Chapter Overview 12
    2 Representation of an HDR Image 13
      2.1 Light 13
      2.2 Color 15
      2.3 Dynamic Range 18
    3 HDR Image and Video Acquisition 21
      3.1 Capture Techniques Capable of HDR 21
        3.1.1 Temporal Exposure Change 22
        3.1.2 Spatial Exposure Change 23
        3.1.3 Multiple Sensors with Beam Splitters 24
        3.1.4 Solid State Sensors 24
      3.2 Photometric Calibration of HDR Cameras 25
        3.2.1 Camera Response to Light 25
        3.2.2 Mathematical Framework for Response Estimation 26
        3.2.3 Procedure for Photometric Calibration 29
        3.2.4 Example Calibration of HDR Video Cameras 30
        3.2.5 Quality of Luminance Measurement 33
        3.2.6 Alternative Response Estimation Methods 33
        3.2.7 Discussion 34
    4 HDR Image Quality 39
      4.1 Visual Metric Classification 39
      4.2 A Visual Difference Predictor for HDR Images 41
        4.2.1 Implementation 43
    5 HDR Image, Video and Texture Compression 45
      5.1 HDR Pixel Formats and Color Spaces 46
        5.1.1 Minifloat: 16-bit Floating Point Numbers 47
        5.1.2 RGBE: Common Exponent 47
        5.1.3 LogLuv: Logarithmic Encoding 48
        5.1.4 RGB Scale: Low-complexity RGBE Coding 49
        5.1.5 LogYuv: Low-complexity LogLuv 50
        5.1.6 JND Steps: Perceptually Uniform Encoding
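One of the pixel formats listed in the compression chapter, RGBE ("common exponent"), stores an HDR pixel as three 8-bit mantissas sharing one 8-bit exponent, so a float RGB triple fits in 4 bytes. A minimal sketch of the classic encoding, without the special-case handling a production codec would need:

```python
# RGBE: scale all three channels by the power of two that normalises the
# largest channel, store the three scaled mantissas plus the shared exponent
# (biased by 128).
import math

def rgbe_encode(r, g, b):
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    frac, exp = math.frexp(v)          # v = frac * 2**exp, frac in [0.5, 1)
    scale = frac * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_decode(rm, gm, bm, e):
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)   # 2**(e - 128) / 256
    return (rm * f, gm * f, bm * f)

enc = rgbe_encode(1.0, 0.5, 0.25)
dec = rgbe_decode(*enc)
```

The shared exponent is what buys the dynamic range: the per-channel precision is only 8 bits relative to the brightest channel, which is usually acceptable because dim channels contribute little to the perceived color.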
  • Supporting Technology for Augmented Reality Game-Based Learning
    DOCTORAL THESIS: Supporting Technology for Augmented Reality Game-Based Learning. Hendrys Fabián Tobar Muñoz, 2017. Doctorate in Technology, supervised by PhD Ramon Fabregat and PhD Silvia Baldiris. Presented in partial fulfillment of the requirements for a doctoral degree from the University of Girona. The research reported in this thesis was partially sponsored by COLCIENCIAS – Colombia's Administrative Department of Science, Technology and Innovation – within its program of doctoral scholarships. The research reported in this thesis was carried out as part of the following projects: the ARreLS project, funded by the Spanish Economy and Competitiveness Ministry (TIN2011-23930), and the Open Co-Creation Project, funded by the Spanish Economy and Competitiveness Ministry (TIN2014-53082-R). The research reported in this thesis was developed within the research lines of the Communications and Distributed Systems research group (BCDS, ref GRCT40), which is part of the DURSI consolidated research group COMUNICACIONS I SISTEMES INTEL·LIGENTS. © Hendrys Fabián Tobar Muñoz, Girona, Catalonia, Spain, 2017. All Rights Reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without written permission granted by the author. Gracias a Dios Todopoderoso y a Su Divina Providencia. Este trabajo está dedicado a mi Reina Omaira que es lo más hermoso de este mundo. / Thanks to God Almighty and His Divine Providence. This work is dedicated to my Queen Omaira, who is the most beautiful of this world (I swear, this sounds better in Spanish). ACKNOWLEDGEMENTS: I have always liked games and digital technologies and I have always believed in their educational potential.
  • On the Automated Derivation of Domain-Specific UML Profiles
    Schriften aus der Fakultät Wirtschaftsinformatik und Angewandte Informatik der Otto-Friedrich-Universität Bamberg / Contributions of the Faculty Information Systems and Applied Computer Sciences of the Otto-Friedrich-University Bamberg, Band 37, 2019. On the Automated Derivation of Domain-Specific UML Profiles. Alexander Kraas, 2019. Bibliographic information of the German National Library: the German National Library lists this publication in the German National Bibliography; detailed bibliographic information is available online at http://dnb.d-nb.de/. This work was accepted as a dissertation by the Faculty of Information Systems and Applied Computer Sciences of the University of Bamberg. First reviewer: Prof. Dr. Gerald Lüttgen, Otto-Friedrich-University Bamberg, Germany. Second reviewer: Prof. Dr. Richard Paige, McMaster University, Canada. Date of the oral examination: 13 May 2019. This work is available as a free online version via the publication server (OPUS; http://www.opus-bayern.de/uni-bamberg/) of the University of Bamberg. The work – excluding cover, quotations and illustrations –