Field Service Support with Google Glass and WebRTC


Support av Fälttekniker med Google Glass och WebRTC

PATRIK OLDSBERG

Degree project in Computer Engineering, first cycle, 15 credits (15 hp)
Supervisor at KTH: Reine Bergström
Examiner: Ibrahim Orhan
TRITA-STH 2014:68
KTH School of Technology and Health (Skolan för Teknik och Hälsa)
136 40 Handen, Sweden

Abstract

The Internet is dramatically changing the way we communicate, and it is becoming increasingly important for communication services to adapt to the context in which they are used.

The goal of this thesis was to research how Google Glass and WebRTC can be used to create a communication system tailored for field service support.

A prototype was created in which an expert is able to provide guidance for a field technician who is wearing Google Glass. A live video feed is sent from Glass to the expert, from which the expert can select individual images. When a still image is selected, it is displayed to the technician through Glass, and the expert is able to provide instructions using real-time annotations.

An algorithm that divides the selected image into segments was implemented using WebGL. This made it possible for the expert to highlight objects in the image by clicking on them.

The thesis also investigates different options for accessing the hardware video encoder on Google Glass.

Sammanfattning

Internet har dramatiskt ändrat hur vi kommunicerar, och det blir allt viktigare för kommunikationssystem att kunna anpassa sig till kontexten som de används i.

Målet med det här examensarbetet var att undersöka hur Google Glass och WebRTC kan användas för att skapa ett kommunikationssystem som är skräddarsytt för support av fälttekniker.

En prototyp skapades som låter en expert ge vägledning åt en fälttekniker som använder Google Glass. En videoström skickas från Glass till experten, och denne kan sedan välja ut enstaka bilder ur videon.
När en stillbild väljs så visas den upp på Glass för teknikern, och experten kan sedan ge instruktioner med hjälp av realtidsannoteringar.

En algoritm som delar upp den utvalda bilden i segment implementerades med WebGL. Den gjorde det möjligt för experten att markera objekt i bilden genom att klicka på dem.

Examensarbetet undersöker också olika sätt att få tillgång till hårdvarukodaren för video i Google Glass.

Contents

1 Introduction
  1.1 Background
  1.2 Problem Definition
  1.3 Research Goals and Contributions
  1.4 Limitations
2 Literature Review
  2.1 Wearable Computers
    2.1.1 History of Wearable Computers
  2.2 Augmented Reality
    2.2.1 Direct vs. Indirect Augmented Reality
    2.2.2 Video vs. Optical See-Through Display
    2.2.3 Effect on Health
    2.2.4 Positioning
  2.3 Collaborative Communication
  2.4 Image Processing
    2.4.1 Edge Detection
    2.4.2 Noise Reduction Filters
    2.4.3 Hough Transform
    2.4.4 Image Region Labeling
3 Technology
  3.1 Google Glass
    3.1.1 Timeline
    3.1.2 Interaction
    3.1.3 Microinteractions
    3.1.4 Hardware
    3.1.5 Development
    3.1.6 Display
  3.2 Alternative Devices
    3.2.1 Meta Pro
    3.2.2 Vuzix M100
    3.2.3 Oculus Rift Development Kit 1/2
    3.2.4 Recon Jet
    3.2.5 XMExpert
  3.3 Web Technologies
    3.3.1 WebRTC
    3.3.2 WebGL
  3.4 Mario
    3.4.1 GStreamer
  3.5 Video Encoding
    3.5.1 Hardware Accelerated Video Encoding
4 Implementation of Prototype
  4.1 Baseline Implementation
  4.2 Hardware Accelerated Video Encoding
    4.2.1 gst-omx on Google Glass
  4.3 Ideas to Implement
    4.3.1 Annotating the Technician's View
    4.3.2 Aligning Annotations and Video
  4.4 Still Image Annotation
    4.4.1 WebRTC Data Channel
    4.4.2 Out of Order Messages
  4.5 Image Processing
    4.5.1 Image Processing using WebGL
    4.5.2 Image Segmentation using Hough Transform
    4.5.3 Image Segmentation using Median Filters
    4.5.4 Region Labeling
  4.6 Glass Application
    4.6.1 Configuration
    4.6.2 OpenGL ES 2.0
    4.6.3 Photographs by Technician
  4.7 Signaling Server
    4.7.1 Sessions and Users
    4.7.2 Server-Sent Events
    4.7.3 Image Upload
5 Result
  5.1 Web Application
  5.2 Google Glass Application
6 Discussion
  6.1 Analysis of Method and the Result
    6.1.1 Live Video Annotations
    6.1.2 Early Prototype
    6.1.3 WebGL
    6.1.4 OpenMAX
    6.1.5 Audio
  6.2 Further Improvements
    6.2.1 Image Segmentation
    6.2.2 Video Annotations
    6.2.3 UX Evaluation
    6.2.4 Gesture and Voice Input
    6.2.5 More Annotation Options
    6.2.6 Logging
  6.3 Reusability
  6.4 Capabilities of Google Glass
  6.5 Effects on Human Health and the Environment
    6.5.1 Environmental Impacts
    6.5.2 Health Concerns
7 Conclusion
Bibliography

1 Introduction

1.1 Background

The Internet is dramatically changing the way we communicate, and it is becoming increasingly important for communication services to be adaptable to the context in which they are used. They also need to be flexible enough to integrate into new contexts without excessive effort.

Using a wearable device allows a communication system to be tailored to the context to a greater extent. With additional information available, such as the movement, heart rate, or perspective of the user, a richer user experience can be achieved. Wearable devices have huge potential in many different business fields.
An emerging form of wearable device is the head-mounted display (HMD). HMDs have the advantage of being able to display information in a hands-free format, which has huge potential in fields such as medicine and field service. Perhaps the most recognized wearable device at the moment is Google Glass, which for brevity will sometimes be referred to simply as ‘Glass’.

1.2 Problem Definition

The focus of the thesis is a generalized use case in which a field service technician has traveled to a remote site to solve a problem. The technician is equipped with Google Glass, or any equivalent HMD. While on his or her way to the site, the technician has information available such as the location of the site and the support ticket. The technician can look up information such as the hardware on the site, the spare parts expected to resolve the issue, and recent tickets for the same site. Once the technician has arrived at the site, the back office support will be notified.

When at the site, the technician can view manuals and use the device to document the work. If the problem is more complicated than expected, or the technician is unable to resolve the issue for some other reason, the device can be used to call an expert in the back office support.

The purpose of this thesis was to research how a contextual communication system can be tailored for this use case. The part in focus was the call to the back office support, which is made after the technician has arrived on site and requires assistance to resolve the issue.

1.3 Research Goals and Contributions

The goal was to research collaborative communication and find ways to tailor a communication system for this specialized kind of call. Different ways to give instructions and display information to the wearer of an HMD were investigated, as well as how these could be implemented. An experimental prototype using some of these ideas was then constructed.
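Before two WebRTC endpoints, such as the expert's browser and a Glass application, can exchange media, they need an out-of-band signaling channel through which session descriptions are relayed. The snippet below is a minimal in-memory sketch of that relay role in JavaScript; the `SignalingRelay` class and its method names are hypothetical illustrations, not code from the thesis.

```javascript
// Minimal in-memory signaling relay sketch. A real server would back each
// user's queue with a transport (e.g. server-sent events); here the relay
// only forwards opaque messages (SDP offers/answers) between session peers.
// All names are illustrative assumptions, not taken from the thesis code.

class SignalingRelay {
  constructor() {
    this.sessions = new Map(); // sessionId -> Map(userId -> message queue)
  }

  // A user joins a session; returns the queue of messages addressed to them.
  join(sessionId, userId) {
    if (!this.sessions.has(sessionId)) this.sessions.set(sessionId, new Map());
    const users = this.sessions.get(sessionId);
    users.set(userId, []);
    return users.get(userId);
  }

  // Forward a signaling message (e.g. an SDP offer) to every other
  // participant in the same session, tagged with the sender's id.
  send(sessionId, fromUserId, message) {
    const users = this.sessions.get(sessionId);
    if (!users) throw new Error(`unknown session ${sessionId}`);
    for (const [userId, queue] of users) {
      if (userId !== fromUserId) queue.push({ from: fromUserId, ...message });
    }
  }
}

// Example: the expert sends an offer; the technician's device receives it.
const relay = new SignalingRelay();
relay.join("site-42", "expert");
const glassQueue = relay.join("site-42", "technician");
relay.send("site-42", "expert", { type: "offer", sdp: "v=0 ..." });
console.log(glassQueue[0].type); // "offer"
```

The relay never interprets the payload: WebRTC deliberately leaves signaling unspecified, so any transport that delivers these messages in order will do.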
The prototype was implemented using web real-time communication (WebRTC). At the time the prototype was built, there was no WebRTC implementation available on Google Glass. A framework called Mario, developed internally at Ericsson Research, was used as the WebRTC implementation, as it runs on Android among other platforms.

The prototype comprised several subsystems that were all implemented from scratch:

• A web application built with HTML5, WebGL and WebRTC technology.
• A NodeJS server acting as web server and signaling server.
• A Google Glass application using the Glass Development Kit (GDK).

An evaluation of the prototype was done with focus on how it could be further improved, and on whether any of the implemented ideas can be applied to similar use cases and devices. The capabilities of Google Glass with regard to media processing, and the performance of Mario, were also evaluated.

1.4 Limitations

The time limit for the thesis was ten weeks, so a number of limitations were made to keep the work completable within this limit.

No in-depth evaluation of the user experience (UX) of the prototype would be done. The prototype was to be designed with UX in mind, using the results of the initial research, but no further evaluation would be done. A broad enough UX evaluation was not considered possible within the limited time frame.

The prototype would not be optimized for battery life and bandwidth usage. These restrictions are of course important, and the limitations imposed by them would be taken into consideration, but no analysis or optimization would be done to find optimal solutions with regard to these issues.

2 Literature Review

2.1 Wearable Computers

A wearable computer is an electronic device that is worn by the user.