Augmented Reality Glasses: State of the Art and Perspectives


Quentin BODINIER (Supelec SERI student)
Alois WOLFF (Supelec SERI student)

Abstract—This paper aims at delivering a comprehensive and detailed outlook on the emerging world of augmented reality glasses. Through the study of the diverse technical fields involved in the design of such glasses, it analyzes the perspectives offered by this new technology and tries to answer the question: gadget or watershed?

Index Terms—augmented reality, glasses, embedded electronics, optics.

I. INTRODUCTION

Google has recently brought the attention of consumers to a topic that has interested scientists for thirty years: wearable technology, and more precisely "smart glasses". However, this commercial term does not fully account for the diversity and complexity of existing technologies. Therefore, in the following lines, we will try to give a comprehensive view of the state of the art in the different technological fields involved in this topic, for example optics and embedded electronics. Moreover, by presenting some commercial products due to be released in 2014, we will try to foresee the future of smart glasses and their possible uses.

II. AUGMENTED REALITY: A CLARIFICATION

There is a common misunderstanding about what "Augmented Reality" means. Let us quote a generally accepted definition of the concept: "Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory input."

It must not be confused with Virtual Reality, which consists in presenting the user with a completely alternate reality, simulated from scratch.

The concept of augmented reality is part of a wider one, mediated reality [1], which covers not only the superposition of computer-generated information on real images, but also the modification of those images. For example, Steve Mann, the "father of wearable computing", imagined a smart welder's mask that could shadow the welding arc [2].

Fig. 1. Different kinds of Mediated Reality

Therefore, devices such as Google Glass cannot be defined as ARG ("Augmented Reality Glasses"); they should rather be referred to as "ubiquitous computers", since they consist more of a complementary screen than of a device capable of altering reality before presenting it to the user. This difference now being clear in the reader's mind, the following lines will focus on augmented reality devices and the technical challenges they must face, which include optics, electronics, real-time image processing and integration.

III. OPTICS

Optics are the core challenge of augmented reality glasses, as they must display information on the widest possible Field Of View (FOV), very close to the user's eyes, and in a very compact device. Moreover, the images must be displayed so as to appear far from the user, so that the eye is not forced to accommodate, which would be a source of discomfort and ocular fatigue. Therefore, all devices rely on an optical system in charge of forming images at infinity. A number of systems have been designed by manufacturers to achieve this goal. Here are three of them, which give a good outlook on the main technologies used so far.
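As a side note not developed in the paper, the "image at infinity" requirement can be made concrete with the thin-lens equation; a minimal sketch, assuming ideal optics of focal length f:

\[
\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad d_o = f \;\Longrightarrow\; \frac{1}{d_i} = 0 \;\Longrightarrow\; d_i \to \infty .
\]

In other words, placing the display in the focal plane of the magnifying optics collimates its light, so the relaxed eye perceives the image as coming from far away and needs no accommodation.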
A. Heads Up Display

Heads Up Displays (HUD) aim at superimposing computer-created images on the user's FOV. Today, they rely on two main optical systems, which are presented in the following lines.

1) Classical systems: Though not fully accurate, figure 2 shows the principle of the most basic HUD, which corresponds to the system used in Google Glass.

Fig. 2. Google Glass optical system principle

It consists of several core components:

• A compact, very dense source display;
• A magnifying system in charge of giving the image a satisfying size;
• A system in charge of placing the image at infinity (here, both of these operations are performed by the parabolic mirror placed on the left);
• A beamsplitter (an optical component that lets a proportion of the light go through and reflects the other part), which allows the superposition of the light coming from the display on the user's field of view.

However, this system uses classical optical components that are difficult to integrate in a limited space. Besides, the unavoidable use of a beamsplitter at an angle of 45 degrees does not allow a wide FOV within limited dimensions. Hence the interest of another system, used for example by the French company Optinvent in their ARG: diffraction-based see-through video glasses.

2) Diffraction-based systems: Diffraction-based systems use a light guide in charge of carrying the image from the output of the collimator to the user's FOV. Until recently, this approach was almost impossible to use for commercial purposes, as the technology was both expensive and fragile. An innovation called "Clear VU" has recently made it wearable and affordable [3].

Fig. 3. Optical guide used in Clear VU

The principle of this technology is depicted in figure 3. The image created on the microdisplay is collimated, and the ray beam is then guided through the device by Total Internal Reflection before being impressed on the eye through a set of micro-mirrors. The turning point achieved by Clear VU is the realization of this system in molded plastic, which allows the microdisplay and the collimator, which remain the biggest unavoidable components, to be placed more discreetly. Therefore, it partially answers the challenges of integration, compactness and design.
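For reference, the Total Internal Reflection that traps the beam inside the guide occurs whenever the internal angle of incidence exceeds the critical angle given by Snell's law; a quick sketch, assuming a molded plastic guide of refractive index n ≈ 1.5 in air (the actual index of the Clear VU plastic is not stated in the paper):

\[
\theta_c = \arcsin\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{guide}}}\right) \approx \arcsin\left(\frac{1}{1.5}\right) \approx 41.8^\circ .
\]

Rays striking the guide walls at more than roughly 42 degrees from the normal are therefore reflected without loss, which is what lets the image travel along the guide to the extraction micro-mirrors.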
However, neither of these systems fully answers the ambitions of mediated (and augmented) reality: while they allow the system to display information, they cannot alter the reality seen by the user. This can be achieved by choosing a more complex optical system, known as the EyeTap.

B. EyeTap

The EyeTap is a system imagined and designed by Steve Mann's team, based on the same principles as the HUD [4]. However, it includes a complementary component named the "aremac", as well as a camera, so that the system sees what the user sees, as shown on figure 4. These two added components, linked by processing blocks that allow focus and zoom control, enable the superposition of perfectly synchronised artificial rays of light on natural ones, which allows a much better mixing of added information and reality.

Fig. 4. EyeTap functional diagram

Moreover, by replacing the beamsplitter with a two-sided mirror, the user is isolated from direct ambient light, and reality can be mediated before being presented to them; e.g., a very bright zone of the FOV may be shadowed artificially.

Therefore, the EyeTap is much more powerful than a simple HUD and can fully answer the challenges of mediated (including augmented) reality. However, it is much more complex as far as technology is concerned. Besides, both the HUD and the EyeTap use a physical display, which is both energy-consuming and difficult to integrate in compact devices. Hence, some new technologies draw the interest of scientists and manufacturers, among them the Virtual Retina Display (VRD), which appears to be a very promising alternative to the use of a physical display.

C. Virtual Retina Display

A functional diagram of the system is presented on figure 5. The principle of the VRD relies on a photon source, typically a laser, used to impress what could be called pixels directly on the user's retina. It therefore involves a scanning system in charge of pointing the laser at each point of the retina at high speed [5].

Fig. 5. Virtual Retina Display functional diagram

Therefore, the VRD is likely to answer the problems of compactness, optical aberrations and even power consumption. As such, the VRD seems to be the future of ARG as far as optics are concerned. However, the technology is not yet ready, but it is very likely that ARG will soar once the VRD is perfectly mastered [6].

IV. ELECTRONICS AND INTEGRATION

Until the announcement of the Google Glass Project [7] in 2011, the various Augmented Reality Glasses (ARG) products could easily be divided into two categories: HEAD-MOUNTED DISPLAYS and HEADS-UP DISPLAYS. The former regroups the devices able to display a 2D or 3D image over the whole of the user's field of view while obstructing the wearer's vision, whereas the latter regroups the devices that display images while still allowing the user to view his surroundings. The first group could achieve a better screen resolution, while severely impairing the view of the wearer's whereabouts. However, both device types had something in common: they were not designed to interact with the wearer. Simply put, they were output devices with no responsiveness to stimuli.

Fig. 6. Basic electronic schematic of a typical interactive ARG

B. Fully embedded solution: GOOGLE GLASS

Google is currently building their Glasses as a standalone Android device. Most of the computing will be done on-board, and the hardware is built accordingly [7].

Their architecture closely resembles that of a common smartphone: a TI System-on-Chip (SoC), the OMAP 4430, based on a 1.2 GHz dual-core ARM processor with a dedicated multimedia hardware accelerator, drives the display. That chip was even used in a 2011 Google/Samsung phone, the Galaxy Nexus.

Connectivity is ensured by WiFi and Bluetooth to maximize compatibility. On-board sensors include a 9-axis sensor, the MPU9150 by InvenSense, comprising an accelerometer, a gyroscope and a magnetometer and giving the device full orientation awareness, but also a GPS chip, a touchpad and a microphone for user interaction, and headphones. The whole system is powered by a 570 mAh Li-Poly battery, giving a 5-hour "normal utilization" time between charges.
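To make these last figures concrete, here is a minimal back-of-the-envelope sketch in Python (not from the paper): the capacity and runtime are the values quoted above, and the average current draw is derived from them rather than measured.

    # Back-of-the-envelope check of the Google Glass battery figures quoted above.
    # Capacity and runtime come from the paper; the average draw is derived.
    battery_capacity_mah = 570   # battery capacity quoted in the paper (mAh)
    runtime_hours = 5            # "normal utilization" time quoted in the paper (h)

    implied_avg_current_ma = battery_capacity_mah / runtime_hours
    print(f"Implied average draw: {implied_avg_current_ma:.0f} mA")  # -> 114 mA

An implied average draw on the order of 114 mA illustrates how aggressively such a device must duty-cycle its display, radios and sensors, since a smartphone-class SoC can draw several times that figure when fully active.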