Google Glass
Recommended publications
-
MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures with Low Probability of False Positives During Use
Journal of Machine Learning Research 14 (2013) 209-242. Submitted 10/11; Revised 6/12; Published 1/13. MAGIC Summoning: Towards Automatic Suggesting and Testing of Gestures With Low Probability of False Positives During Use. Daniel Kyu Hwa Kohlsdorf ([email protected]) and Thad E. Starner ([email protected]), GVU & School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332. Editors: Isabelle Guyon and Vassilis Athitsos. Abstract: Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures easily distinguishable from users' normal movements. Our tool MAGIC Summoning addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an "Everyday Gesture Library" or EGL). The EGL is quantized and indexed via multi-dimensional Symbolic Aggregate approXimation (SAX) to enable quick searching. MAGIC exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering. Suggested gestures are ordered according to brevity and simplicity, freeing the interface designer to focus on the user experience. Once a gesture is selected, MAGIC can output synthetic examples of the gesture to train a chosen classifier (for example, with a hidden Markov model). If the interface designer suggests his own gesture and provides several examples, MAGIC estimates how accurately that gesture can be recognized and estimates its false positive rate by comparing it against the natural movements in the EGL. We demonstrate MAGIC's effectiveness in gesture selection and helpfulness in creating accurate gesture recognizers. -
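The SAX quantization step named in this abstract is the part that makes the EGL quickly searchable, and it is easy to illustrate. The sketch below follows the generic published SAX recipe (z-normalize, piecewise aggregate approximation, then symbol assignment via Gaussian breakpoints); it is not the MAGIC tool's own code, and the segment count, the alphabet size of four, and the example signals are assumptions made purely for illustration.

```python
# Minimal sketch of SAX (Symbolic Aggregate approXimation) as used to index
# sensor streams: not MAGIC's implementation; parameters below are assumed.
import numpy as np

def sax_word(signal, n_segments=8, breakpoints=(-0.6745, 0.0, 0.6745)):
    """Convert one sensor stream into a short symbolic string."""
    x = np.asarray(signal, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-9)        # z-normalize
    segments = np.array_split(x, n_segments)     # Piecewise Aggregate Approximation
    paa = np.array([seg.mean() for seg in segments])
    symbols = np.searchsorted(breakpoints, paa)  # indices 0..3 -> letters 'a'..'d'
    return "".join(chr(ord('a') + int(s)) for s in symbols)

# Two different signals map to different SAX words, so candidate gestures can
# be screened against an indexed library of such words with cheap string lookups.
print(sax_word(np.linspace(0, 1, 128)))
print(sax_word(np.random.default_rng(0).normal(size=128)))
```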
EyeTap Devices for Augmented, Deliberately Diminished, or Otherwise Altered Visual Perception of Rigid Planar Patches of Real-World Scenes
Steve Mann ([email protected]) and James Fung ([email protected]), University of Toronto, 10 King's College Road, Toronto, Canada. EyeTap Devices for Augmented, Deliberately Diminished, or Otherwise Altered Visual Perception of Rigid Planar Patches of Real-World Scenes. Abstract: Diminished reality is as important as augmented reality, and both are possible with a device called the Reality Mediator. Over the past two decades, we have designed, built, worn, and tested many different embodiments of this device in the context of wearable computing. Incorporated into the Reality Mediator is an "EyeTap" system, which is a device that quantifies and resynthesizes light that would otherwise pass through one or both lenses of the eye(s) of a wearer. The functional principles of EyeTap devices are discussed, in detail. The EyeTap diverts into a spatial measurement system at least a portion of light that would otherwise pass through the center of projection of at least one lens of an eye of a wearer. The Reality Mediator has at least one mode of operation in which it reconstructs these rays of light, under the control of a wearable computer system. The computer system then uses new results in algebraic projective geometry and comparametric equations to perform head tracking, as well as to track motion of rigid planar patches present in the scene. We describe how our tracking algorithm allows an EyeTap to alter the light from a particular portion of the scene to give rise to a computer-controlled, selectively mediated reality. An important difference between mediated reality and augmented reality includes the ability to not just augment but also deliberately diminish or otherwise alter the visual perception of reality. For example, diminished reality allows additional information to be inserted without causing the user to experience information overload. Our tracking algorithm also takes into account the effects of automatic gain control, by performing motion estimation in both spatial as well as tonal motion coordinates. -
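The planar-patch tracking described here rests on planar projective geometry: points on a rigid planar patch map between two views through a 3x3 homography. The sketch below shows only that textbook mapping step; it is not Mann and Fung's algorithm, which additionally estimates the transformation and registers tonal (gain) changes with comparametric equations, and the matrix and corner coordinates are made up for illustration.

```python
# Minimal sketch of the projective operation behind planar-patch tracking:
# a 3x3 homography H maps points on a rigid planar patch between two views.
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through homography H using homogeneous coords."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to inhomogeneous

# Hypothetical homography: small rotation plus translation of the patch corners.
theta = np.deg2rad(5.0)
H = np.array([[np.cos(theta), -np.sin(theta), 12.0],
              [np.sin(theta),  np.cos(theta), -7.0],
              [0.0,            0.0,            1.0]])
corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], dtype=float)
print(apply_homography(H, corners))
```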
Augmented Reality Glasses State of the Art and Perspectives
Augmented Reality Glasses: State of the Art and Perspectives. Quentin BODINIER and Alois WOLFF (Supelec SERI students). Abstract—This paper aims at delivering a comprehensive and detailed outlook on the emerging world of augmented reality glasses. Through the study of the diverse technical fields involved in the conception of augmented reality glasses, it will analyze the perspectives offered by this new technology and try to answer the question: gadget or watershed? Index Terms—augmented reality, glasses, embedded electronics, optics. I. INTRODUCTION: Google has recently brought the attention of consumers to a topic that has interested scientists for thirty years: wearable technology, and more precisely "smart glasses". However, this commercial term does not fully take account of the diversity and complexity of existing technologies. Therefore, in these lines, we will try to give a comprehensive view of the state of the art in the different technological fields involved in this topic, for example optics and embedded electronics. Moreover, by presenting some commercial products that will begin to be released in 2014, we will try to foresee the future of smart glasses and their possible uses. [Fig. 1: Different kinds of Mediated Reality] [...] augmented reality devices and the technical challenges they must face, which include optics, electronics, real-time image processing and integration. II. AUGMENTED REALITY: A CLARIFICATION: There is a common misunderstanding about what "Augmented Reality" means. Let us quote a generally accepted definition of the concept: "Augmented reality (AR) is a live, copy, view of a physical, real-world environment whose elements are augmented (or supplemented) by computer-generated sensory [...]". III. OPTICS: Optics are the core challenge of augmented reality glasses, as they need to display information on the widest Field Of View (FOV) possible, very close to the user's eyes and in a very compact device. -
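The field-of-view constraint mentioned under Optics can be made concrete with the flat-image geometry FOV = 2·arctan(w / 2d): the angular extent of an image of width w seen at distance d. The sketch below only illustrates why a wide FOV is hard to obtain from a small display very close to the eye; the numbers are hypothetical and are not taken from the paper.

```python
# Back-of-the-envelope field-of-view of a flat (virtual) image of width w
# viewed at distance d: FOV = 2 * atan(w / (2 * d)). Numbers are illustrative.
import math

def fov_deg(image_width_m, viewing_distance_m):
    return math.degrees(2 * math.atan(image_width_m / (2 * viewing_distance_m)))

print(fov_deg(0.012, 0.025))  # ~27 deg: a 12 mm virtual image at 25 mm from the eye
print(fov_deg(1.0, 2.0))      # ~28 deg: roughly a 1 m wide screen seen from 2 m
```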
Architectural Model for an Augmented Reality Based Mobile Learning Application Oluwaranti, A
Journal of Multidisciplinary Engineering Science and Technology (JMEST), ISSN: 3159-0040, Vol. 2 Issue 7, July 2015. Architectural Model For An Augmented Reality Based Mobile Learning Application. Oluwaranti, A. I., Obasa A. A., Olaoye A. O. and Ayeni S., Department of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria, [email protected]. Abstract—The work developed an augmented reality (AR) based mobile learning application for students. It implemented, tested and evaluated the developed AR based mobile learning application. This is with the view to providing an improved and enhanced learning experience for students. The augmented reality system uses the marker-based technique for the registration of virtual contents. The image tracking is based on scanning by the inbuilt camera of the mobile device, while the corresponding virtual augmented information is displayed on its screen. The recognition of scanned images was based on the Vuforia Cloud Target Recognition System (VCTRS). The developed mobile application was modeled using Object Oriented modeling techniques. [...] students. It presents a model to utilize an Android based smart phone camera to scan 2D templates and overlay the information in real time. The model was implemented and its performance evaluated with respect to its ease of use, learnability and effectiveness. II. LITERATURE REVIEW: Augmented reality, commonly referred to as AR, has garnered significant attention in recent years. This terminology has been used to describe the technology behind the expansion or intensification of the real world. To "augment reality" is to "intensify" or "expand" reality itself [4]. Specifically, AR is the ability to superimpose digital media on the real world through the screen of a device such as a personal computer or a smart phone, to create and show users a world full of [...] -
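As a rough illustration of the marker-based pipeline this abstract describes (scan a known 2D template with the device camera, then overlay virtual information where it is found), the sketch below uses OpenCV template matching as a stand-in. The paper itself uses the Vuforia Cloud Target Recognition System on Android; the file names, confidence threshold, and overlay text here are invented for the example.

```python
# Illustrative stand-in for marker-based AR registration: find a known 2D
# template in a captured frame and overlay virtual content at its location.
import cv2

frame = cv2.imread("camera_frame.jpg")       # hypothetical captured camera frame
template = cv2.imread("lesson_marker.png")   # hypothetical 2D template image
h, w = template.shape[:2]

scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best, _, top_left = cv2.minMaxLoc(scores)

if best > 0.8:                               # assumed confidence threshold
    x, y = top_left
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, "Lesson 3: the water cycle", (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
cv2.imwrite("augmented_frame.jpg", frame)
```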
Yonghao Yu. Research on Augmented Reality Technology and Build AR Application on Google Glass
Yonghao Yu. Research on Augmented Reality Technology and Build AR Application on Google Glass. A Master's Paper for the M.S. in I.S. degree. April 2015. 42 pages. Advisor: Brad Hemminger. This article introduces augmented reality technology, some current applications, and augmented reality technology for wearable devices. Then it introduces how to use NyARToolKit as a software library to build AR applications. The article also introduces how to design an AR application for Google Glass. The application can recognize two different images through NyARToolKit's built-in functions. After finding matching pattern files, the application draws different 3D graphics according to the different input images. Headings: Augmented Reality; Google Glass Application - Design; Google Glass Application - Development. RESEARCH ON AUGMENTED REALITY TECHNOLOGY AND BUILD AR APPLICATION ON GOOGLE GLASS by Yonghao Yu. A Master's paper submitted to the faculty of the School of Information and Library Science of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the degree of Master of Science in Information Science. Chapel Hill, North Carolina, April 2015. Approved by Brad Hemminger. -
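The application logic described (recognize one of two reference images, then draw a different 3D graphic for each) amounts to a match-and-dispatch step. The sketch below illustrates that logic in Python with OpenCV template matching; it does not reproduce NyARToolKit's Java API, and the pattern names, image files, threshold, and draw functions are hypothetical placeholders.

```python
# Sketch of the two-image dispatch described above: whichever reference pattern
# best matches the camera frame (above a threshold) selects which graphic is drawn.
import cv2

def match_score(frame, pattern):
    """Peak normalized cross-correlation of a reference pattern within the frame."""
    return cv2.minMaxLoc(cv2.matchTemplate(frame, pattern, cv2.TM_CCOEFF_NORMED))[1]

def draw_cube():
    print("render 3D cube")      # stand-in for the actual 3D drawing code

def draw_pyramid():
    print("render 3D pyramid")

frame = cv2.imread("glass_camera_frame.jpg")                 # hypothetical frame
patterns = {"pattern_a": (cv2.imread("pattern_a.png"), draw_cube),
            "pattern_b": (cv2.imread("pattern_b.png"), draw_pyramid)}

scores = {name: match_score(frame, pat) for name, (pat, _) in patterns.items()}
best = max(scores, key=scores.get)
if scores[best] > 0.75:                                      # assumed threshold
    patterns[best][1]()          # call the renderer tied to the recognized image
```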
University of Southampton Research Repository
University of Southampton Research Repository. Copyright © and Moral Rights for this thesis and, where applicable, any accompanying data are retained by the author and/or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis and the accompanying data cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder/s. The content of the thesis and accompanying research data (where applicable) must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holder/s. When referring to this thesis and any accompanying data, full bibliographic details must be given, e.g. Thesis: M Zinopoulou (2019) "A Framework for improving Engagement and the Learning Experience through Emerging Technologies", University of Southampton, Faculty of Engineering and Physical Sciences, School of Electronics and Computer Science (ECS), PhD Thesis, 119 pages. Data: M Zinopoulou (2019) "A Framework for improving Engagement and the Learning Experience through Emerging Technologies". Faculty of Engineering and Physical Sciences, School of Electronics and Computer Science (ECS). A Framework for improving Engagement and the Learning Experience through Emerging Technologies, by Maria Zinopoulou. Supervisors: Dr. Gary Wills and Dr. Ashok Ranchhod. Thesis submitted for the degree of Doctor of Philosophy, November 2019, University of Southampton. Abstract: Advancements in Information Technology and Communication have brought about a new connectivity between the digital world and the real world. Emerging Technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) and their combination as Extended Reality (XR), Artificial Intelligence (AI), the Internet of Things (IoT) and Blockchain Technology are changing the way we view our world and have already begun to impact many aspects of daily life. -
Prozessunterstützung Für Den Entwurf Von Wearable-Computing-Systemen
Process Support for the Design of Wearable Computing Systems (Prozessunterstützung für den Entwurf von Wearable-Computing-Systemen). Dissertation approved by the Department of Computer Science of Technische Universität Darmstadt for the academic degree of Dr. rer. nat., submitted by Dipl.-Inform. Tobias Klug, born in Frankfurt. Date of submission: 15.4.2008; date of defense: 30.5.2008. Referees: Prof. Dr. Max Mühlhäuser, Darmstadt, and Prof. Dr.-Ing. Ralph Bruder, Darmstadt. Darmstadt 2008, Darmstädter Dissertationen D17. Acknowledgments: This dissertation is dedicated neither to Steffi Graf nor to André Agassi, but to the many people who have accompanied and supported me over the past three and a half years. First of all, I would like to thank my doctoral advisor Prof. Max Mühlhäuser for the suggestions and discussions that contributed decisively to this work. I would also like to thank my girlfriend Nina Steinert and my parents, who supported my decision to pursue a doctorate and accompanied me throughout; they motivated me again and again and also actively helped with proofreading the final text. Furthermore, I would like to thank Markus Roth, Hristo Indzhov and Svenja Kahn, whose Diplom and Bachelor theses contributed decisively to this work. I also thank the many colleagues at the university and at SAP Research for the friendly and relaxed working atmosphere. I thank Victoria Carlsson, Andreas Zinnen and Thomas Ziegert for the good cooperation and fruitful discussions within the wearIT@work project and beyond. I thank my office colleagues at the university, Melanie Hartmann and Fernando Lyardet, who time and again had to listen to and comment on my wild ideas. -
The Challenges of Wearable Computing: Part 2
THE CHALLENGES OF WEARABLE COMPUTING: PART 2. Thad Starner, Georgia Institute of Technology. WEARABLE COMPUTING PURSUES AN INTERFACE IDEAL OF A CONTINUOUSLY WORN, INTELLIGENT ASSISTANT THAT AUGMENTS MEMORY, INTELLECT, CREATIVITY, COMMUNICATION, AND PHYSICAL SENSES AND ABILITIES. MANY CHALLENGES AWAIT WEARABLE DESIGNERS. PART 2 BEGINS WITH THE CHALLENGES OF NETWORK RESOURCES AND PRIVACY CONCERNS. THIS SURVEY DESCRIBES THE POSSIBILITIES OFFERED BY WEARABLE SYSTEMS AND, IN DOING SO, DEMONSTRATES ATTRIBUTES UNIQUE TO THIS CLASS OF COMPUTING. Challenges: The most immediately striking challenge in designing wearable computers is creating appropriate interfaces. However, the issues of power use, heat dissipation, networking, and privacy provide a necessary framework in which to discuss interface. Part 1 of this article covers the first two of these issues; Part 2 begins with the networking discussion. Networking: As with any wireless mobile device, the amount of power and the type of services available can constrain networking. Wearable computers could conserve resources through improved coordination with the user interface. For example, the speed at which a given information packet is transferred can be balanced [...] throughput. Another serious issue is open standards to enable interoperability between different services. For example, only one long-range radio should be necessary to provide telephony, text messaging, Global Positioning System (GPS) correction signals, and so on. For wearable computers, networking involves communication off body to the fixed network, on body among devices, and near body with objects near the user. Each of these three network types requires different design decisions. Designers must also consider possible interference between the networks. Off-body communications: Wireless communication from mobile devices to fixed infrastructure is the most thoroughly researched of these issues. -
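The trade-off this excerpt alludes to (coordinating which radio carries which traffic to conserve power) can be illustrated with a toy energy model in which the cost of a transfer is roughly radio power times the sum of startup time and payload divided by throughput. All numbers in the sketch below are invented for illustration and are not taken from the article.

```python
# Toy energy model for choosing a radio on a wearable: energy to ship a payload
# = radio power * (startup time + payload / data rate). All figures are invented.
def transfer_energy_joules(payload_bytes, power_watts, startup_s, rate_bytes_per_s):
    return power_watts * (startup_s + payload_bytes / rate_bytes_per_s)

payload = 50_000  # 50 kB sensor log (hypothetical)
wide_area = transfer_energy_joules(payload, power_watts=2.0, startup_s=1.5,
                                   rate_bytes_per_s=20_000)
short_range = transfer_energy_joules(payload, power_watts=0.1, startup_s=0.2,
                                     rate_bytes_per_s=200_000)
print(f"wide-area radio: {wide_area:.2f} J, short-range radio: {short_range:.3f} J")
```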
Cyborgs and Enhancement Technology
philosophies Article Cyborgs and Enhancement Technology Woodrow Barfield 1 and Alexander Williams 2,* 1 Professor Emeritus, University of Washington, Seattle, Washington, DC 98105, USA; [email protected] 2 140 BPW Club Rd., Apt E16, Carrboro, NC 27510, USA * Correspondence: [email protected]; Tel.: +1-919-548-1393 Academic Editor: Jordi Vallverdú Received: 12 October 2016; Accepted: 2 January 2017; Published: 16 January 2017 Abstract: As we move deeper into the twenty-first century there is a major trend to enhance the body with “cyborg technology”. In fact, due to medical necessity, there are currently millions of people worldwide equipped with prosthetic devices to restore lost functions, and there is a growing DIY movement to self-enhance the body to create new senses or to enhance current senses to “beyond normal” levels of performance. From prosthetic limbs, artificial heart pacers and defibrillators, implants creating brain–computer interfaces, cochlear implants, retinal prosthesis, magnets as implants, exoskeletons, and a host of other enhancement technologies, the human body is becoming more mechanical and computational and thus less biological. This trend will continue to accelerate as the body becomes transformed into an information processing technology, which ultimately will challenge one’s sense of identity and what it means to be human. This paper reviews “cyborg enhancement technologies”, with an emphasis placed on technological enhancements to the brain and the creation of new senses—the benefits of which may allow information to be directly implanted into the brain, memories to be edited, wireless brain-to-brain (i.e., thought-to-thought) communication, and a broad range of sensory information to be explored and experienced. -
Wearable Computer Interaction Issues in Mediated Human to Human Communication
Wearable Computer Interaction Issues in Mediated Human to Human Communication. Mikael Drugge, Division of Media Technology, Department of Computer Science and Electrical Engineering, Luleå University of Technology, SE-971 87 Luleå, Sweden. November 2004. Supervisor: Peter Parnes, Ph.D., Luleå University of Technology. Abstract: This thesis presents the use of wearable computers as mediators for human to human communication. As the market for on-body wearable technology grows, the importance of efficient interactions through such technology becomes more significant. Novel forms of communication is made possible due to the highly mobile nature of a wearable computer coupled to a person. A person can, for example, deliver live video, audio and commentary from a remote location, allowing local participants to experience it and interact with people on the other side. In this way, knowledge and information can be shared over a distance, passing through the owner of the wearable computer who acts as a mediator. To enable the mediator to perform this task in the most efficient manner, the interaction between the user, the wearable computer and the other people involved, needs to be made as natural and unobtrusive as possible. One of the main problems of today is that the virtual world offered by wearable computers can become too immersive, thereby distancing its user from interactions in the real world. At the same time, the very same immersion serves to let the user sense the remote participants as being there, accompanying and communicating through the virtual world. The key here is to get the proper balance between the real and the virtual worlds; remote participants should be able to experience a distant location through the user, while the user should similarly experience their company in the virtual world. -
THAD E. STARNER, Ph.D. Associate Professor of Computing Georgia Institute of Technology
THAD E. STARNER, Ph.D. Associate Professor of Computing Georgia Institute of Technology Research Interest: Wearable/mobile computing interfaces and pattern recognition. Educational Background B.S. 1991 MIT, Cambridge MA Computer Science B.S. 1991 MIT, Cambridge MA Brain and Cognitive Science M.S. 1995 MIT Media Laboratory, Cambridge MA Media Arts and Sciences Ph.D. 1999 MIT Media Laboratory, Cambridge MA Media Arts and Sciences Professional Employment Associate Professor of Computing, Georgia Tech (Atlanta, GA), 2006-present Assistant Professor of Computing, Georgia Tech (Atlanta, GA), 1999-2006 Visiting Professor, ECE, Swiss Federal Inst. of Technology (ETH) (Zurich, Switzerland), 2002 Research Assistant, MIT Media Laboratory (Cambridge, MA), 1993-1999 Associate Scientist, Speech Systems Group, BBN (Cambridge, MA), 1992-1993 Experience Summary Thad Starner is a wearable computing pioneer, being the first person to advocate wearing a computer as an everyday personal assistant and having done so in his own life since 1993. He is a founder of the MIT Wearable Computing Project and the IEEE Technical Committee on Wearable Information Systems, which supervises the annual International Symposium on Wearable Computers. Professor Starner’s most recent work demonstrates a new speech interface paradigm where the user can control a mobile computer directly through his conversation with another party. Other recent work has shown that desktop typing speeds are achievable on small mobile keypads. Trained in computer vision, Professor Starner’s work on computer recognition of American Sign Language (ASL) is one of the most cited works on gesture recognition. Recent extensions of this work have shown the feasibility of an ASL tutor for deaf children and gesture interfaces appropriate for mobile devices. -
Google Glass
International Research Journal of Engineering and Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072, Volume: 02 Issue: 01 | Mar-2015, www.irjet.net. ADVANCE TECHNOLOGY - GOOGLE GLASS. Pooja S. Mankar, Computer Science & Engineering, J.D.I.E.T. ABSTRACT: Most of the people who have seen the glasses may not be allowed to speak publicly about them; a major feature of the glasses was the location information. Google will be able to capture images to its computers and return augmented reality information to the person wearing them through the camera already built into the glasses. For the moment, if a person is looking at a landmark, he could see historical and detailed information, and also comments about it that his friends left. If its facial recognition software becomes accurate enough, the glasses could remind a wearer when and how he met the vaguely familiar person standing in front of him at a function or party. [...] the Project Glass research and development project [2], with the mission of producing a mass-market ubiquitous computer. The frames do not currently have fitted lenses. Google is in the process of considering partnerships with sunglass retailers such as Ray-Ban or Warby Parker, and wishes to open retail shops where users can try on the device. People who wear prescription glasses cannot use the Explorer Edition, but Google has confirmed that Glass will be compatible with frames and lenses [3] according to the wearer's prescription, and possibly attachable to normal prescription glasses. Google X Lab, which has experience with other futuristic technologies such as driverless cars, developed this Glass.