(12) Patent Application Publication (10) Pub. No.: US 2014/0184496 A1 (Gribetz et al.)

Total Pages: 16

File Type: pdf, Size: 1020 KB

(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2014/0184496 A1
     Gribetz et al.                    (43) Pub. Date: Jul. 3, 2014

(54) EXTRAMISSIVE SPATIAL IMAGING DIGITAL EYE GLASS APPARATUSES, METHODS AND SYSTEMS FOR VIRTUAL OR AUGMEDIATED VISION, MANIPULATION, CREATION, OR INTERACTION WITH OBJECTS, MATERIALS, OR OTHER ENTITIES

(71) Applicant: Meta Company, Portola Valley, CA (US)

(72) Inventors: Meron Gribetz, New York, NY (US); Steve G. Mann, Toronto (CA)

(73) Assignee: Meta Company, Portola Valley, CA (US)

(21) Appl. No.: 14/147,199

(22) Filed: Jan. 3, 2014

Related U.S. Application Data

(60) Provisional application No. 61/748,468, filed on Jan. 3, 2013; provisional application No. 61/916,773, filed on Dec. 16, 2013.

Publication Classification

(51) Int. Cl. G06F 3/01 (2006.01)
(52) U.S. Cl. .......... G06F 3/01

(57) ABSTRACT

A sensing and display apparatus, comprising: a first phenomenon interface configured to operatively interface with a first augmediated-reality space, and a second phenomenon interface configured to operatively interface with a second augmediated-reality space, is implemented as an extramissive spatial imaging digital eyeglass.

[Drawings: 35 sheets, including FIGS. 1A, 1AA, 1B, 1C, 1D, 1EE, 2F, 5, 5A, 8A, 8C, 9A, 10A, 10B, 11A, 11B, 11C, 13, and 14; the figures are images and are not reproduced in this text version.]

EXTRAMISSIVE SPATIAL IMAGING DIGITAL EYE GLASS APPARATUSES, METHODS AND SYSTEMS FOR VIRTUAL OR AUGMEDIATED VISION, MANIPULATION, CREATION, OR INTERACTION WITH OBJECTS, MATERIALS, OR OTHER ENTITIES

PRIORITY CLAIMS AND CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application is related to and claims domestic priority benefits under 35 USC §119(e) from U.S. Provisional Patent Application Ser. No. 61/748,468 filed on 3 Jan. 2013, and from U.S. Provisional Patent Application Ser. No. 61/916,773 filed Dec. 16, 2013; the entire contents of the aforementioned applications are expressly and hereby incorporated hereinto by reference.

BACKGROUND

[0002] Sensor technology is employed in contexts to monitor and/or measure information about an environment, such as the use of temperature sensors in industrial processes. Mobile devices employ increasingly powerful processing capabilities to support various applications, such as telephony and other mobile communications.

SUMMARY

[0003] In accordance with an aspect, there is provided a sensing and display apparatus, including: an interface assembly including: a first interface module configured to interface with a first computationally mediated space (for example, a scene, subject matter, or the like), the first interface module configured to exchange sensor signals and effector signals with the first space; and a second interface module configured to interface with a second computationally mediated space (for example, a user's own personal viewing space in or around their digital eye glass space), the second interface module configured to provide or exchange effector signals, and, in some embodiments, also sensor signals (for example, an eye-tracker, or the like) with the second space, and the effector signals being user presentable, at least in part, in any one of the first space and the second space; a processing apparatus operatively coupled to the interface assembly, and the processing apparatus configured to process the sensor signals and the effector signals conveyed by the interface assembly; and a memory assembly configured to tangibly embody a processing program, including a sequence of programmed instructions configured to direct the processing apparatus to execute operations on the sensor signals and the effector signals.

[0007] In accordance with an aspect, there is provided a user interface, including: a first interface section configured to display phenomena derived from a first space; and a second interface section configured to display phenomena derived from a second space.

[0008] In accordance with an aspect, there is provided a sensing and display apparatus, including: a first phenomenon interface configured to operatively interface with a phenomenon and/or phenomena in a first space; and a second phenomenon interface configured to operatively interface with a phenomenon and/or phenomena in a second space.

[0009] In accordance with an aspect, there are provided other aspects as identified in the claims.

[0010] Other aspects and features of the non-limiting embodiments (examples, options, etc.) may now become apparent upon review of the following detailed description of the non-limiting embodiments with the accompanying drawings.

DETAILED DESCRIPTION OF THE DRAWINGS

[0011] The non-limiting embodiments may be more fully appreciated by reference to the following description of the non-limiting embodiments when taken in conjunction with the accompanying drawings, in which:

[0012] FIG. 1A depicts a schematic example of a sensor and display apparatus 1300 in one embodiment;

[0013] FIG. 1AA depicts a schematic example of a method associated with the sensory and display apparatus 1300 of FIG. 1A in one embodiment;

[0014] FIG. 1B depicts a schematic example of a sensory and display apparatus 1300 in one embodiment;

[0015] FIG. 1C depicts a schematic example of a sensory and display apparatus 1300 in one embodiment;

[0016] FIG. 1D depicts a schematic example of a sensory and display apparatus 1300 in one embodiment.
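As a reading aid, the apparatus of paragraph [0003] is essentially a signal-routing architecture: two interface modules exchanging sensor and effector signals, a processing apparatus, and a program held in memory. The Python sketch below loosely illustrates that shape; the class names, the `Signal` type, and the console "effector" are invented here and are not the patent's terminology or implementation.

```python
# Minimal sketch of the claimed architecture, assuming simple in-memory
# signal objects; names beyond "interface module", "sensor signal", and
# "effector signal" are illustrative, not taken from the patent.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Signal:
    kind: str          # "sensor" or "effector"
    payload: bytes


@dataclass
class InterfaceModule:
    """Exchanges sensor/effector signals with one mediated space."""
    space_name: str
    inbox: List[Signal] = field(default_factory=list)

    def sense(self) -> List[Signal]:
        # In hardware this would read cameras, eye trackers, etc.
        drained, self.inbox = self.inbox, []
        return drained

    def effect(self, signal: Signal) -> None:
        # In hardware this would drive a display or other effector.
        print(f"[{self.space_name}] effector <- {signal.payload!r}")


class ProcessingApparatus:
    """Runs programmed instructions over signals from both interfaces."""

    def __init__(self, first: InterfaceModule, second: InterfaceModule,
                 program: Callable[[Signal], Signal]):
        self.first, self.second, self.program = first, second, program

    def step(self) -> None:
        # Sensor signals from the first (scene) space become effector
        # signals presented in the second (viewer) space.
        for s in self.first.sense():
            self.second.effect(self.program(s))


if __name__ == "__main__":
    scene = InterfaceModule("first space")
    viewer = InterfaceModule("second space")
    scene.inbox.append(Signal("sensor", b"depth-frame"))
    ProcessingApparatus(scene, viewer,
                        lambda s: Signal("effector", s.payload)).step()
```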
Recommended publications
  • Inverse Kinematic Infrared Optical Finger Tracking
    Inverse Kinematic Infrared Optical Finger Tracking
    Gerrit Hillebrand¹, Martin Bauer², Kurt Achatz¹, Gudrun Klinker²
    ¹Advanced Realtime Tracking GmbH, Am Öferl 3, Weilheim, Germany
    ²Technische Universität München, Institut für Informatik, Boltzmannstr. 3, 85748 Garching b. München, Germany
    e-mail: [email protected]

    Abstract. Human hand and finger tracking is an important input modality for both virtual and augmented reality. We present a novel device that overcomes the disadvantages of current glove-based or vision-based solutions by using inverse-kinematic models of the human hand. The fingertips are tracked by an optical infrared tracking system and the pose of the phalanxes is calculated from the known anatomy of the hand. The new device is lightweight and accurate and allows almost arbitrary hand movement. Examinations of the flexibility of the hand have shown that this new approach is practical because ambiguities in the relationship between fingertip positions and joint positions, which are theoretically possible, occur only scarcely in practical use.

    … the fingertips are tracked. All further parameters, such as the angles between the phalanx bones, are calculated from that. This approach is inverse compared to the current glove technology and results in much better accuracy. Additionally, the device is lightweight and comfortable to use.

    (Figure 1: The inverse kinematic infrared optical finger tracker)

    Keywords: Optical Tracking, Finger Tracking, Motion Capture

    1 Introduction. For many people, the most important and natural way to interact with the environment is by using their hands.

    1.1 Related Work. While even a standard keyboard can be seen as an input device based on human finger motion, we consider only input methods that allow …
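    The core trick this abstract describes (measure only fingertip positions, recover joint angles from the known anatomy) is ordinary inverse kinematics. Below is a minimal sketch for a planar two-phalanx finger; the real system uses a full 3-D anatomical hand model, and the function name and bone lengths are illustrative assumptions.

```python
# A minimal sketch of the paper's idea, assuming a planar two-phalanx
# finger with known bone lengths; the real system fits a full 3-D
# anatomical hand model, so this is illustrative only.
import math


def finger_ik(x: float, y: float, l1: float, l2: float):
    """Joint angles (radians) that place the fingertip at (x, y).

    x, y   -- fingertip position relative to the knuckle joint
    l1, l2 -- proximal and distal phalanx lengths
    """
    d2 = x * x + y * y
    # Law of cosines gives the flexion between the two phalanxes.
    cos_b = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_b <= 1.0:
        raise ValueError("fingertip out of reach for these bone lengths")
    beta = math.acos(cos_b)                 # inter-phalanx flexion
    alpha = math.atan2(y, x) - math.atan2(  # flexion at the knuckle
        l2 * math.sin(beta), l1 + l2 * math.cos(beta))
    # acos admits a mirrored solution; the paper notes such ambiguities
    # occur only rarely for anatomically plausible poses.
    return alpha, beta


# Example: 40 mm and 25 mm phalanxes, fingertip at (50 mm, 20 mm).
print(finger_ik(50.0, 20.0, 40.0, 25.0))
```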
  • AirPen: A Touchless Fingertip Based Gestural Interface for Smartphones and Head-Mounted Devices
    AirPen: A Touchless Fingertip Based Gestural Interface for Smartphones and Head-Mounted Devices
    Varun Jain ([email protected]) and Ramya Hebbalaguppe ([email protected]), TCS Research, New Delhi, India

    Abstract. Hand gestures are an intuitive, socially acceptable, and non-intrusive interaction modality in Mixed Reality (MR) and smartphone based applications. Unlike speech interfaces, they tend to perform well even in shared and public spaces. Hand gestures can also be used to interact with smartphones in situations where the user's ability to physically touch the device is impaired. However, accurate gesture recognition can be achieved through state-of-the-art deep learning models or with the use of expensive sensors. Despite the robustness of these deep learning models, they are computationally heavy and memory hungry, and obtaining real-time performance on-device without additional hardware is still a challenge. To address this, we propose AirPen: an analogue to pen on paper, but in air, for in-air writing and gestural commands that works seamlessly in First and Second Person View. The models are trained on a GPU machine and ported to an Android smartphone. AirPen comprises three deep learning models that work in tandem: MobileNetV2 for hand localisation, our custom fingertip regression architecture, followed by a Bi-LSTM model for gesture classification. The overall framework works in real-time on mobile devices and achieves a classification accuracy of 80% with an average latency of only 0.12 s.
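    The three-stage pipeline named in the abstract (MobileNetV2 localisation, fingertip regression, Bi-LSTM classification) can be pictured with a PyTorch sketch of the same shape. The layer sizes, heads, and the toy forward pass below are assumptions for illustration; the authors' actual architectures and training are not reproduced here.

```python
# A minimal PyTorch sketch of a three-stage pipeline shaped like the one
# the paper describes. Requires torch and torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class HandLocaliser(nn.Module):
    """MobileNetV2 backbone with a 4-value box head (cx, cy, w, h)."""
    def __init__(self):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(1280, 4), nn.Sigmoid())

    def forward(self, img):                    # img: (B, 3, H, W)
        return self.head(self.backbone(img))   # normalised hand box


class FingertipRegressor(nn.Module):
    """Small CNN mapping a hand crop to a normalised (x, y) fingertip."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, crop):
        return self.net(crop)


class GestureClassifier(nn.Module):
    """Bi-LSTM over a trajectory of fingertip points."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 64, n_classes)

    def forward(self, traj):                   # traj: (B, T, 2)
        out, _ = self.lstm(traj)
        return self.fc(out[:, -1])             # logits from the last step


if __name__ == "__main__":
    frame = torch.rand(1, 3, 224, 224)
    box = HandLocaliser()(frame)               # stage 1: where the hand is
    tip = FingertipRegressor()(frame)          # stage 2: fingertip position
    logits = GestureClassifier()(tip.unsqueeze(1).repeat(1, 30, 1))
    print(box.shape, tip.shape, logits.shape)  # stage 3: gesture logits
```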
  • AuraRing: Precise Electromagnetic Finger Tracking
    AuraRing: Precise Electromagnetic Finger Tracking
    Farshid Salemi Parizi, Eric Whitmire, and Shwetak Patel, University of Washington

    Fig. 1. AuraRing is a 5-DoF electromagnetic tracker that enables precise, accurate, and fine-grained finger tracking for AR, VR, and wearable applications. Left: a user writes the word "hello" in the air. Right: using AuraRing to play a song in a music application on a smart glass.

    Wearable computing platforms, such as smartwatches and head-mounted mixed reality displays, demand new input devices for high-fidelity interaction. We present AuraRing, a wearable magnetic tracking system designed for tracking fine-grained finger movement. The hardware consists of a ring with an embedded electromagnetic transmitter coil and a wristband with multiple sensor coils. By measuring the magnetic fields at different points around the wrist, AuraRing estimates the five degree-of-freedom pose of the ring. We develop two different approaches to pose reconstruction: a first-principles iterative approach and a closed-form neural network approach. Notably, AuraRing requires no runtime supervised training, ensuring user and session independence. AuraRing has a resolution of 0.1 mm and a dynamic accuracy of 4.4 mm, as measured through a user evaluation with optical ground truth. The ring is completely self-contained and consumes just 2.3 mW of power.

    CCS Concepts: • Human-centered computing → Interaction devices; Ubiquitous and mobile computing.

    Additional Key Words and Phrases: Electromagnetic tracking, mixed reality, wearable, input, finger tracking

    ACM Reference Format: Farshid Salemi Parizi, Eric Whitmire, and Shwetak Patel. 2019. AuraRing: Precise Electromagnetic Finger Tracking. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 3, 4, Article 150 (December 2019), 28 pages.
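    The "first-principles iterative" approach the abstract mentions amounts to fitting a magnetic-dipole field model to the wristband's coil readings. Below is a minimal SciPy sketch of that idea; the coil layout, the synthetic measurement, and the folding of physical constants into the field model are all assumptions, not the paper's calibration.

```python
# A sketch of iteratively fitting a dipole's 5-DoF pose (3-D position +
# 2-DoF moment direction) to field measurements at known sensor coils.
import numpy as np
from scipy.optimize import least_squares


def dipole_field(p, m, r):
    """Field of a dipole with moment m at position p, observed at r
    (constant factors folded in, since the measurement uses the same model)."""
    d = r - p
    dist = np.linalg.norm(d)
    d_hat = d / dist
    return (3.0 * d_hat * np.dot(m, d_hat) - m) / dist**3


# Assumed wristband coil positions, in metres.
SENSORS = np.array([[0.00, 0.00, 0.0], [0.03, 0.00, 0.0],
                    [0.00, 0.03, 0.0], [0.03, 0.03, 0.0]])


def residuals(x, measured):
    p = x[:3]                                  # ring position
    theta, phi = x[3], x[4]                    # moment direction (2 DoF)
    m = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi), np.cos(theta)])
    pred = np.concatenate([dipole_field(p, m, s) for s in SENSORS])
    return pred - measured


# Synthesise a measurement from a known pose, then recover the pose.
true_x = np.array([0.05, 0.02, 0.04, 0.8, 0.3])
meas = residuals(true_x, np.zeros(3 * len(SENSORS)))
fit = least_squares(residuals, x0=np.array([0.04, 0.0, 0.05, 1.0, 0.0]),
                    args=(meas,))
print(fit.x)   # approaches true_x (up to dipole symmetries)
```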
  • Lightweight Palm and Finger Tracking for Real-Time 3D Gesture Control
    Lightweight Palm and Finger Tracking for Real-Time 3D Gesture Control
    Georg Hackenberg, Rod McCall, Wolfgang Broll (Fraunhofer FIT)

    ABSTRACT. We present a novel technique implementing barehanded interaction with virtual 3D content by employing a time-of-flight camera. The system improves on existing 3D multi-touch systems by working regardless of lighting conditions and supplying a working volume large enough for multiple users. Previous systems were limited either by environmental requirements, working volume, or the computational resources necessary for real-time operation. By employing a time-of-flight camera, the system is capable of reliably recognizing gestures at the finger level in real-time at more than 50 fps with commodity computer hardware, using our newly developed precision hand and finger-tracking algorithm. Building on this algorithm, the system performs gesture recognition with simple constraint modeling over statistical aggregations of the hand appearances in a working volume.

    Freehand multi-touch has been explored within various approaches, e.g. Oblong's g-speak, after initially having been introduced in the popular Hollywood movie "Minority Report". They usually depended heavily on hand-worn gloves, markers, wrists, or other input devices, and typically did not achieve the intuitiveness, simplicity and efficiency of surface-based (2D) multi-touch techniques. In contrast to those approaches, the goal of our approach was to use barehanded interaction as a replacement for surface-based interaction. Only a vision-based approach will allow for freehand and barehanded 3D multi-touch interaction. The system must also provide sufficient solutions for the following four steps: detection of hand position without prior knowledge of existence; for each appearance, determine pose from image cues; track appearances over time; recognize gestures based on their trajectory and pose information. A sketch of the first step follows.
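    Of the four steps listed above, the first (detect hands without prior knowledge) is the easiest to sketch for a time-of-flight camera: threshold the depth image and keep large connected components. The depth threshold and size bound below are invented for illustration; the paper's pipeline goes well beyond this.

```python
# A minimal sketch of depth-based hand detection: segment candidate
# hands as connected foreground components closer than a threshold.
import numpy as np
from scipy import ndimage


def detect_hands(depth_mm: np.ndarray, max_depth_mm: float = 900.0,
                 min_pixels: int = 400):
    """Return centroids (row, col) of plausible hand blobs."""
    mask = (depth_mm > 0) & (depth_mm < max_depth_mm)  # 0 = invalid pixel
    labels, n = ndimage.label(mask)
    centroids = []
    for i in range(1, n + 1):
        blob = labels == i
        if blob.sum() >= min_pixels:                   # ignore speckle
            centroids.append(ndimage.center_of_mass(blob))
    return centroids


# Toy frame: a 200x200 scene at 2 m with a 60x60 "hand" at 0.6 m.
frame = np.full((200, 200), 2000.0)
frame[70:130, 90:150] = 600.0
print(detect_hands(frame))   # ~[(99.5, 119.5)]
```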
  • Perception-Action Loop in the Experience of Virtual Environments
    Perception-Action Loop in the Experience of Virtual Environments
    by Seung Wook Kim

    A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Architecture and the Designated Emphasis in New Media in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Yehuda E. Kalay (Chair), Professor Galen Cranz, Professor John F. Canny. Fall 2009.

    Abstract. The goal of this work is to develop an approach to the design of natural and immersive interaction methods for three-dimensional virtual environments. My thesis is that habitation and presence in any environment are based on a continuous process of perception and action between a person and his/her surroundings. The current practice of virtual environments, however, disconnects this intrinsic loop, separating perception and action into two different 'worlds': a physical one (for perception) and a virtual one (for action). This research is aimed at bridging the gap between those two worlds. Drawing from perceptual philosophy and psychology, the theoretical study in this dissertation identifies three embodiments of the natural perception-action loop: direct perceptual acts, proprioceptive locomotion, and motor intentionality. These concepts form the basis for the interaction methods proposed in this work, and I demonstrate these methods by implementing pertinent prototype systems. First, I suggest a view-dependent, non-planar display space that supports natural perceptual actions, thereby enhancing our field of view as well as depth perception.
  • Finger Tracking for Interaction in Augmented Environments
    Finger Tracking for Interaction in Augmented Environments
    Klaus Dorfmüller-Ulhaas (Imagination Computer Services, Austria; Vienna University of Technology, Austria), Dieter Schmalstieg (Vienna University of Technology, Austria)
    {klaus, dieter}@ims.tuwien.ac.at

    Abstract. Optical tracking systems allow three-dimensional input for virtual environment applications with high precision and without annoying cables. Spontaneous and intuitive interaction is possible through gestures. In this paper, we present a finger tracker that allows gestural interaction and is simple, cheap, fast, robust against occlusion, and accurate. It is based on a marked glove, a stereoscopic tracking system, and a kinematic 3-d model of the human finger. Within our augmented reality application scenario, the user is able to grab, translate, rotate, and release objects in an intuitive way. We demonstrate our tracking system in an augmented reality chess game allowing a user to interact with virtual objects.

    1 Introduction. In order to convey a sense of immersion, a virtual environment (VE) system must not only present a convincing visual rendering of the simulated objects, but also allow the user to manipulate them in a fast, precise, and natural way. Rather than relying on mouse or keyboard, direct manipulation of virtual objects is enabled by employing tracking with six degrees of freedom (6DOF). Frequently, this is done via hand-held props (such as a flying mouse or wand) that are fitted with magnetic trackers. However, this technology can only offer limited quality because it is inherently tethered, inaccurate, and susceptible to magnetic interference. Early on, optical tracking has been proposed as an alternative. One main reason why optical tracking is so attractive is because …
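    The stereoscopic part of a marked-glove tracker like this reduces to triangulating each marker from two camera rays. A midpoint-method sketch follows; the camera poses are toy values, and the paper's full system additionally fits a kinematic 3-d finger model to the triangulated points.

```python
# A sketch of the stereoscopic step: triangulate a glove marker as the
# midpoint of the shortest segment between two calibrated camera rays.
import numpy as np


def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c + t * d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # ~0 only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))


# Two cameras 20 cm apart, both seeing a marker at (0.05, 0.10, 0.50).
marker = np.array([0.05, 0.10, 0.50])
cam1, cam2 = np.zeros(3), np.array([0.20, 0.0, 0.0])
print(triangulate_midpoint(cam1, marker - cam1, cam2, marker - cam2))
```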
  • A Full Body Avatar-Based Telepresence System for Dissimilar Spaces (arXiv:2103.04380v1 [cs.HC], 7 Mar 2021)
    A Full Body Avatar-Based Telepresence System for Dissimilar Spaces
    Leonard Yoon, Dongseok Yang, Choongho Chung, Sung-Hee Lee (Korea Advanced Institute of Science and Technology)

    Figure 1: Our telepresence system allows for different room environments by retargeting the user's placement and deictic gesture to the remote avatar so as to preserve the user's context of placement and interaction. In this figure, a user in the left space is sitting facing a display and pointing at a point in the display. The user is represented in the right space by a male avatar sitting on the right side of a display, pointing at the corresponding point in the display. A user sitting in front of the display in the right space is represented by a female avatar sitting to the left of the display in the left space. Small figures are users' egocentric views.

    ABSTRACT. We present a novel mixed reality (MR) telepresence system enabling a local user to interact with a remote user through full-body avatars in their own rooms. If the remote rooms have different sizes and furniture arrangements, directly applying a user's motion to an avatar … However, in the case of using a whole space of two rooms with different shapes and object arrangements, directly placing an avatar with respect to the spatial relation between a user, an avatar, and shared objects of interest may cause a discrepancy, for instance, an avatar sitting in the air or penetrating some objects.
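    The retargeting idea in the caption (preserve where on a shared display the user points, even though the rooms differ) can be illustrated with display-normalised coordinates. Everything below (room and display dimensions, the 2-D simplification) is invented for illustration; the actual system retargets full-body placement and deictic gesture.

```python
# A toy sketch: express a pointing target in display-normalised
# coordinates so the avatar in a differently sized room points at the
# corresponding spot on its own display.
import numpy as np


def to_display_coords(point, origin, size):
    """World point on a display -> normalised (u, v) in [0, 1]^2."""
    return (np.asarray(point) - np.asarray(origin)) / np.asarray(size)


def from_display_coords(uv, origin, size):
    """Normalised (u, v) -> world point on the other room's display."""
    return np.asarray(origin) + np.asarray(uv) * np.asarray(size)


# Local room: 1.2 m x 0.8 m display, lower-left corner at (2.0, 0.8).
# Remote room: 2.0 m x 1.1 m display, lower-left corner at (0.5, 1.0).
local_target = (2.9, 1.4)                      # where the user points
uv = to_display_coords(local_target, (2.0, 0.8), (1.2, 0.8))
remote_target = from_display_coords(uv, (0.5, 1.0), (2.0, 1.1))
print(uv, remote_target)  # same relative spot on the remote display
```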
  • Strata: Fine-Grained Device-Free Tracking Using Acoustic Signals
    Strata: Fine-Grained Device-Free Tracking Using Acoustic Signals

    ABSTRACT. Next generation devices, such as virtual reality (VR), augmented reality (AR), and smart appliances, demand a simple and intuitive way for users to interact with them. To address such needs, we develop a novel acoustic based device-free tracking system, called Strata, to enable a user to interact with a nearby device by simply moving his finger. In Strata, a mobile (e.g., smartphone) transmits known audio signals at an inaudible frequency, and analyzes the received signal reflected by the moving finger to track the finger location. To explicitly take into account multipath propagation, the mobile estimates the channel impulse response (CIR), which characterizes signal traversal paths with different delays. Each channel tap corresponds to the multipath effects …

    … it is more natural to play games and interact with VR/AR objects using hands directly.

    Challenges: Enabling accurate device-free tracking is particularly challenging. In such a case, the reflected signal has to be used for tracking. The reflected signal is much weaker than the directly received signal (e.g., in free space, the directly received signal attenuates as 1/d² whereas the reflected signal attenuates as 1/d⁴, where d is the distance between the device and the target to be tracked). Moreover, it is more difficult to handle multiple reflection paths in device-free tracking. In device-based tracking, one may rely on finding the first arriving signal since the straight-line path between the sender and receiver is shortest.
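    The CIR step the abstract describes can be sketched in a few lines: with a known transmitted probe, the channel taps fall out of a linear least-squares fit, and a moving finger shows up as a change in one tap. The probe length, tap count, and noise level below are assumptions, not Strata's actual signal design.

```python
# A sketch of CIR estimation: solve rx = conv(tx, cir) for the taps by
# least squares, then watch which tap (path delay) the finger perturbs.
import numpy as np

rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=256)          # known probe sequence
true_cir = np.zeros(8)
true_cir[0], true_cir[5] = 1.0, 0.12            # direct path + reflection

# Received signal = convolution with the channel, plus noise.
rx = np.convolve(tx, true_cir)[:len(tx)] + 0.01 * rng.standard_normal(len(tx))

# Build the (Toeplitz-like) convolution matrix and solve for the taps.
L = len(true_cir)
A = np.stack([np.concatenate([np.zeros(k), tx[:len(tx) - k]])
              for k in range(L)], axis=1)
cir_hat, *_ = np.linalg.lstsq(A, rx, rcond=None)
print(np.round(cir_hat, 3))   # tap 5 carries the finger's reflection
```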
  • A Vision-Based Approach for Human Hand Tracking and Gesture Recognition
    A Vision-Based Approach for Human Hand Tracking and Gesture Recognition
    by Jiangnan Lu

    A thesis submitted to the Faculty of Graduate Studies and Research through Computer Science in partial fulfillment of the requirements for the degree of Master of Science at the University of Windsor, Windsor, Ontario, Canada, 2004.

    Recommended Citation: Lu, Jiangnan, "A vision-based approach for human hand tracking and gesture recognition." (2004). Electronic Theses and Dissertations. 871. https://scholar.uwindsor.ca/etd/871
  • Implementation of a 3D Hand Pose Estimation System (John-Certus Lack, 2014, Institut für Robotik und Mechatronik, IB.Nr.: 572-2014-22)
    Bachelor Thesis: Implementation of a 3D Hand Pose Estimation System
    John-Certus Lack, 2014. Institut für Robotik und Mechatronik (DLR), IB.Nr.: 572-2014-22. Author: John-Certus Lack; supervisor: Dr.-Ing. Ulrike Thomas; institute director: Dr. Alin Albu-Schäffer. This report contains 65 pages, 41 figures and 7 tables. Oberpfaffenhofen, 28.08.2014.

    Technische Universität München, Institute for Media Technology, Prof. Dr.-Ing. Eckehard Steinbach. Author: John-Certus Lack, matriculation number 03620284, Steinickeweg 7, 80798 Munich. Advisors: M.Sc. Nicolas Alt (TUM), Dr.-Ing. Ulrike Thomas (DLR). Begin: 01.04.2014; end: 11.09.2014.

    Abstract. The variety of means for human-computer interaction has experienced a surge in recent years and currently covers several methods for haptic communication and voice control. These technologies have proven to be a vital tool for information exchange and are already a fundamental part of daily human interaction. However, they are mainly restricted to two-dimensional input and the interpretation of formulated commands. To overcome these limitations, several approaches to 3D hand pose detection systems have been made that often imply computation-intensive models or large databases with synthetic training sets. Therefore, this thesis presents a 3D hand pose estimation system that provides real-time processing at low computational cost. The first part describes methods for hand-background segmentation and the tracking of specific feature points, which include the center of the palm and the fingertips.
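    The first part of the thesis (segment the hand, find the palm centre and fingertips) has a classic OpenCV flavour that can be sketched as follows. The synthetic mask and the hull-based fingertip rule are simplifications invented here, not the thesis' actual method.

```python
# A sketch under simple assumptions: palm centre as the interior point
# farthest from background (distance transform), fingertip candidates as
# convex-hull vertices of the hand contour. Requires OpenCV 4 and numpy.
import cv2
import numpy as np

# Toy binary mask standing in for a segmented hand (white = hand).
mask = np.zeros((240, 320), np.uint8)
cv2.circle(mask, (160, 150), 50, 255, -1)            # palm
for x in (110, 140, 170, 200):                       # four "fingers"
    cv2.rectangle(mask, (x, 40), (x + 14, 150), 255, -1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)

# Palm centre: maximum of the distance transform inside the mask.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
_, _, _, palm_centre = cv2.minMaxLoc(dist)

# Fingertip candidates: hull vertices well above the palm centre.
hull = cv2.convexHull(hand).reshape(-1, 2)
tips = [tuple(p) for p in hull if p[1] < palm_centre[1] - 40]
print("palm:", palm_centre, "fingertip candidates:", tips)
```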
  • Exploring 3D User Interface Technologies for Improving the Gaming Experience
    EXPLORING 3D USER INTERFACE TECHNOLOGIES FOR IMPROVING THE GAMING EXPERIENCE
    by Arun Kumar Kulshreshth
    M.S. University of Central Florida, 2012; M.Tech. Indian Institute of Technology Delhi, 2005

    A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science in the College of Engineering & Computer Science at the University of Central Florida, Orlando, Florida. Spring Term 2015. Major Professor: Joseph J. LaViola Jr.

    ABSTRACT. 3D user interface technologies have the potential to make games more immersive and engaging and thus potentially provide a better user experience to gamers. Although 3D user interface technologies are available for games, it is still unclear how their usage affects game play and if there are any user performance benefits. A systematic study of these technologies in game environments is required to understand how game play is affected and how we can optimize their usage in order to achieve a better game play experience. This dissertation seeks to improve the gaming experience by exploring several 3DUI technologies. In this work, we focused on stereoscopic 3D viewing (to improve the viewing experience) coupled with motion based control, head tracking (to make games more engaging), and faster gesture based menu selection (to reduce the cognitive burden associated with menu interaction while playing). We first studied each of these technologies in isolation to understand their benefits for games. We present the results of our experiments to evaluate the benefits of stereoscopic 3D (when coupled with motion based control) and head tracking in games. We discuss the reasons behind these findings and provide recommendations for game designers who want to make use of these technologies to enhance gaming experiences.
  • A Systematic Review of Commercial Smart Gloves: Current Status and Applications
    Sensors (Review): A Systematic Review of Commercial Smart Gloves: Current Status and Applications
    Manuel Caeiro-Rodríguez *, Iván Otero-González, Fernando A. Mikic-Fonte and Martín Llamas-Nistal
    atlanTTic Research Center for Telecommunication Technologies, Universidade de Vigo, 36312 Vigo, Spain; [email protected] (I.O.-G.); [email protected] (F.A.M.-F.); [email protected] (M.L.-N.)
    * Correspondence: [email protected]; Tel.: +34-683-243-422

    Abstract: Smart gloves have been under development during the last 40 years to support human-computer interaction based on hand and finger movement. Despite the many devoted efforts and the multiple advances in related areas, these devices have not become mainstream yet. Nevertheless, during recent years, new devices with improved features have appeared, being used for research purposes too. This paper provides a review of current commercial smart gloves focusing on three main capabilities: (i) hand and finger pose estimation and motion tracking, (ii) kinesthetic feedback, and (iii) tactile feedback. For the first capability, a detailed reference model of the hand and finger basic movements (known as degrees of freedom) is proposed. Based on the PRISMA guidelines for systematic reviews for the period 2015–2021, 24 commercial smart gloves have been identified, while many others have been discarded because they did not meet the inclusion criteria: currently active commercial and fully portable smart gloves providing some of the three main capabilities for the whole hand. The paper reviews the technologies involved and the main applications, and it discusses the current state of development. Reference models to support end users and researchers comparing and selecting the most appropriate devices are identified as a key need.
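    The "reference model of hand and finger degrees of freedom" the review proposes is, at heart, an enumeration of joints and their allowed motions. A toy data structure of that kind is sketched below; the joint names follow standard hand anatomy, but the DoF counts are simplified here and do not reproduce the review's model.

```python
# A toy per-finger degree-of-freedom enumeration; real gloves differ in
# which of these joints and motions they actually sense.
from dataclasses import dataclass


@dataclass(frozen=True)
class JointDoF:
    joint: str       # e.g. "MCP" (knuckle), "PIP", "DIP"
    flexion: bool    # bend toward the palm
    abduction: bool  # side-to-side spread (MCP only)


FINGER_MODEL = [
    JointDoF("MCP", flexion=True, abduction=True),
    JointDoF("PIP", flexion=True, abduction=False),
    JointDoF("DIP", flexion=True, abduction=False),
]

# 4 DoF per finger under this simplification; the thumb needs more.
print(sum(j.flexion + j.abduction for j in FINGER_MODEL))
```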