Hand-and-Finger-Awareness for Mobile Touch Interaction Using Deep Learning

Total Pages: 16

File Type: PDF, Size: 1020 KB

Hand-and-Finger-Awareness for Mobile Touch Interaction using Deep Learning

A dissertation accepted by the Faculty of Computer Science, Electrical Engineering and Information Technology and the Stuttgart Research Centre for Simulation Technology of the University of Stuttgart for the degree of Doctor of Natural Sciences (Dr. rer. nat.). Submitted by Huy Viet Le from Nürtingen. Main referee: Prof. Dr. Niels Henze. Co-referee: Prof. Antti Oulasvirta. Examiner: Jun.-Prof. Dr. Michael Sedlmair. Date of the oral examination: 28 March 2019. Institut für Visualisierung und Interaktive Systeme der Universität Stuttgart, 2019.

Zusammenfassung

Mobile devices such as smartphones and tablets have by now replaced desktop computers for a wide range of tasks. Virtually every smartphone has a touch-sensitive display (touchscreen), which combines input and output in a single interface and thereby enables intuitive interaction. With this success, applications that were previously available only for desktop computers have become available for mobile devices as well. This shift increased the mobility of computers and allows users to use applications while on the move.

Despite the success of touchscreens, traditional input devices such as keyboard and mouse are still superior due to their input capabilities. A mouse has several buttons with which different functions can be triggered at the same pointer position. A keyboard additionally provides modifier keys that multiply the functionality of every other key. In contrast, the input capabilities of touchscreens are limited to the two-dimensional coordinates of a touch. This entails a number of challenges that impair usability. Among others, the possibilities for providing shortcuts are limited, which contradicts Shneiderman's golden rules for interface design. Moreover, usually only one finger is used for input, which slows down the interaction. Further challenges, such as the fat-finger problem and the limited reachability on large devices, add further inconveniences. Novel touch-based interaction methods are needed to extend the input capabilities of touchscreens and to enable input with multiple fingers, as is common with traditional input devices.

This thesis investigates how individual fingers and parts of the hand can be enabled to provide input on a mobile device and how their inputs can be distinguished. This concept is referred to as "hand-and-finger-aware" interaction. By recognizing the hand and fingers, different functions can be assigned to individual fingers and parts of the hand, which extends the input capabilities and solves many challenges of touch interaction. Furthermore, applying the concept of "hand-and-finger-aware" interaction to the entire device surface also allows the fingers on the back, which previously only held the device, to be used for input. This addresses further challenges of touch interaction and offers many possibilities for realizing shortcuts. This dissertation contains the results of twelve studies that focus on the design aspects, the technical feasibility, and the usability of "hand-and-finger-aware" interaction.
In a first step, the ergonomics and the behavior of the hand are examined in order to inspire the development of new interaction techniques. Subsequently, we investigate how well individual fingers and parts of the hand can be recognized using deep learning techniques and the raw data of capacitive sensors, using both a single capacitive touchscreen and a device that senses touch on its entire surface. Based on this, we present four studies that address bringing shortcuts from computer keyboards to mobile devices in order to improve the usability of text editing on mobile devices. In doing so, we follow a user-centered design process adapted for the application of deep learning. The core contribution of this dissertation ranges from deeper insights into interaction with different fingers and parts of the hand, through a technical contribution on identifying the source of a touch using deep learning techniques, to approaches for solving the challenges of mobile touch input.

Abstract

Mobile devices such as smartphones and tablets have replaced desktop computers for a wide range of everyday tasks. Virtually every smartphone incorporates a touchscreen which enables an intuitive interaction through a combination of input and output in a single interface. Due to the success of touch input, a wide range of applications became available for mobile devices which were previously exclusive to desktop computers. This transition increased the mobility of computing devices and enables users to access important applications even while on the move.

Despite the success of touchscreens, traditional input devices such as keyboard and mouse are still superior due to their rich input capabilities. For instance, computer mice offer multiple buttons for different functions at the same cursor position, while hardware keyboards provide modifier keys which augment the functionality of every other key. In contrast, touch input is limited to the two-dimensional location of touches sensed on the display. The limited input capabilities slow down the interaction and pose a number of challenges which affect the usability. Among others, shortcuts can hardly be provided, which affects experienced users and contradicts Shneiderman's golden rules for interface design. Moreover, the use of mostly one finger for input slows down the interaction, while further challenges such as the fat-finger problem and limited reachability add additional inconveniences. Although the input capabilities are sufficient for simple applications, more complex everyday tasks which require intensive input, such as text editing, are still not widely adopted yet. Novel touch-based interaction techniques are needed to extend the touch input capabilities and enable multiple fingers and even parts of the hand to perform input similar to traditional input devices.

This thesis examines how individual fingers and other parts of the hand can be recognized and used for touch input. We refer to this concept as hand-and-finger-awareness for mobile touch interaction. By identifying the source of input, different functions and action modifiers can be assigned to individual fingers and parts of the hand. We show that this concept increases the touch input capabilities and solves a number of touch input challenges.
In addition, by applying the concept of hand-and-finger-awareness to input on the whole device surface, previously unused fingers on the back are now able to perform input and augment touches on the front side. This further addresses well-known challenges in touch interaction and provides a wide range of possibilities to realize shortcuts. We present twelve user studies which focus on the design aspects, technical feasibility, and usability of hand-and-finger-awareness for mobile touch interaction. In a first step, we investigate hand ergonomics and behavior during smartphone use to inform the design of novel interaction techniques. Afterward, we examine the feasibility of applying deep learning techniques to identify individual fingers and other hand parts based on the raw data of a single capacitive touchscreen as well as of a fully touch sensitive mobile device. Based on these findings, we present a series of studies which focus on bringing shortcuts from hardware keyboards to a fully touch sensitive device to improve mobile text editing. Thereby, we follow a user-centered design process adapted for the application of deep learning. The contribution of this thesis ranges from insights on the use of different fingers and parts of the hand for interaction, through technical contributions for the identification of the touch source using deep learning, to solutions for addressing limitations of mobile touch input.

Acknowledgements

Over the past three years, I had one of the best times of my life working together with a number of amazing colleagues and friends who inspired me a lot. Without their support, this work would never have been possible. First and foremost, I would like to thank my supervisor Niels Henze, who inspired my work and always supported me in the best possible ways to achieve my goals. Without his support, I would never have come this far. I further thank my committee, Antti Oulasvirta, Michael Sedlmair, and Stefan Wagner, for the great and inspiring discussions. Discussions with Syn Schmitt in the SimTech milestone presentation, and with a number of student peers and mentors in doctoral consortia at international conferences, further shaped my thesis. I would also like to thank Albrecht Schmidt for all his great support, which even goes beyond research. Moreover, I thank Andreas Bulling for the opportunity to stay another five months to finalize my thesis. Before my time as a PhD student, I had the great honor to meet a number of awesome people who introduced me to the world of Human-Computer Interaction research. I thank Alireza Sahami Shirazi for his outstanding supervision during my bachelor's thesis. His inspiration and recommendations
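The abstract above describes identifying which finger or part of the hand produced a touch from raw capacitive sensor data using deep learning. The following minimal sketch shows one way such a classifier could be set up in PyTorch; the frame size, label set, and network layers are assumptions chosen for illustration, not the models trained in the thesis.

```python
# Hypothetical sketch: a small CNN that classifies which finger or hand part
# produced a touch, given a low-resolution capacitive frame.
# The frame size (28x15) and the label set are illustrative assumptions,
# not the data format or architecture described in the thesis.
import torch
import torch.nn as nn

LABELS = ["thumb", "index", "middle", "ring", "little", "palm"]  # assumed classes

class TouchSourceNet(nn.Module):
    def __init__(self, num_classes: int = len(LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of capacitive frames with shape (N, 1, 28, 15)
        return self.classifier(self.features(x).flatten(1))

model = TouchSourceNet()
frame = torch.rand(1, 1, 28, 15)       # a single synthetic capacitive frame
probs = model(frame).softmax(dim=-1)   # per-class probabilities (untrained here)
print(LABELS[int(probs.argmax())])
```

In practice such a model would be trained on labeled capacitive frames before its predictions mean anything; the sketch only shows the overall shape of the classification setup.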
Recommended publications
  • Annual Report
    Mitsubishi Electric Research Laboratories (MERL) Annual Report, July 2007 through June 2008. TR2008-00 (published July 2009). Welcome to Mitsubishi Electric Research Laboratories (MERL), the North American corporate R&D arm of Mitsubishi Electric Corporation. In this report, you will find descriptions of MERL and our projects. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for another purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories. All rights reserved. Copyright © Mitsubishi Electric Research Laboratories, 2009. 201 Broadway, Cambridge, Massachusetts 02139, 617.621.7500. Production: Karen Dickie, Janet O'Halloran, Janet McAndless, Richard C. Waters. Table of contents: Mitsubishi Electric Research Laboratories; Awards and Commendations; Technical Staff; Publications.
  • Thesis Explores How to Bridge the Gap Between the Power Provided by Current Desktop Computer Interfaces and the Fluid Use of Whiteboards and Pin-Boards
    FLUID INTERACTION FOR HIGH RESOLUTION WALL-SIZE DISPLAYS. A dissertation submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. François Guimbretière, January 2002. © 2002 by François Guimbretière. All Rights Reserved. I certify that I have read this dissertation and that in my opinion it is fully adequate, in scope and quality, as a dissertation for the degree of Doctor of Philosophy: Terry Winograd (Principal Advisor), Pat Hanrahan, David Kelley, Maureen Stone (StoneSoup Consulting). Approved for the University Committee on Graduate Studies.
    Abstract: As computers become more ubiquitous, direct interaction with wall-size, high-resolution displays will become commonplace. The familiar desktop computer interface is ill-suited to the affordances of these screens, such as size, and capacity for using pen or finger as primary input device. Current Graphical User Interfaces (GUIs) do not take into account the cost of reaching for a far-away menu bar, for example, and they rely heavily on the keyboard for rapid interactions.
  • Five Cs for the Future
    The RSS Feed: Five Cs for the future. Robert Sovereign-Smith, Executive Editor, [email protected].
    When you try and take a loan, a bank or a lender looks at five Cs for you – Character, Capacity, Collateral, Capital and Conditions. Character is your reputation – do you have a history of bad credit? Capacity is your ability to repay – a big enough salary, existing debts, etc. Collateral is what you're putting up to minimise the risk to the lender – property for example. Capital is similar, in the sense it refers to how much you have, and are risking from your own pocket. Finally, Conditions define the loan amount and the interest rate, based on your age – how close or far away from retirement you are… No, I'm not going to give you a finance lesson this month, especially since I consider myself to be pretty terrible on the subject.
    …music, you're Consuming content. This is what devices today are best at achieving – your PC and your phone are equally capable of satiating this need of yours. Communication is what is responsible for Facebook, cell phones, emailing, IM-ing, video conferencing, etc. Regardless of whether we communicate using PCs, phones or tablets, all form factors of devices of today seem to excel at Communication. Creation is the real chink in technology armour, and has always been. There's always some device that's better at creating certain types of content than another. Your phone can click a photo like a dedicated camera can, make a call as clear as a landline, video chat like on
  • Using Multi-User Multi-Touch Tables to Visualize GIS Data
    MITSUBISHI ELECTRIC RESEARCH LABORATORIES, Cambridge, Massachusetts, USA. Using Multi-User Multi-Touch Tables to Visualize GIS Data. Alan Esenther, Kathy Ryall (Technology Lab, Mitsubishi Electric Research Labs, Cambridge, MA, USA); Masakazu Furuichi (Modeling & Simulation, Information Systems Dept., Mitsubishi Electric Corp., Kamakura, Japan). Petroleum User Group 2007.
    DiamondTouch (DT): a novel technology platform for overcoming limitations of the mouse:
    – Multi-user: supports multiple simultaneous users, up to four at a time.
    – Multi-touch: each user can touch with multiple fingers at the same time, enabling rich gestures (unlike mouse/stylus).
    – Toucher identification: it does not just allow multiple touchers, it distinguishes them and identifies which user is touching each point (unlike tables based on cameras, IR, or pressure).
    – Direct input: input and display spaces are superimposed, allowing the simplicity of directly touching and interacting with content.
    DiamondTouch hardware overview. Basic components: the table (touch surface) is a transmitter array, and the chairs (or a conductive pad) are receivers; a user capacitively couples a signal between the table and the receiver. Supports seated or standing interaction; stylus-based interaction is also possible. Transmitter signals are interpolated to 2736x2048 input resolution. DT is most often used as a horizontal tabletop display with front projection.
  • Technical Program
    Technical Program, IS&T/SPIE 18th Annual Symposium, 15–19 January 2006, San Jose Marriott and San Jose Convention Center, San Jose, California USA. Sponsored by: electronicimaging.org • Tel: +1 703 642 9090 • [email protected]. Conferences • Continuing Education • Technical Exhibition • Demonstrations.
    Welcome. We are all too busy. If your jobs are like ours, then you are expected to produce more innovation, more products, more research, more students, more money, but all with less time. So how does participating in EI 2006 fit into this picture? Well, here is the straight answer as to why the Electronic Imaging Symposium is a great investment of your time and resources.
    • Defines the cutting edge in imaging research - The EI symposium, and its associated publication the Journal of Electronic Imaging, have been defining the envelope of high impact digital imaging research since their inception. Because of its leading role, EI is where the innovation leaders in imaging research go to find out about the newest imaging systems, methods, instrumentation, and algorithms.
    • Provides broad coverage with tremendous technical depth - The unique structure of EI leads to a rich marketplace of ideas where a broad range of topics are discussed in great technical depth. Each year, the EI symposium starts new conferences on important new topics and reinvigorates highly respected existing conferences through the incorporation of new research and participants.
    • Promotes an atmosphere for professional networking - Because of its organization into primarily single-tracked conferences, the symposium has a very personal and friendly atmosphere that gives technical experts and researchers an opportunity to learn what they need and also to efficiently network on a professional level.
  • Multimodal Interaction with Mobile Devices: Fusing a Broad Spectrum of Modality Combinations
    Multimodal Interaction with Mobile Devices: Fusing a Broad Spectrum of Modality Combinations. A dissertation submitted for the degree of Doktor der Ingenieurwissenschaften (Dr.-Ing.) to the Faculty of Natural Sciences and Technology I of Universität des Saarlandes, presented by Rainer Wasinger, Saarbrücken, 20 November 2006. Date of the colloquium: Monday, 20 November 2006. Dean and chair: Prof. Dr. Thorsten Herfet. Reviewers: 1. Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster, 2. Prof. Dr. Elisabeth André. Academic assessor: Dr. Dominik Heckmann.
    Declaration in lieu of oath: I hereby declare that I produced this work without unauthorized assistance from third parties and without the use of aids other than those stated. Data and concepts taken from other sources, directly or indirectly, are marked with a reference to their source. This work has not previously been submitted in the same or a similar form in any other examination procedure, either in Germany or abroad. Saarbrücken, 20 November 2006.
    Acknowledgements: Most of all, I would like to thank Professor Wahlster for providing the funds, the time, and the environment for any and all of this to have been possible. I also thank you for permitting me to work on such an interesting topic at the DFKI. I value your advice, which is always relevant, and your ability to see the interesting aspect of practically any research field. I also thank Elisabeth André for being the second corrector of this work, and the German Federal Ministry for Education and Research (BMBF), which has funded this work under the contract number 01 IN C02 as part of the projects COLLATE and COLLATE II. I would like to dearly thank my wife Emma and my son Xavier for making everything worthwhile.
  • Grasping Gestures: Gesturing with Physical Artifacts
    Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2011), 25, 255–271. © Cambridge University Press 2011. 0890-0604/11 $25.00. doi:10.1017/S0890060411000072. Grasping gestures: Gesturing with physical artifacts. Elise van den Hoven (User-Centered Engineering Group, Industrial Design Department, Eindhoven University of Technology, Eindhoven, The Netherlands) and Ali Mazalek (Synaesthetic Media Lab, Digital Media Program, Georgia Institute of Technology, Atlanta, Georgia, USA). (Received July 19, 2010; accepted October 6, 2010.)
    Abstract: Gestures play an important role in communication. They support the listener, who is trying to understand the speaker. However, they also support the speaker by facilitating the conceptualization and verbalization of messages and reducing cognitive load. Gestures thus play an important role in collaboration and also in problem-solving tasks. In human–computer interaction, gestures are also used to facilitate communication with digital applications, because their expressive nature can enable less constraining and more intuitive digital interactions than conventional user interfaces. Although gesture research in the social sciences typically considers empty-handed gestures, digital gesture interactions often make use of handheld objects or touch surfaces to capture gestures that would be difficult to track in free space. In most cases, the physical objects used to make these gestures serve primarily as a means of sensing or input. In contrast, tangible interaction makes use of physical objects as embodiments of digital information. The physical objects in a tangible interface thus serve as representations as well as controls for the digital information they are associated with. Building on this concept, gesture interaction has the potential to make use of the physical properties of hand-held objects to enhance or change the functionality of the gestures made.
  • User Interaction in Hybrid Multi-Display Environments
    User Interaction in Hybrid Multi-Display Environments. Hrvoje Benko. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences, Columbia University, 2007. © 2007 Hrvoje Benko. All Rights Reserved.
    Abstract: This dissertation presents the design, implementation, and evaluation of novel pointer-based and gesture-based interaction techniques for multi-display environments (MDEs). These techniques transcend the constraints of a single display, allowing users to combine multiple displays and interaction devices in order to benefit from the advantages of each. First, we introduce a set of pointer warping techniques, which improve existing mouse pointer interaction in an MDE by allowing users to instantaneously relocate the cursor to an adjacent display, instead of traversing the bezel. Formal evaluations show significant improvements in user performance when compared to standard mouse behavior. Next, we focus on a particular kind of MDE, in which 3D head-worn augmented reality displays are combined with handheld and stationary displays to form a hybrid MDE. A key benefit of hybrid MDEs is that they integrate and utilize the 3D space in which the 2D displays are embedded, creating a seamless visualization environment. We describe the development of a complex hybrid MDE system, called Visual Interaction Tool for Archaeology (VITA), which allows for collaborative off-site analysis of archaeological excavation data by distributing the presentation of data among several head-worn, handheld, projected tabletop, and large high-resolution displays. Finally, inspired by the lack of freehand interactions for hybrid MDEs, we designed gestural techniques that address the important issues of interacting across and within 2D and 3D displays.
  • Supporting Medical Conversations Between Deaf and Hearing Individuals with Tabletop Displays, Anne Marie Piper and James D. Hollan
    Supporting Medical Conversations between Deaf and Hearing Individuals with Tabletop Displays. Anne Marie Piper and James D. Hollan, Distributed Cognition and Human-Computer Interaction Laboratory, Department of Cognitive Science, University of California, San Diego, 9500 Gilman Dr., La Jolla, CA 92093, {apiper, hollan}@ucsd.edu.
    ABSTRACT: This paper describes the design and evaluation of Shared Speech Interface (SSI), an application for an interactive multitouch tabletop display designed to facilitate medical conversations between a deaf patient and a hearing, non-signing physician. We employ a participatory design process involving members of the deaf community as well as medical and communication experts. We report results from an evaluation that compares conversation when facilitated by: (1) a digital table, (2) a human sign language interpreter, and (3) both a digital table and an interpreter. Our research reveals that tabletop displays have valuable properties for facilitating discussion between deaf and hearing individuals as well as enhancing privacy and independence.
    …language interpreter to facilitate communication between a deaf patient and hearing doctor. Deaf patients must plan ahead to ensure that an interpreter is available. For most patients, privacy is a central concern. Some deaf patients, depending on their comfort level with interpreters, prefer and actually use an alternate communication channel (e.g., email or instant messenger) to discuss sensitive medical issues with their physician. Increasing communication privacy is one motivation behind the work reported here. While other viable communication tools for the deaf community exist, tabletop displays with speech recognition have potential to facilitate medical conversations between deaf and hearing individuals. Consultations with physicians often involve discussion of visuals such as medical records,
  • DiamondTouch: A Multi-User Touch Technology
    MITSUBISHI ELECTRIC RESEARCH LABORATORIES, http://www.merl.com. DiamondTouch: A Multi-User Touch Technology. Paul Dietz and Darren Leigh. TR2003-125, October 2003.
    Abstract: A technique for creating touch sensitive surfaces is proposed which allows multiple, simultaneous users to interact in an intuitive fashion. Touch location information is determined independently for each user, allowing each touch on the common surface to be associated with a particular user. The surface generates location dependent electric fields, which are capacitively coupled through the users to receivers installed in the work environment. We describe the design of these systems and their applications. Finally, we conclude with a description of the results we have obtained with a first generation prototype. (A toy sketch of this per-user localization principle follows this list.)
    This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright © Mitsubishi Electric Research Laboratories, Inc., 2003. 201 Broadway, Cambridge, Massachusetts 02139. Published in Proceedings of UIST 2001, the 14th Annual ACM Symposium on User Interface Software and Technology, November 11-14, 2001, Orlando, Florida USA, pages 219-226.
  • DiamondTouch SDK: Support for Multi-User, Multi-Touch Applications
    MITSUBISHI ELECTRIC RESEARCH LABORATORIES, http://www.merl.com. DiamondTouch SDK: Support for Multi-User, Multi-Touch Applications. Alan Esenther, Cliff Forlines, Kathy Ryall, Sam Shipman. TR2002-48, November 2002.
    Abstract: DiamondTouch is a multi-touch input technology that supports multiple, simultaneous users; it can distinguish who is touching where. We present the DiamondTouch SDK; it provides support for the development of applications that utilize DiamondTouch's capabilities to implement computer-supported collaboration and rich input modalities (such as gestures). Our first demo illustrates the basic utilities and functionality of our system. Our second demo, a multi-user map application, highlights DiamondTouch's ability to support input from multiple, simultaneous users and exploits DiamondTouch's ability to identify the owner of each touch. Our third demo illustrates DiamondTouch's ability to run with existing applications by providing a mouse emulation mode. DiamondTouch is well-suited to shared-display applications. It is suitable for front-projected video of the computer display, which facilitates direct manipulation of user interface elements and provides a shared focus of attention for collaborating users. Possible applications include command-and-control command posts, control rooms, business or technical meetings, and a variety of casual applications in the home, at schools, and in retail settings. CSCW 2002 Demo.
    This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice.
  • Comprehensive Vulnerability Analysis of AT Commands Within the Android Ecosystem, Dave (Jing) Tian, Grant Hernandez, Joseph I. Choi
    ATtention Spanned: Comprehensive Vulnerability Analysis of AT Commands Within the Android Ecosystem. Dave (Jing) Tian, Grant Hernandez, Joseph I. Choi, Vanessa Frost, Christie Ruales, Patrick Traynor, Hayawardh Vijayakumar, Lee Harrison, Amir Rahmati, Michael Grace, and Kevin R. B. Butler. Florida Institute for Cybersecurity (FICS) Research, University of Florida, Gainesville, FL, USA, {daveti,grant.hernandez,choijoseph007,vfrost,cruales,traynor,butler}@ufl.edu; Samsung Research America, Mountain View, CA, USA, {h.vijayakuma,lee.harrison,amir.rahmati,[email protected]; Department of Computer Science, Stony Brook University, Stony Brook, NY, USA.
    Abstract: AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB
    …have also been used by smartphone manufacturers and operating system designers to access and control device functionality in proprietary ways. For example, AT commands on Sony Ericsson smartphones can be used to access GPS accessories and the camera flash [18]. While previous research (e.g., [20, 46, 47]) has demonstrated that these commands can expose actions potentially causing security vulnerabilities, these analyses have been ad-hoc and narrowly focused on specific smartphone vendors. To date, there has been no systematic study of the types of AT commands available on modern smartphone platforms, nor the functionality they enable. (A minimal example of issuing an AT command over a serial link follows this list.)
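The DiamondTouch entries above describe a table whose row and column antennas emit location dependent signals that couple through each user's body to a per-user receiver, so a touch can be both located and attributed to a user. The toy sketch below illustrates only that per-user localization idea; the array sizes, signal values, and threshold are invented for the example, and this is not MERL's implementation or SDK.

```python
# Toy illustration of DiamondTouch-style per-user touch localization.
# For each user's receiver we assume per-antenna coupling strengths for the
# table's row and column antennas; the touched cell is approximated by the
# strongest row and column signal. All values below are invented.
from typing import Dict, List, Optional, Tuple

def locate_touch(row_signals: List[float], col_signals: List[float],
                 threshold: float = 0.5) -> Optional[Tuple[int, int]]:
    """Return (row, col) of the strongest coupling, or None if no touch."""
    best_row = max(range(len(row_signals)), key=lambda i: row_signals[i])
    best_col = max(range(len(col_signals)), key=lambda j: col_signals[j])
    if row_signals[best_row] < threshold or col_signals[best_col] < threshold:
        return None  # signal too weak: this user is not touching the table
    return best_row, best_col

# Invented example: two users, 4 row antennas x 5 column antennas.
readings: Dict[str, Tuple[List[float], List[float]]] = {
    "user_a": ([0.1, 0.9, 0.2, 0.1], [0.1, 0.1, 0.8, 0.2, 0.1]),  # touching (1, 2)
    "user_b": ([0.0, 0.1, 0.1, 0.0], [0.1, 0.0, 0.1, 0.1, 0.0]),  # not touching
}
for user, (rows, cols) in readings.items():
    print(user, locate_touch(rows, cols))
```

Because each user has their own receiver, attributing a touch to a person falls out of the signal routing itself rather than from any image processing, which is the property the excerpts above emphasize.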
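The AT command study above probes Android phones over their USB interface. The sketch below shows the general shape of issuing a single AT command over a serial modem link with pyserial; the port name, baud rate, and the assumption that the phone exposes such an interface at all are illustrative, and this is not the authors' test harness.

```python
# Minimal sketch: send one AT command to a device that exposes a serial modem
# interface and print the reply. The port name and baud rate are assumptions;
# many phones only expose such an interface in special USB configurations.
import serial  # pip install pyserial

PORT = "/dev/ttyACM0"   # hypothetical USB modem interface of the phone
BAUD = 115200

with serial.Serial(PORT, BAUD, timeout=2) as modem:
    modem.write(b"AT\r")             # basic liveness check; a modem replies "OK"
    reply = modem.read(64)           # read up to 64 bytes of the response
    print(reply.decode(errors="replace").strip())
```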