GRASP-SENSITIVE SURFACES Utilizing Grasp Information for Human-Computer Interaction


DISSERTATION at the Faculty of Mathematics, Computer Science and Statistics of Ludwig-Maximilians-Universität München, submitted by Diplom-Medieninformatiker RAPHAEL WIMMER, born on 26 December 1980 in Bad Tölz. München, 19 December 2014.

First examiner: Prof. Dr. Heinrich Hußmann, Ludwig-Maximilians-Universität München
Second examiner: Prof. Roderick Murray-Smith, PhD, University of Glasgow
Date of the oral examination: 27 March 2015

ABSTRACT

Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter.

This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction.

For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises capturing sensor values, identifying the associated grasp, and interpreting the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers often are not aware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention.

In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques:

HandSense uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors.
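To make the capture-identify-interpret workflow concrete, the following minimal Python sketch classifies a four-sensor capacitive reading against recorded grasp templates and maps the result to an action. The sensor layout, template values, grasp labels, and actions are hypothetical illustrations under the assumption of a HandSense-like setup; this is not the classifier actually used in the dissertation.

```python
# Minimal sketch of the capture -> identify -> interpret workflow described above,
# assuming a HandSense-like device with four capacitive readings per sample.
# Sensor layout, template values, and actions are made up for illustration.

import math

# Hypothetical grasp templates: mean normalized readings of the four sensors
# (e.g. left edge, right edge, top, back) recorded during a calibration phase.
GRASP_TEMPLATES = {
    "right_hand_hold": (0.9, 0.2, 0.1, 0.8),
    "left_hand_hold":  (0.2, 0.9, 0.1, 0.8),
    "two_hand_photo":  (0.7, 0.7, 0.6, 0.3),
    "on_table":        (0.0, 0.0, 0.0, 0.1),
}

def capture():
    """Capture: read the four capacitive sensors (stubbed with fixed values here)."""
    return (0.85, 0.25, 0.15, 0.75)

def identify(sample):
    """Identify: nearest-neighbor match of the sample against the grasp templates."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GRASP_TEMPLATES, key=lambda name: distance(sample, GRASP_TEMPLATES[name]))

def interpret(grasp):
    """Interpret: map the identified grasp to an application action (shortcut)."""
    actions = {
        "two_hand_photo": "open camera app",
        "right_hand_hold": "show home screen",
        "left_hand_hold": "show home screen",
        "on_table": "ignore input",
    }
    return actions.get(grasp, "do nothing")

if __name__ == "__main__":
    sample = capture()
    grasp = identify(sample)
    print(grasp, "->", interpret(grasp))
```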
FlyEye uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a relative calibration algorithm that allows determining the locations of groups of sensors whose arrangement is not known.

TDRtouch extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing. TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces.

I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects.

Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: Goal – what we want to do with the object; Relationship – what we know and feel about the object we want to grasp; Anatomy – hand shape and learned movement patterns; Setting – surrounding and environmental conditions; and Properties – texture, shape, weight, and other intrinsics of the object.

I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
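As a reading aid, the sketch below shows one way the five GRASP factors could be represented as an explicit context record next to raw grasp data, and how a classified grasp might be combined with other context before triggering an action. The field names and the example rule are assumptions made for illustration; they are not an API or policy defined by the GRASP model itself.

```python
# Illustrative sketch: the five GRASP factors (Goal, Relationship, Anatomy,
# Setting, Properties) as explicit context, combined with a classified grasp
# before acting on it. Field names and the example rule are assumptions.

from dataclasses import dataclass

@dataclass
class GraspContext:
    goal: str          # what the user wants to do with the object
    relationship: str  # what the user knows/feels about the object
    anatomy: str       # hand shape, handedness, learned movement patterns
    setting: str       # surrounding and environmental conditions
    properties: str    # texture, shape, weight, other intrinsics of the object

def should_trigger_camera(grasp_label: str, ctx: GraspContext, device_unlocked: bool) -> bool:
    """Combine the classified grasp with other usage context instead of acting on it alone."""
    # A "two_hand_photo" grasp by itself is ambiguous; only treat it as a
    # shortcut when the rest of the usage context is consistent with it.
    return (
        grasp_label == "two_hand_photo"
        and device_unlocked
        and ctx.setting != "in_pocket"
    )

if __name__ == "__main__":
    ctx = GraspContext(
        goal="take a photo",
        relationship="own phone, used daily",
        anatomy="right-handed adult",
        setting="standing outdoors",
        properties="rigid slab, 180 g",
    )
    print(should_trigger_camera("two_hand_photo", ctx, device_unlocked=True))
```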
ZUSAMMENFASSUNG

The ability to grasp objects with our hands allows us to manipulate them in many ways. Tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task. Digital tools, for example mobile phones or computer mice, also allow us to extend the capabilities of our brain and our sensory organs. These devices already contain sensors and processing units. But many other tools and objects can also be augmented with tiny, efficient sensors and processing units. This allows graspable objects to learn more about the user grasping them - and makes it possible to support the user in reaching their goal. For example, the way in which we hold a mobile phone might reveal whether we want to take a photo or call a friend - and thus serve as a shortcut to these actions. A power drill might detect whether the user is actually holding it securely and refuse to operate if this is not the case. And a computer mouse could distinguish between intentional and unintentional mouse movements and ignore the latter.

This dissertation gives an overview of grasp sensing for human-computer interaction, with a focus on technologies for implementing grasp-sensitive surfaces and on challenges in designing grasp-sensitive user interfaces. It comprises three primary contributions to the state of research: a comprehensive overview of previous research on human grasping and grasp sensing, a detailed description of three new prototyping tools for grasp-sensitive surfaces, and a framework for the analysis and design of grasp-based interaction (grasp interaction).

For nearly a century, scientists have studied human grasping. My overview of the state of research describes definitions, classifications, and models of human grasping. A few studies so far have investigated grasping in everyday situations. They found a considerably greater diversity of grasp patterns than existing taxonomies can describe. This diversity makes it difficult to associate a user's intention with particular grasp patterns. In order to structure related work and my own research results, I formalize a general workflow for grasp sensing. It consists of capturing sensor values, identifying the associated grasps, and interpreting the meaning of the grasp. In a comprehensive overview of related work, I show that implementing grasp-sensitive surfaces is still a challenging problem, that researchers are regularly unaware of related work in neighboring research fields, and that intuitive grasp interaction has so far received little attention.

To address the first problem, I developed three novel sensing techniques for grasp-sensitive surfaces. Each of these mitigates one or more weaknesses of traditional sensing techniques:

HandSense uses four strategically positioned capacitive sensors to recognize grasp patterns. Thanks to custom-developed, high-resolution sensors, it can detect the hand already as it approaches the object. Moreover, the object's surface does not have to be completely covered with sensors. User tests yielded a recognition rate comparable to that of a system with 72 binary sensors.

FlyEye uses optical fiber bundles connected to a camera to detect proximity and touch on arbitrarily shaped surfaces. It enables rapid prototyping of touch- and grasp-sensitive objects even for designers with limited electronics experience. For FlyEye I developed a relative-calibration algorithm that can be used to semi-automatically determine the arrangement of groups of sensors whose layout is unknown.

TDRtouch extends Time Domain Reflectometry (TDR), a technique usually employed for analyzing cable faults. TDRtouch makes it possible to locate touches along a wire.
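The core relationship behind locating a touch along a wire with TDR is the round-trip timing of a reflected pulse: a touch changes the wire's local impedance, part of the injected pulse is reflected, and the reflection delay gives the distance to the touch. The short sketch below is a generic illustration of that computation; the velocity factor and delay are made-up example values, not measurements or code from TDRtouch.

```python
# Generic TDR timing sketch: a touch changes the wire's local impedance and
# reflects part of an injected pulse; the round-trip delay of that reflection
# gives the distance to the touch. Velocity factor and delay are example values.

C = 299_792_458.0  # speed of light in vacuum, m/s

def touch_distance_m(round_trip_delay_s: float, velocity_factor: float = 0.66) -> float:
    """Distance from the cable end to the touch point.

    round_trip_delay_s: time between sending the pulse and receiving the
        reflection caused by the touch.
    velocity_factor: signal speed in the wire as a fraction of c (cable-dependent).
    """
    v = velocity_factor * C              # propagation speed in the wire
    return v * round_trip_delay_s / 2.0  # halve it: the pulse travels there and back

if __name__ == "__main__":
    # Example: a reflection arriving 5 ns after the pulse on a cable with
    # velocity factor 0.66 corresponds to a touch roughly 0.49 m along the wire.
    print(f"{touch_distance_m(5e-9):.2f} m")
```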