Predicting and Reducing the Impact of Errors in Character-Based Text Entry
16 pages, PDF, 1,020 KB
Recommended publications
UC Berkeley Electronic Theses and Dissertations
Title: Perceptual and Context Aware Interfaces on Mobile Devices. Author: Jingtao Wang. Publication date: 2010. Permalink: https://escholarship.org/uc/item/7tg54232. Peer reviewed | Thesis/dissertation. eScholarship.org, powered by the California Digital Library, University of California.

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor John F. Canny (Chair), Professor Maneesh Agrawala, Professor Ray R. Larson. Spring 2010. Copyright 2010 by Jingtao Wang.

Abstract: With an estimated 4.6 billion units in use, mobile phones have already become the most popular computing device in human history. Their portability and communication capabilities may revolutionize how people do their daily work and interact with other people in ways PCs have done during the past 30 years. Despite decades of experience in creating modern WIMP (windows, icons, mouse, pointer) interfaces, our knowledge of building effective mobile interfaces is still limited, especially for emerging interaction modalities that are only available on mobile devices. This dissertation explores how emerging sensors on a mobile phone, such as the built-in camera, the microphone, the touch sensor and the GPS module, can be leveraged to make everyday interactions easier and more efficient.
PreSense: Interaction Techniques for Finger Sensing Input Devices
PreSense: Interaction Techniques for Finger Sensing Input Devices. Jun Rekimoto, Takaaki Ishizawa, Carsten Schwesig, Haruo Oba. Interaction Laboratory, Sony Computer Science Laboratories, Inc., 3-14-13 Higashigotanda, Shinagawa-ku, Tokyo 141-0022, Japan (Phone: +81 3 5448 4380, Fax: +81 3 5448 4273; {rekimoto, oba}@csl.sony.co.jp; http://www.csl.sony.co.jp/person/rekimoto.html), and Graduate School of Media and Governance, Keio University, Shonan Fujisawa Campus, 5322 Endo, Fujisawa 252-8520, Japan.

Figure 1: PreSense Keypad senses finger contact (proximity) and position before the key press.

ABSTRACT: Although graphical user interfaces started as imitations of the physical world, many interaction techniques have since been invented that are not available in the real world. This paper focuses on one of these, "previewing", and how a sensory-enhanced input device called the "PreSense Keypad" can provide a preview for users before they actually execute the commands. Preview is important in the real world because it is often not possible to undo an action. This previewable feature helps users to see what will occur next. It is also helpful when the command assignment of the keypad dynamically changes, such as for universal commanders. We present several interaction techniques based on this input device, including menu and map browsing systems and a text input system. We also discuss finger gesture recognition for the PreSense Keypad.

KEYWORDS: input devices, previewable user interfaces, ...

Body excerpt: "... touch sensor, finger motions on the keypad are recognized by integrating each keytop's information. This sensor and keytop combination makes it possible to recognize 'which button is about to be pressed', and the system can provide appropriate information to users."
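The core idea in this excerpt, reacting to finger proximity before the actual key press, can be sketched as a two-stage event handler: a touch/hover event that only previews the bound command, and a press event that executes it. The sketch below is an illustrative assumption, not code from the paper; the class, method names, and command map are hypothetical.

```python
# Minimal sketch of a "previewable" keypad, assuming two sensing stages per key:
# a touch/proximity event before the press, and the press itself.

class PreviewableKeypad:
    def __init__(self, command_map):
        # command_map: key label -> command name. It may change dynamically,
        # e.g. for a universal commander whose assignments are re-mapped.
        self.command_map = command_map

    def on_touch(self, key):
        """Finger rests on (or hovers over) a key: show a preview only."""
        command = self.command_map.get(key, "<unassigned>")
        print(f"preview: pressing '{key}' would execute {command}")

    def on_press(self, key):
        """Key is actually pressed: only now is the command executed."""
        command = self.command_map.get(key)
        if command is not None:
            print(f"execute: {command}")


keypad = PreviewableKeypad({"1": "open menu", "2": "zoom map", "3": "insert 'a'"})
keypad.on_touch("2")   # user sees what "2" will do before committing
keypad.on_press("2")   # the command runs only on the actual press
```

The point of the two stages is that the preview is side-effect free, which matters when an action cannot be undone or when key assignments change at runtime.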
Using Action-Level Metrics to Report the Performance of Multi-Step Keyboards
Using Action-Level Metrics to Report the Performance of Multi-Step Keyboards. Gulnar Rakhmetulla and Ahmed Sabbir Arif (Human-Computer Interaction Group, University of California, Merced), Steven J. Castellucci (Independent Researcher, Toronto), I. Scott MacKenzie (Electrical Engineering and Computer Science, York University), Caitlyn E. Seim (Mechanical Engineering, Stanford University).

Table 1: Conventional error rate (ER) and action-level unit error rate (UnitER) for a constructive method (Morse code). This table illustrates the phenomenon of using Morse code to enter "quickly" with one character error ("l") in each attempt. ER is 14.28% for each attempt. One of our proposed action-level metrics, UnitER, gives a deeper insight by accounting for the entered input sequence. It yields an error rate of 7.14% for the first attempt (with two erroneous dashes), and an improved error rate of 3.57% for the second attempt (with only one erroneous dash). The action-level metric shows that learning has occurred with a minor improvement in error rate, but this phenomenon is omitted in the ER metric, which is the same for both attempts.

ABSTRACT: Computer users commonly use multi-step text entry methods on handheld devices and as alternative input methods for accessibility. These methods are also commonly used to enter text in languages with large alphabets or complex writing systems. These methods require performing multiple actions simultaneously (chorded) or in a specific sequence (constructive) to produce input. However, evaluating these methods is difficult since traditional performance metrics were designed explicitly for non-ambiguous, uni-step methods (e.g., QWERTY). They fail to capture the actual performance of ...
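The arithmetic implied by the table caption can be reproduced in a few lines. The sketch below is an interpretation, not the paper's implementation: it treats UnitER as the number of erroneous Morse units (dots/dashes) divided by the total number of units in the attempt, and the per-attempt total of 28 units is inferred from the quoted percentages rather than stated in the excerpt.

```python
# Character-level ER vs. an action-level unit error rate for the Morse example
# above. The unit totals are assumptions chosen to match the reported numbers.

def error_rate(char_errors: int, total_chars: int) -> float:
    """Conventional error rate: wrong characters / transcribed characters."""
    return char_errors / total_chars

def unit_error_rate(unit_errors: int, total_units: int) -> float:
    """Unit error rate: wrong Morse units (dots/dashes) / total units entered."""
    return unit_errors / total_units


word = "quickly"  # 7 characters, one character error ("l") in each attempt
print(f"ER, both attempts: {error_rate(1, len(word)):.2%}")   # ~14.28% as reported

TOTAL_UNITS = 28  # assumed per-attempt unit count implied by the quoted percentages
print(f"UnitER, attempt 1: {unit_error_rate(2, TOTAL_UNITS):.2%}")  # 7.14%
print(f"UnitER, attempt 2: {unit_error_rate(1, TOTAL_UNITS):.2%}")  # 3.57%
```

The contrast is the caption's point: ER is identical across both attempts, while the unit-level measure halves when one of the two erroneous dashes is eliminated, making the learning effect visible.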
Neue Formen der Interaktion mit Mobilen Geräten (New Forms of Interaction with Mobile Devices)
Albrecht Schmidt, Paul Holleis, Matthias Kranz, Andreas Butz (eds.): Neue Formen der Interaktion mit Mobilen Geräten (New Forms of Interaction with Mobile Devices) - An Overview of Research in Human-Computer Interaction. Hauptseminar Medieninformatik SS 2005. Technical Report LMU-MI-2005-2, Nov. 2005, ISSN 1862-5207. University of Munich, Department of Computer Science, Media Informatics Group.

Foreword (translated from German): Over the last ten years, user interfaces have undergone massive changes. With this report on new forms of interaction with mobile devices, we want to make current developments and research trends in human-computer interaction, with a focus on mobile devices, accessible to an interested technical audience in compact form. To this end, students and staff of the Media Informatics Group at Ludwig-Maximilians-Universität München analyze contributions and publications from numerous workshops, conferences, and scientific journals. This series of texts was written by students in connection with a Hauptseminar (advanced seminar) in the summer semester of 2005. Each chapter takes up one topic and presents the key research trends in that area in short articles in German. As part of the seminar, the students gave talks on their respective topics that summarized the essential content. The slides for these talks and further information about the course are available at http://www.hcilab.org/events/mobileinteraction/. This report is aimed primarily at computer scientists, media informatics specialists, user interface designers, and students of computer science and media informatics. Considering the trend of equipping everyday objects (clothing, coffee machines, etc.) with technology and thereby turning them into user interfaces, the topic gains a broader significance.
Twiddler Typing: One-Handed Chording Text Entry for Mobile Phones
Twiddler Typing: One-Handed Chording Text Entry for Mobile Phones. Kent Lyons, Thad Starner, Daniel Plaisted, James Fusia, Amanda Lyons, Aaron Drew, E. W. Looney. College of Computing and GVU Center, Georgia Institute of Technology, Atlanta, GA 30332-0280, USA.

ABSTRACT: An experienced user of the Twiddler, a one-handed chording keyboard, averages speeds of 60 words per minute with letter-by-letter typing of standard test phrases. This fast typing rate, coupled with the Twiddler's 3x4 button design, similar to that of a standard mobile telephone, makes it a potential alternative to multi-tap for text entry on mobile phones. Despite this similarity, there is very little data on the Twiddler's performance and learnability. We present a longitudinal study of novice users' learning rates on the Twiddler. Ten participants typed for 20 sessions using two different methods. Each session is composed of 20 minutes of typing with multi-tap and 20 minutes of one-handed chording on the Twiddler.

Body excerpt: "... markets for wireless email, which is desired by 81% of consumers surveyed [3]. Wireless email is predicted to drive the next stage of the industry's European data revenues [4]. Increased text entry speeds may also help unexpected segments of the user population. For example, the Deaf population has adopted wireless texting as a convenient means of communication within the community. In this paper, we demonstrate a chording method of text entry on a 3x4 button keypad where expert rates average 60 wpm on letter-by-letter entry. We show that chording novices average over 26 wpm after 400 minutes of practice and have rates comparable to their multi-tap rates within 80 minutes."
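For readers unfamiliar with the multi-tap baseline this excerpt compares against, the sketch below shows how letter-by-letter multi-tap entry on a standard 12-key phone keypad resolves repeated presses of one key. It is a generic illustration of the technique, not the study software, and it ignores timeout handling between letters on the same key.

```python
# Generic multi-tap decoding on a 12-key phone keypad: pressing a key
# repeatedly (within a timeout) cycles through the letters printed on it.

MULTITAP = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def decode_multitap(key: str, presses: int) -> str:
    """Return the letter selected by pressing `key` `presses` times."""
    letters = MULTITAP[key]
    return letters[(presses - 1) % len(letters)]


# "cab" needs 3 + 1 + 2 = 6 presses of the "2" key, with a pause between letters:
word = [("2", 3), ("2", 1), ("2", 2)]
print("".join(decode_multitap(k, n) for k, n in word))  # -> "cab"
```

Chording replaces this repeated-press cycling with a single simultaneous press of several keys per character, which is why expert rates can be much higher on the same 3x4 key count.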
Senorita: a Chorded Keyboard for Sighted, Low Vision, and Blind
Senorita: A Chorded Keyboard for Sighted, Low Vision, and Blind Mobile Users. Gulnar Rakhmetulla and Ahmed Sabbir Arif. Human-Computer Interaction Group, University of California, Merced, Merced, CA, United States.

Figure 1: With Senorita, the left four keys are tapped with the left thumb and the right four keys are tapped with the right thumb. Tapping a key enters the letter in the top label. Tapping two keys simultaneously (chording) enters the common letter between the keys in the bottom label. The chord keys are arranged on opposite sides. Novices can tap a key to see the chords available for that key. This figure illustrates a novice user typing the word "we". She scans the keyboard from the left and finds 'w' on the 'I' key. She taps on it with her left thumb to see all chords for that key on the other side, from the edge: 'C', 'F', 'W', and 'X' (the same letters in the bottom label of the 'I' key). She taps on 'W' with her right thumb to form a chord to enter the letter. The next letter, 'e', is one of the most frequent letters in English and thus has a dedicated key. She taps on it with her left thumb to complete the word.

ABSTRACT: Senorita is a novel two-thumb virtual chorded keyboard for mobile devices. It arranges the letters on eight keys in a single ...

INTRODUCTION: Virtual Qwerty augmented with linguistic and behavioral models has become the dominant input method for mobile devices.
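The caption specifies the interaction fully enough to sketch the resolution logic as a lookup: a single tap enters a key's top-label letter, and a simultaneous two-key tap enters the letter the pair shares in its bottom labels. The sketch below is a toy under stated assumptions: only the mappings needed for the "we" example are filled in, and the right-side key names (R1-R4) are invented for illustration since the full layout is not reproduced in the excerpt.

```python
# Sketch of Senorita-style chord resolution. A single tap enters the tapped
# key's top-label letter; a simultaneous two-key tap (chord) enters the letter
# the pair shares in its bottom labels. Mappings here are partial and partly
# hypothetical; they cover only the Figure 1 example.

SINGLE_TAP = {"E": "e", "I": "i"}        # partial: letters with dedicated keys

CHORDS = {                               # partial: ('I', right-side key) pairs
    frozenset({"I", "R1"}): "c",         # right-side keys named R1..R4 here,
    frozenset({"I", "R2"}): "f",         # counted from the edge, purely for
    frozenset({"I", "R3"}): "w",         # illustration
    frozenset({"I", "R4"}): "x",
}

def resolve(keys):
    """Map a set of simultaneously tapped keys to the entered letter."""
    keys = frozenset(keys)
    if len(keys) == 1:
        (key,) = keys
        return SINGLE_TAP.get(key, "?")
    return CHORDS.get(keys, "?")


# Typing "we" as in Figure 1: chord 'I' with the third right-side key, then tap 'E'.
print(resolve({"I", "R3"}) + resolve({"E"}))  # -> "we"
```

Placing the chord partners on the opposite side is what lets the two thumbs share the work, and the per-key chord preview described in the caption is what makes the lookup learnable for novices.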
SwipeRing: Gesture Typing on Smartwatches Using a Segmented QWERTY Around the Bezel
SwipeRing: Gesture Typing on Smartwatches Using a Segmented QWERTY Around the Bezel. Gulnar Rakhmetulla and Ahmed Sabbir Arif. Human-Computer Interaction Group, University of California, Merced.

Figure 1: SwipeRing arranges the standard QWERTY layout around the edge of a smartwatch in seven zones. To enter a word, the user connects the zones containing the target letters by drawing gestures on the screen, like gesture typing on a virtual QWERTY. A statistical decoder interprets the input and enters the most probable word. A suggestion bar appears to display other possible words. The user could stroke right or left on the suggestion bar to see additional suggestions. Tapping on a suggestion replaces the last entered word. One-letter and out-of-vocabulary words are entered by repeated strokes from/to the zones containing the target letters, in which case the keyboard first enters the two one-letter words in the English language (see the second-last image from the left), then the other letters in the sequence in which they appear in the zones (like multi-tap). Users could also repeatedly tap (instead of stroke) on the zones to enter the letters. The keyboard highlights the zones when the finger enters them and traces all finger movements. This figure illustrates the process of entering the phrase "the world is a stage" on the SwipeRing keyboard (upper sequence) and on a smartphone keyboard (bottom sequence). We can clearly see the resemblance of the gestures.

ABSTRACT: Most text entry techniques for smartwatches require repeated taps to enter one word, occupy most of the screen, or use layouts that are ...

Body excerpt: "... player or rejecting a phone call), checking notifications on incoming text messages and social media posts, and using them as fitness trackers to record daily physical activity."
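The decoding step described in the caption (a statistical decoder mapping a gesture through letter zones to the most probable word) can be approximated with a simple sketch: assign each letter to a zone, reduce each candidate word to its zone sequence, and rank the candidates whose sequence matches the gesture by corpus frequency. The seven-zone split of QWERTY, the tiny lexicon, and the frequencies below are assumptions for illustration; the paper's actual zone boundaries and decoder are not reproduced in the excerpt.

```python
# Toy decoder for a zone-based gesture keyboard: a gesture is reduced to the
# ordered list of zones it passes through (consecutive duplicates collapsed),
# and candidate words are ranked by frequency among those with the same
# zone signature. Zone assignment and word frequencies are assumptions.

ZONES = {0: "qwe", 1: "rty", 2: "uiop", 3: "asd", 4: "fgh", 5: "jkl", 6: "zxcvbnm"}
LETTER_TO_ZONE = {ch: z for z, letters in ZONES.items() for ch in letters}

LEXICON = {"the": 5e-3, "world": 4e-4, "is": 3e-3, "a": 6e-3, "stage": 2e-5}

def signature(word: str) -> tuple:
    """Zone sequence of a word, with consecutive repeats collapsed."""
    seq = []
    for ch in word.lower():
        zone = LETTER_TO_ZONE[ch]
        if not seq or seq[-1] != zone:
            seq.append(zone)
    return tuple(seq)

def decode(gesture_zones, top_n: int = 3) -> list:
    """Return the most frequent lexicon words matching the gesture's zones."""
    matches = [w for w in LEXICON if signature(w) == tuple(gesture_zones)]
    return sorted(matches, key=LEXICON.get, reverse=True)[:top_n]


print(decode(signature("the")))    # -> ['the']
print(decode((0, 2, 1, 5, 3)))     # zones swept for "world" -> ['world']
```

Because many words can share a zone signature on a coarse seven-zone ring, the ranked alternatives are exactly what the suggestion bar in the caption is for.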
Text Input Methods for Indian Languages (Sowmya Vajjala, International Institute of Information Technology)
Iowa State University, from the SelectedWorks of Sowmya Vajjala, 2011: Text Input Methods for Indian Languages. Sowmya Vajjala, International Institute of Information Technology. Available at: https://works.bepress.com/sowmya-vajjala/3/

Text Input Methods for Indian Languages. By Sowmya V.B. (200607014). A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science (by Research) in Computer Science & Engineering. Search and Information Extraction Lab, Language Technologies Research Center, International Institute of Information Technology, Hyderabad, India, September 2008. Copyright (c) 2008 Sowmya V.B. All rights reserved. Dedicated to all those people, living and dead, who are directly or indirectly responsible for the wonderful life that I am living now.

Certificate: It is certified that the work contained in this thesis, titled "Text input methods for Indian Languages" by Sowmya V.B. (200607014), submitted in partial fulfillment for the award of the degree of Master of Science (by Research) in Computer Science & Engineering, has been carried out under my supervision and is not submitted elsewhere for a degree. Advisor: Dr. Vasudeva Varma, Associate Professor, IIIT Hyderabad.

Acknowledgements (excerpt): I would like to first express my gratitude to my advisor Dr. Vasudeva Varma for being with me and believing in me throughout the duration of this thesis work. His regular suggestions have been greatly useful. I thank Mr. Prasad Pingali for his motivation and guidance during the initial phases of my thesis. I thank Mr. Bhupal Reddy for giving me the first lessons in my research. I entered IIIT as a novice to computer science in general and research in particular.
Investigating Retrospective Interoperability Between the Accessible and Mobile Webs with Regard to User Input
Investigating Retrospective Interoperability Between the Accessible and Mobile Webs with Regard to User Input. A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences, 2011. By Tianyi Chen, School of Computer Science.

Contents (excerpt): Abstract; Declaration; Copyright; Acknowledgements.
1 Introduction: 1.1 Problem Statement; 1.2 Research Questions; 1.3 Thesis Structure; 1.4 Publications.
2 Background and Related Works: 2.1 Web Accessibility; 2.2 Guidelines & Best Practices (2.2.1 Web Content Accessibility Guidelines; 2.2.2 User Agent Accessibility Guidelines & Authoring Tools Accessibility Guidelines; 2.2.3 Mobile Web Best Practices); 2.3 Mobile Web; 2.4 Overlapping Experiences between the Accessible and the Mobile Webs (2.4.1 Visual Impairment; 2.4.2 Hearing Impairment; 2.4.3 Physical Impairment; 2.4.4 Cognitive Impairment; 2.4.5 Older Users); 2.5 Input Problems Affecting Disabled Desktop Users and Mobile Web Users (2.5.1 Text Input; 2.5.2 Pointing (Target Acquisition)).
3 Small Device User Evaluation While Seated: 3.1 Methodology (3.1.1 Participants; 3.1.2 Apparatus and Venue; 3.1.3 Tasks; 3.1.4 Procedure); 3.2 Data Analysis; 3.3 Typing Task Results (3.3.1 Overall Performance; 3.3.2 Key Ambiguity Error; 3.3.3 Additional Key Error ...).
Clavier DUCK
Clavier DUCK : Utilisation d'un système de déduction de mots pour faciliter la saisie de texte sur écran tactile pour les non-voyants (DUCK keyboard: using a word-deduction system to facilitate text entry on touch screens for blind users). Philippe Roussille, Mathieu Raynal, Christophe Jouffrais. Université de Toulouse & CNRS; IRIT; F-31062 Toulouse, France.

Published in: 27ème conférence francophone sur l'Interaction Homme-Machine (IHM 2015), Oct 2015, Toulouse, France. ACM, pp. a19. DOI: 10.1145/2820619.2820638. HAL Id: hal-01218748, https://hal.archives-ouvertes.fr/hal-01218748, submitted on 21 Oct 2015.

Abstract (excerpt, translated from French): "The use of touch screens, and in particular of soft key[boards] ..." Body excerpt (translated): "... providing soft keyboards that are displayed on these touch surfaces."
Abstract There Has Been a Substantial Growth in Interest in Mobile Text Entry Over Recent Years, Among Both Researchers and Users
Rhodes University, Department of Computer Science. Computer Science Honours Literature Review. Principal Investigator: Edison Mukadah. Supervisor: Greg Foster. Co-Supervisor: Hannah Slay. Mobile Interface Design and Predictive Text Algorithm for Improved Text Entry.

Abstract: There has been substantial growth in interest in mobile text entry over recent years, among both researchers and users. Increasingly, mobile devices are being used for text-intensive applications, such as text messaging, creating a demand for more efficient and easier-to-use text entry methods. Unlike in desktop computing, no single standard mobile text entry method has emerged, and the diversity of mobile devices makes it unlikely that this will occur in the near future. Thus, mobile text entry remains a very open area of research, providing a fertile environment for the development of innovative text entry methods. A necessary part of developing a new mobile text entry method is comparing its performance with existing ones. A review of current best practice for the qualitative and empirical evaluation of mobile text entry methods is presented, alongside a classification of existing mobile text entry methods and the various ways to optimise text entry. A mobile interface that uses a scroll wheel will then be designed, together with a predictive text algorithm, by combining or improving the various techniques examined in this literature review. Empirical tests will then be conducted to test the hypothesis that the new interface has a faster text entry rate than the traditional multi-tap keypad.

1 Introduction. 1.1 The Popularity of Mobile Devices. The popularity of mobile computing devices has escalated astronomically over the past few years.