TAGSwipe: Touch Assisted Gaze Swipe for Text Entry

Chandan Kumar, University of Koblenz-Landau, Koblenz, Germany, [email protected]
Ramin Hedeshy, University of Koblenz-Landau, Koblenz, Germany, [email protected]
I. Scott MacKenzie, York University, Dept. of EE and Computer Science, Toronto, Canada, [email protected]
Steffen Staab, Universität Stuttgart, Stuttgart, Germany & University of Southampton, Southampton, UK, [email protected]

ABSTRACT

The conventional dwell-based methods for text entry by gaze are typically slow and uncomfortable. A swipe-based method that maps the gaze path into words offers an alternative. However, it requires the user to explicitly indicate the beginning and ending of a word, which is typically achieved by tedious gaze-only selection. This paper introduces TAGSwipe, a bi-modal method that combines the simplicity of touch with the speed of gaze for swiping through a word. The result is an efficient and comfortable dwell-free text entry method. In the lab study, TAGSwipe achieved an average text entry rate of 15.46 wpm and significantly outperformed conventional swipe-based and dwell-based methods in efficacy and user satisfaction.

Author Keywords

Eye typing; multimodal interaction; touch input; dwell-free typing; word-level text entry; swipe; eye tracking

CCS Concepts

• Human-centered computing → Text input; Interaction devices; Accessibility technologies; Interaction paradigms

INTRODUCTION

Eye tracking has emerged as a popular technology to enhance interaction in areas such as medicine, psychology, marketing, and human-computer interaction [32, 14]. In HCI, research has investigated using eye tracking as a text entry methodology, often termed eye typing or gaze-based text entry [19]. Gaze-based text entry is useful for individuals who are unable to operate a physical keyboard. However, there is a need to make gaze-based text entry more usable to gain acceptance of eye tracking for typing. Challenges remain in performance, learning, and fatigue.

Gaze-based text entry is usually accomplished via an on-screen keyboard, where the user selects letters by looking at them for a certain duration called a dwell-time [20]. However, this is slow, since the dwell period needs to be sufficiently long to prevent unintentional selections. Researchers have proposed alternative dwell-free interfaces like pEYE [9] or Dasher [41], encompassing customized designs and layouts. Although novel, these approaches are difficult to deploy due to constraints such as requiring too much screen space or extensive learning. More recent dwell-free approaches use the familiar QWERTY layout and rely on word-level gaze input, predicting the word based on fixation sequences [26] or the shape of the gaze path [17]. These approaches are promising since users do not select each letter; however, entry still requires explicit gaze gestures (look up/down, inside/outside of the key layout) for select and delete operations, and this hampers efficacy and the user experience.
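To make the dwell mechanism concrete, the following is a minimal sketch of dwell-based key selection in Python. The sampling interface, the 600 ms threshold, and the class name are illustrative assumptions on our part, not details taken from the paper or from [20].

    # Minimal sketch of dwell-time key selection (illustrative; the event
    # interface and the 600 ms default threshold are assumptions).
    class DwellSelector:
        def __init__(self, dwell_time=0.6):  # seconds; typical values 0.4-1.0 s
            self.dwell_time = dwell_time
            self.current_key = None   # key the gaze currently rests on
            self.dwell_start = None   # time at which the gaze entered that key

        def update(self, key_under_gaze, timestamp):
            """Feed one gaze sample; return a key when a dwell completes, else None."""
            if key_under_gaze != self.current_key:
                # Gaze moved to another key (or off the keyboard): restart the timer.
                self.current_key = key_under_gaze
                self.dwell_start = timestamp
                return None
            if key_under_gaze is not None and timestamp - self.dwell_start >= self.dwell_time:
                self.dwell_start = timestamp  # re-arm, so continued staring repeats
                return key_under_gaze
            return None

The trade-off described above is visible in the single dwell_time constant: lowering it speeds typing but raises the risk of unintentional selections.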
A usability challenge of current approaches is the learning required to achieve a reasonable text entry speed, since users need to adapt to faster dwell-times, a customized typing interface, or specific gaze gestures. This is evident from the methodology and results reported in the gaze-based text entry literature: most approaches report the entry rate after several sessions and hours of training. For example, the text entry rate using dwell selection is about 10 wpm after 10 training sessions [19, 34]. Continuous writing using Dasher was reported as 6.6 wpm in the initial session, reaching 12.4 wpm after eight sessions [34].

In addition to learnability, another challenge is discomfort and fatigue in continuous text entry using eye gaze. The reason is that the eye is a perceptual organ, normally used for looking and observing, not for selecting. It has been argued that if gaze is combined with manual input for interaction (gaze to look and observe, and a separate mode for selection), the whole interaction process can become more natural and perhaps faster [16]. It would also take less time to learn and may be less tiring for the eyes.

Previous work has shown that adding touch input to a gaze-based system allows faster pointing and selecting [5, 16, 44, 13]. However, integrating gaze and touch for text entry is unexplored. The straightforward approach of replacing dwell selection with touch selection for each letter (Touch+Gaze) amounts to a corresponding keystroke-level delay and adds interaction overhead. Hence, to optimize the interaction, we propose TAGSwipe, a word-level text entry method that combines eye gaze and touch input: the eyes look from the first through the last letter of a word on the virtual keyboard, with touch demarcating the gaze gesture. In particular, the acts of press and release signal the start and end of a word. The system records the user's gaze path, compares it with the ideal paths of candidate words, and generates a top candidate word, which is entered automatically when the user releases the touch. If the gaze point switches to the predicted word list, a simple touch gesture (left/right swipe) lets the user select an alternate word.
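Because press and release delimit the word, the controller itself can stay simple: it only buffers gaze samples while the finger is down and defers word identification to a recogniser. The sketch below illustrates this control flow; the handler names and the recogniser interface are our own assumptions, not the paper's implementation.

    # Minimal sketch of TAGSwipe-style touch demarcation (illustrative).
    # A touch press starts buffering gaze samples; the release hands the
    # buffered path to a word recogniser and commits the top candidate.
    class TAGSwipeController:
        def __init__(self, recognizer):
            self.recognizer = recognizer  # maps a gaze path to ranked word candidates
            self.recording = False
            self.gaze_path = []

        def on_touch_press(self):
            """Start of a word: begin buffering gaze samples."""
            self.recording = True
            self.gaze_path = []

        def on_gaze_sample(self, x, y):
            """Called for every sample delivered by the eye tracker."""
            if self.recording:
                self.gaze_path.append((x, y))

        def on_touch_release(self):
            """End of a word: rank candidates and commit the best one."""
            self.recording = False
            candidates = self.recognizer.rank(self.gaze_path)
            return candidates[0] if candidates else None

Note that no dwell timer appears anywhere: the temporal segmentation problem that dwell-based and gaze-gesture methods solve with thresholds is delegated entirely to the touch channel.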
Our experiment, conducted with 12 participants, assessed TAGSwipe's efficacy compared to gaze-based text entry approaches. TAGSwipe yielded superior performance (15.46 wpm) compared to EyeSwipe (8.84 wpm) and Dwell (8.48 wpm). Participants achieved an average entry rate of 14 wpm in the first session, reflecting the ease of use of TAGSwipe. Furthermore, the qualitative responses and explicit feedback showed a clear preference for TAGSwipe as a fast, comfortable, and easy-to-use text entry method.

To the best of our knowledge, TAGSwipe is the first approach to word-level text entry combining gaze with manual input. In the experiment, a touchscreen mobile device was used for manual input; however, the TAGSwipe framework can work with any triggering device, and so TAGSwipe also stands for "Tag your device to Swipe for fast and comfortable gaze-based text entry". The framework is feasible for people who lack fine motor skills but can perform coarse interactions with their hands (tap) [33, 27]. A majority of such users already use switch inputs, which can be leveraged to support gaze interaction. Additionally, TAGSwipe could be effective for elderly users who cannot move their fingers fast enough on a physical keyboard or control a mouse cursor precisely. Furthermore, if the frequency of use is low, able-bodied users may also opt for a gaze-supported approach to text entry.

BACKGROUND AND RELATED WORK

This work was inspired by previous research on gaze-based text entry.

Dwell-based Methods

Mott et al. [25] proposed a cascading dwell-time method that uses a language model to compute the probability of the next letter and adjust the dwell-time accordingly. In general, language models such as word prediction are effective in accelerating the text entry rate of dwell-time typing methods [40, 37, 4].

Dwell-free Methods

An alternative to dwell-based methods is dwell-free typing. For dwell-free text entry, two kinds of gesture-based methods have been used: (i) gesture-based character-level text entry and (ii) gesture-based word-level text entry. For character-level text entry, gaze gestures replace dwell-time selection of single characters [36]. Sarcar et al. [36] proposed EyeK, a typing method in which the user moves their gaze through the key in an in-out-in fashion to enter a letter. Wobbrock et al. [46] proposed EyeWrite, a stroke gesture-based system in which users enter text by making letter-like gaze strokes, based on the EdgeWrite alphabet [40], among the four corners of an on-screen square. Huckauf et al. [9] designed pEYEWrite, an expandable pie menu with letter groups; the user types a letter by gazing across the borders of the corresponding sections. Variations of the pEYEWrite approach were later proposed using a bigram model and word predictions [42]. Dasher [41] is a zooming interface where users select characters by fixating on a dynamically sized key until the key crosses a boundary point. In a context-switching method [23], the keyboard interface is replicated on two separate screen regions (called contexts), and character selection is made by switching contexts (a saccade to the other context). The duplication of contexts requires extensive screen space; hence, further adaptations of the context-switching method were proposed with the context being a single-line, a double-line, and a QWERTY layout with dynamically expanding keys [24].

The techniques just described are useful contributions to improving gaze-based text entry. However, these systems are difficult to deploy in practice due to screen space requirements and a steep learning curve, issues that may dissuade users from adopting them.

Word-level Text Entry

Interfaces using a gaze gesture to type word-by-word have shown promise in eye typing.
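Common to such word-level interfaces is a shape-matching step that scores the recorded gaze path against each dictionary word's ideal path through its key centers. The sketch below shows one widely used baseline, resampling both paths to a fixed number of points and summing point-wise distances, in the style of gesture keyboards such as SHARK2; it is an illustrative stand-in, not the specific algorithm of any system cited in this section.

    import math

    # Illustrative word-level path matching: resample the gaze path and each
    # word's ideal key-center path to N points, then rank words by summed
    # point-wise distance. Real systems add pruning, normalization, and
    # language-model weighting.
    N_POINTS = 32  # assumed resampling resolution

    def path_length(path):
        return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

    def resample(path, n=N_POINTS):
        """Return n points evenly spaced along the polyline `path`."""
        if len(path) < 2:
            return [path[0]] * n
        step = path_length(path) / (n - 1)
        result, dist_accum, prev, i = [path[0]], 0.0, path[0], 1
        while len(result) < n and i < len(path):
            seg = math.dist(prev, path[i])
            if seg > 0 and dist_accum + seg >= step:
                t = (step - dist_accum) / seg
                prev = (prev[0] + t * (path[i][0] - prev[0]),
                        prev[1] + t * (path[i][1] - prev[1]))
                result.append(prev)
                dist_accum = 0.0
            else:
                dist_accum += seg
                prev, i = path[i], i + 1
        while len(result) < n:
            result.append(path[-1])  # guard against floating-point shortfall
        return result

    def score(gaze_path, ideal_path):
        """Lower is better: summed distance between the resampled paths."""
        return sum(math.dist(a, b)
                   for a, b in zip(resample(gaze_path), resample(ideal_path)))

    def rank(gaze_path, lexicon_paths):
        """lexicon_paths: dict mapping each word to its ideal key-center path."""
        return sorted(lexicon_paths, key=lambda w: score(gaze_path, lexicon_paths[w]))

A rank function of this shape is exactly what the TAGSwipe controller sketched earlier would call on touch release.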
