Iterative Design and Evaluation of an Event Architecture for Pen-and-Paper Interfaces

Ron B. Yeh, Andreas Paepcke, Scott R. Klemmer
Stanford University HCI Group
Computer Science Department, Stanford, CA 94305
{ronyeh, paepcke, srk}@cs.stanford.edu

ABSTRACT
This paper explores architectural support for interfaces combining pen, paper, and PC. We show how the event-based approach common to GUIs can apply to augmented paper, and describe additions to address paper's distinguishing characteristics. To understand the developer experience of this architecture, we deployed the toolkit to 17 student teams for six weeks. Analysis of the developers' code provided insight into the appropriateness of events for paper UIs. The usage patterns we distilled informed a second iteration of the toolkit, which introduces techniques for integrating interactive and batched input handling, coordinating interactions across devices, and debugging paper applications. The study also revealed that programmers created gesture handlers by composing simple ink measurements. This desire for informal interactions inspired us to include abstractions for recognition. This work has implications beyond paper: designers of graphical tools can examine API usage to inform iterative toolkit development.

Figure 1. PaperToolkit provides an event-driven model, output to devices, and debugging techniques. Users have created tools for many tasks, including web design (left) and music composition (right). Going beyond the retrieval and form-filling tasks shown in prior paper + digital work, these apps explore real-time control of GUI elements and recognition of ink input.

ACM Classification Keywords
D.2.2 [Software Engineering]: Design Tools and Techniques — User interfaces. H.5.2 [Information Interfaces]: User Interfaces — input devices and strategies; prototyping; user-centered design.

Keywords
Toolkits, evaluation, augmented paper, device ensembles.

INTRODUCTION
Recent research has introduced techniques for combining pen-and-paper with interactive computing. These augmented interactions provide a fluid and flexible input interface for tasks such as documenting information in scientific research (e.g., [26, 44]), sketching product designs, and composing music (see Figure 1). The primary attraction of designing augmented paper interactions is their embrace of existing practices, particularly in mobile, informal, and collaborative settings (e.g., [17, 40]). To discover the best techniques to help developers create these systems, we can ask several questions. What aspects of graphical UI architectures can be adopted for creating paper applications? Which aspects of paper applications necessitate a departure from previous GUI design tools? Finally, what applications are developers interested in creating, and what do they do in practice? This paper addresses these questions; additionally, we hope our approach and findings will provide value to other areas of ubiquitous computing.

In describing augmented paper interactions, it can be useful to delineate two approaches. The first builds digital interactivity on top of the drawing and writing tasks that users have traditionally engaged in with pen and paper. The other approach begins by regarding the pen as a command-specification device, exploring paper widgets, gestures, and asynchronously executed behaviors. In reality, most research and commercial systems draw from both of these approaches. Examples that draw more on the first approach include techniques for temporally coordinating multiple media, such as Audio Notebook's and LiveScribe's integration of written notes with captured audio [24, 37], Adapx's ruggedized system for capturing field notes [2], A-book's use of a PDA to help organize laboratory notes [26], and ButterflyNet's integration of field observations with photographs [44].
Work that exemplifies the second, command-centric approach includes interactions such as taps of the pen (e.g., to retrieve scientific citations [31]), gestures for editing printed documents (e.g., [11, 22]), and gestures for creating and playing paper-based games [21].

This paper explores event-based architectures for paper + digital interactions, and describes PaperToolkit, a manifestation of the ideas. There are many enabling technologies for integrating paper and computation (e.g., [9, 12, 15, 37, 40]). PaperToolkit, our implementation of the ideas in this paper, is built on top of Anoto's [3], chosen for its reliability, mobility, high-resolution capture, and ability to distinguish pages. This technology employs a tiny camera, mounted inside the pen and pointed at the tip, to track pen motion across paper pre-printed with a dot pattern. This vision-based tracking provides the location, force, and time of each stroke, either in real time (via Bluetooth) or in batched mode (via a wired dock). However, most aspects of PaperToolkit's architecture apply to alternate pen hardware.

Figure 2. In PaperToolkit, input arrives from pens (left) and is sent to handlers (middle). Output is displayed on the local machine or routed to devices (bottom). Event dispatch and UI construction are modeled after GUI architectures, to help programmers create paper + digital applications rapidly.

This work offers three contributions. First, this research builds on prior augmented paper platforms [11, 35] that have abstracted development pragmatics such as producing Anoto-enhanced paper, acquiring pen data, and digitally rendering captured ink. PaperToolkit is similar to this prior work in its bookkeeping of the correspondence between interactive paper elements and their locations in the Anoto coordinate space. However, PaperToolkit's architecture is more flexible than these prior systems, introducing techniques for integrated real-time and batched input handling, coordinated interactions across multiple devices, and rich debugging of augmented paper applications.

Second, this paper reports on the usage of PaperToolkit, and how it has evolved in response to our findings. We provided the toolkit to a semester-long undergraduate HCI class at another university. The class comprised 69 students in 17 groups, mostly computer science juniors and seniors. The usage patterns we present were distilled through discussions with students and a review of the final projects.

Third, the paper contributes a method for user-centered toolkit design through static source-code analysis. Our findings provided ideas for architecture and API revisions.

The paper is organized as follows. We first introduce the core PaperToolkit architecture and describe how applications are created with it. We then describe the user study, present findings about toolkit usage, and detail how we improved the toolkit in response to the results of the study. We then describe this paper's relationship with prior work, and close by suggesting opportunities for future research.

THE PAPERTOOLKIT ARCHITECTURE
PaperToolkit addresses the problem of creating, debugging, and deploying paper + digital applications. In these interfaces, one or more people use digital pens and paper to capture and organize information, and issue commands to a computer via pen gestures and paper widgets. Visual or audio feedback is presented to the user on a nearby PC or handheld device (see Figure 2). Alternatively, a user may work without a PC nearby; his pen input is batched for later processing. Paper interfaces come in many forms, including datasheets for scientists, notebooks for designers, and large maps and posters for engineers. The question is how developers program the input handling and feedback for the UI. PaperToolkit helps programmers accomplish this by providing methods to create paper forms, abstractions to handle multi-device events, and techniques to develop and debug faster. The abstractions are distributed across seven main concepts, which were developed based on iterative feedback from our developers, both internal and external to our lab. The concepts are summarized in the following table. [Table not recovered in this extraction.] While all seven areas support paper applications, the ones marked by dark circles are also valuable in other domains. For example, a mobile application may use the Device architecture to send feedback from a phone to a PC.

Scenario: Designing a Paper-Based Blog
Karen is building a paper-based blogging system. A user writes blog entries with a digital pen, and taps a paper button to wirelessly transmit the entries to a handheld device, which uploads them to a web site. On a PC, Karen writes a Java program to create a Sheet object, a large Region to capture the user's handwriting, and a small Region to act as the upload button (e.g., the sheet in Figure 2). She adds two event handlers: an InkHandler to capture notes, and a ClickHandler to detect the pen tap. When Karen prints the paper UI, PaperToolkit augments the interface with the dot pattern. At runtime, the InkHandler receives the user's strokes from the pen's wireless connection. When the user taps the button, Karen's code retrieves the handwritten ink strokes, sends them through handwriting recognition, renders them to a JPEG, and uploads both the recognized text and the image of the handwriting to the user's blog. This programming approach (see Figure 3) […]

[Figure 3: application code listing; not recovered in this extraction.]

[…] applications and features that were interesting, yet difficult to implement. This section highlights some of the projects we implemented, and how they cover points in the paper + digital design […]
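Karen's scenario can be sketched in Java as follows. This is a minimal, self-contained sketch using hypothetical stand-ins for the toolkit's Sheet, Region, InkHandler, and ClickHandler concepts: the class names follow the text, but the signatures, the inlined event dispatch, and the string-based stand-in for handwriting recognition are simplifications invented here for illustration, not PaperToolkit's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PaperBlogSketch {

    /** One pen-down..pen-up trace; real ink would carry x/y/force/time samples. */
    static class InkStroke {
        final String payload; // stand-in for sampled coordinates
        InkStroke(String payload) { this.payload = payload; }
    }

    /** A rectangular area on a printed sheet that can receive pen input. */
    static class Region {
        final String name;
        final List<Consumer<InkStroke>> inkHandlers = new ArrayList<>();
        final List<Runnable> clickHandlers = new ArrayList<>();
        Region(String name) { this.name = name; }
        void addInkHandler(Consumer<InkStroke> h) { inkHandlers.add(h); }
        void addClickHandler(Runnable h) { clickHandlers.add(h); }
        // Toolkit-side dispatch: route incoming pen events to the handlers,
        // mirroring how GUI toolkits deliver events to listeners.
        void penStroke(InkStroke s) { inkHandlers.forEach(h -> h.accept(s)); }
        void penTap() { clickHandlers.forEach(Runnable::run); }
    }

    /** A printed page holding its interactive regions. */
    static class Sheet {
        final List<Region> regions = new ArrayList<>();
        void addRegion(Region r) { regions.add(r); }
    }

    static String runScenario() {
        List<InkStroke> captured = new ArrayList<>();
        StringBuilder blog = new StringBuilder();

        Sheet sheet = new Sheet();
        Region notesArea = new Region("notes");      // large writing region
        Region uploadButton = new Region("upload");  // small button region
        sheet.addRegion(notesArea);
        sheet.addRegion(uploadButton);

        // InkHandler: accumulate the user's handwriting as it arrives.
        notesArea.addInkHandler(captured::add);

        // ClickHandler: on tap, "recognize" the captured ink and post it.
        uploadButton.addClickHandler(() -> {
            for (InkStroke s : captured) blog.append(s.payload).append(" ");
        });

        // Simulated runtime: strokes arrive over the wireless link, then a tap.
        notesArea.penStroke(new InkStroke("Hello"));
        notesArea.penStroke(new InkStroke("paper"));
        uploadButton.penTap();
        return blog.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(runScenario()); // prints: Hello paper
    }
}
```

The point of the sketch is the division of labor the scenario describes: the developer declares regions and attaches handlers, while the toolkit owns event delivery, so real-time strokes and a later batched replay can be fed through the same handler code path.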
