Waken: Reverse Engineering Usage Information and Interface Structure from Software Videos

Nikola Banovic 1,2, Tovi Grossman 1, Justin Matejka 1, and George Fitzmaurice 1

1 Autodesk Research, 210 King St. E., Toronto, Ontario, Canada ({[email protected]})
2 Dept. of Computer Science, University of Toronto, Ontario, Canada ([email protected])

ABSTRACT

We present Waken, an application-independent system that recognizes UI components and activities from screen-captured videos, without any prior knowledge of that application. Waken can identify the cursors, icons, menus, and tooltips that an application contains, and when those items are used. Waken uses frame differencing to identify occurrences of behaviors that are common across graphical user interfaces. Candidate templates are built, and then other occurrences of those templates are identified using a multi-phase algorithm. An evaluation demonstrates that the system can successfully reconstruct many aspects of a UI without any prior application-dependent knowledge. To showcase the design opportunities that are introduced by having this additional meta-data, we present the Waken Video Player, which allows users to directly interact with UI components that are displayed in the video.

ACM Classification: H5.2 [Information interfaces and presentation]: User Interfaces - Graphical user interfaces.

Keywords: Video; tutorial; pixel-based reverse engineering

INTRODUCTION

In recent years, video tutorials have become a prevalent medium for delivering software learning content [21, 26]. Several recent research projects have attempted to pre-capture usage meta-data in order to enhance the video tutorial viewing experience by marking up the video timeline [10, 13, 22]. While these types of systems show promise, the immense collection of existing online videos will not possess such meta-data.

We build upon recent work in reverse engineering user interfaces [5, 30], and explore the possibilities and opportunities related to reverse engineering the video tutorials themselves. This is a challenging task, as information from cursor movements, mouse clicks, and accessibility APIs is not available to aid recognition. Previous research systems have provided some initial promising results, but have relied on explicit application-dependent rules [21] or templates [26] that may not generalize.

In this paper, we present Waken, an application-independent system that recognizes UI components and activities from an input set of video tutorials. Waken does not require any prior knowledge of the appearance of an application's icons or its interface layout. Instead, our system identifies specific motion patterns in the video and extracts the visual appearance of the associated widgets in the application UI (Figure 1). Template matching can then be performed on the extracted widgets to locate them in other frames of the video.

Waken is capable of tracking the cursor, identifying application icons, and recognizing when icons are clicked. We also explore the feasibility of associating tooltips with icons, and of reconstructing the contents of menus. An initial evaluation of our system on a set of 34 Google SketchUp video tutorials showed that the system was able to successfully identify 14 cursors and 8 icons, and to accurately model the hotspot of the system cursor. While the accuracy levels have room for improvement, the trends indicate that with enough samples, all elements could be recognized.

We then present the Waken Video Player, to showcase the design opportunities that are introduced by having this meta-data associated with videos. The video player allows users to directly explore and interact with the video itself, almost as if it were a live application. For example, users can click directly on icons in the video to find related videos in the library that use that particular tool. Users can also hover over icons in the video to display their associated tooltips, or hover over menus to reveal their contents. These techniques are all enabled without any prior knowledge of specific interface layouts or item appearances. We conclude with a discussion of lessons learned and the limitations of our work, and outline possible lines of future work.

Figure 1. Waken uses frame differencing to extract UI elements, such as cursors (a) and icons (b).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
UIST '12, October 7-10, 2012, Cambridge, Massachusetts, USA.
Copyright 2012 ACM 978-1-4503-1580-7/12/10...$15.00.

RELATED WORK

In this section we discuss previous research on video tutorials, tutorial authoring, and reverse engineering UIs.

Video Tutorials

Video tutorials have become a prevalent instructional aid for learning software [21, 26]. One of the main reasons for their popularity could be the benefit of learning a software application by directly observing expert use [28]. However, there are also some drawbacks. One major drawback is navigation issues within longer video tutorials, which can lead to misunderstandings of the content [14]. Additionally, users may be unable to keep up with the pace of the instructions, which can lead to reduced knowledge retention [23]. Although Grossman and Fitzmaurice [12] showed performance benefits of short contextual video clips, longer tutorials could still pose significant difficulty for the user.

Past research has shown that navigation of recorded software workflows can be aided by visualizing operation history [22]. Chronicle [13] combines operation history visualizations with timeline-based video summarization and browsing, to aid in the exploration of high-level workflows in video tutorials. Ambient Help [21] and Pause-and-Play [26] also provide video timelines based on operation histories.

Our enhanced player provides similar operation history and timeline-based browsing interactions. However, we also extend this literature by exploring how direct manipulation of the user interface in the video tutorial itself could enhance the user experience. While direct manipulation of videos has been previously explored [9], we are unaware of any implementations specific to software tutorial videos. Motivation for such interaction is provided by evidence that users sometimes try to interact with the elements within a documentation article directly [20]. Allowing the user to explore the target application's interface directly in the video could also reduce switching between the video and the target application [26], and support discovery-based learning [8].

Tutorial Authoring and Usage Data Capture

Video tutorials are typically built from screen capture videos demonstrating usage of the target application. Unfortunately, tutorials authored in this way do not contain the additional metadata required for reconstructing operation histories. [26] described a technique to infer icon usage data from existing video tutorials. However, their implementation was also application specific, as it performed template matching on the target application's tool palette. While it may be possible to provide such templates for any application of interest, it would be helpful if such application-dependent templates were not required. We explore an approach which does not require prior training data.

Reverse Engineering User Interfaces

Advances in computer vision have enabled techniques to identify user interface elements from live applications. Sikuli [30] uses computer vision to identify GUI components in screen captures, search for GUI elements in the interface [31], and automate GUI tasks [2]. However, the underlying computer vision algorithms in Sikuli identify GUI elements based on templates from sample image data. To minimize the time required for collecting training data, past research [3, 5, 18, 21] explored abstracting the identification of different GUI elements and decoupling GUI element representation from predefined image templates. Hurst et al. [18] combined a number of useful computer vision techniques with mouse and accessibility API information to automatically identify clickable targets in the interface. Chang et al. [3] proposed an accessibility- and pixel-based framework, which also allowed for detecting text and arbitrary word blobs in user interfaces. However, when working with video tutorials, mouse information and data from the accessibility API are not available.

Prefab [5] identified GUI elements by using GUI-specific visual features, which enabled overlaying advanced interaction techniques on top of existing interfaces [6, 7]. We build on the insight from Prefab [5] that "it is likely not necessary to individually define each widget in every application, but may instead be possible to learn definitions of entire families of widgets." However, instead of using interactive methods to identify the visual features of widgets, we propose an automated, application-independent method that learns widget representations from the source video itself.

SYSTEM GOALS AND CHALLENGES

Our work is a first step towards an ultimate goal of automatically constructing interface facades [27] from nothing more than an input set of videos. Previous work on reverse

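Template matching, which the paper uses to locate extracted widgets in other frames, can likewise be sketched as a brute-force sum-of-absolute-differences scan. This is an illustrative assumption rather than the paper's actual matcher; a production system would typically use an optimized routine such as normalized cross-correlation (e.g., OpenCV's matchTemplate).

```python
def match_template(frame, template):
    """Slide `template` over `frame` (both 2-D grayscale lists) and return
    the (y, x) offset with the smallest sum of absolute differences,
    together with that score; a score of 0 means an exact pixel match."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = None, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            score = sum(
                abs(frame[y + dy][x + dx] - template[dy][dx])
                for dy in range(th)
                for dx in range(tw)
            )
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

The exhaustive scan is quadratic in the frame size per candidate position, which is why real matchers work on downsampled frames or restrict the search to regions flagged by frame differencing.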