Guided GUI Testing of Android Apps with Minimal Restart and Approximate Learning

Wontae Choi, George Necula, Koushik Sen
EECS Department, University of California, Berkeley
[email protected], [email protected], [email protected]

Abstract

Smartphones and tablets with rich graphical user interfaces (GUIs) are becoming increasingly popular. Hundreds of thousands of specialized applications, called apps, are available for such mobile platforms. Manual testing is the most popular technique for testing the graphical user interfaces of such apps. Manual testing is often tedious and error-prone. In this paper, we propose an automated technique, called SwiftHand, for generating sequences of test inputs for Android apps. The technique uses machine learning to learn a model of the app during testing, uses the learned model to generate user inputs that visit unexplored states of the app, and uses the execution of the app on the generated inputs to refine the model. A key feature of the testing algorithm is that it avoids restarting the app, which is a significantly more expensive operation than executing the app on a sequence of inputs. An important insight behind our testing algorithm is that we do not need to learn a precise model of an app, which is often computationally intensive, if our goal is simply to guide test execution into unexplored parts of the state space. We have implemented our testing algorithm in a publicly available tool for Android apps written in Java. Our experimental results show that we can achieve significantly better coverage than traditional random testing and L∗-based testing in a given time budget. Our algorithm also reaches peak coverage faster than both random and L∗-based testing.

Categories and Subject Descriptors: D.2.5 [Software Engineering]: Testing and Debugging

General Terms: Algorithm, Design, Experimentation

Keywords: GUI testing, Learning, Automata, Android

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
OOPSLA '13, October 29–31, 2013, Indianapolis, Indiana, USA.
Copyright © 2013 ACM 978-1-4503-2374-1/13/10. $15.00. http://dx.doi.org/10.1145/2509136.2509552

1. Introduction

Smartphones and tablets with rich graphical user interfaces (GUIs) are becoming increasingly popular. Hundreds of thousands of specialized applications, called apps, are already available for these mobile platforms. The complexity of these apps often lies in the user interface, with data processing either minor or delegated to a backend component. A similar situation exists in applications using a software-as-a-service architecture, where the client-side component consists mostly of user interface code. Testing such applications involves predominantly user interface testing.

We focus on user interface testing for Android apps, although many of the same challenges exist on other mobile and browser platforms. The only two tools that are widely used in practice for testing Android apps are Monkeyrunner [2], a framework for manually scripting sequences of user inputs in Python, and Monkey [3], a random automatic user input generation tool. A typical use of the Monkey tool involves generating clicks at random positions on the screen, without any consideration of what the actual controls shown on the screen are, which ones have already been clicked, and what sequence of steps was taken to arrive at the current configuration. It is not surprising, then, that such testing has trouble exploring enough of the user interface, especially the parts that are reached only after a specific sequence of inputs.

In this paper, we consider the problem of automatically generating sequences of test inputs for Android apps for which we do not have an existing model of the GUI. The goal is to achieve code coverage quickly by learning and exploring an abstraction of the model of the GUI of the app. We discuss further in the paper various techniques that have been explored previously to address different aspects of this problem. However, our experience shows that previous learning techniques ignore a very significant practical constraint in testing apps. All automatic exploration algorithms will occasionally need to restart the app in order to explore additional states reachable from the initial state. The only reliable way to restart an app is to remove and reinstall it. This is necessary, for example, when the application has persistent data. Our experiments show that the restart operation takes 30 seconds, which is significantly longer than the time required to explore any other transition, such as a user input. Currently, our implementation waits for up to 5 seconds after sending a user input, to ensure that all handlers have finished. We expect that with more support from the operating system, or a more involved implementation, we could reduce that wait time to well under a second.

Since the cost of exploring a transition to the initial state (a restart) is an order of magnitude more than the cost of any other transition, we must use an exploration and learning algorithm that minimizes the number of restarts. Standard regular language learning algorithms are not appropriate in this case. For example, Angluin's L∗ [10] requires at least O(n²) restarts, where n is the number of states in the model of the user interface. Rivest and Schapire's algorithm [38] reduces the number of restarts to O(n), which is still high, by computing homing sequences, and it also increases the runtime by a factor of n, which is again not acceptable when we want to achieve code coverage quickly.

In this paper we propose a testing algorithm based on two key observations:

1. It is possible to reduce the use of app restarts, because most user interface screens of an Android app can often be reached from other screens just by triggering a sequence of user inputs, e.g., using the "back" or "home" buttons.

2. For the purpose of test generation, we do not need to learn an exact model of the app under test. All we need is an approximate model that can guide the generation of user inputs while maximizing code coverage. Note that for real-world apps a finite-state model of the GUI may not even exist. Some apps may require a push-down model and others may require more sophisticated infinite models.

Based on these two observations, we propose a testing algorithm, called SwiftHand, that uses execution traces generated during the testing process to learn an approximate model of the GUI. SwiftHand then uses the learned model to choose user inputs that would take the app to previously unexplored states. As SwiftHand triggers the newly generated user inputs and visits new screens, it expands the learned model, and it also refines the model when it finds discrepancies between the model learned so far and the observed behavior of the app. The interplay between on-the-fly learning of the model and generation of user inputs based on the learned model helps SwiftHand to quickly visit unexplored states of the app.

We compared the effectiveness of SwiftHand with that of random testing and L∗-based testing. Our results show that SwiftHand can achieve significantly better coverage on these apps within a given time budget. Moreover, SwiftHand achieves branch coverage at a rate faster than that of random and L∗-based testing. We report the results of our investigation in the empirical evaluation section.

2. Overview

In this section we introduce a motivating example, which we will use first to describe several existing techniques for automated user interface testing. Then we describe SwiftHand at a high level in the context of the same example. The formal details of the SwiftHand algorithm are in Section 3.

We use part of Sanity, an Android app in our benchmark suite, as our running example. Figure 1 shows the first four screens of the app. The app starts with three consecutive end-user license agreement (EULA) screens. To test the main app, an automated testing technique must pass all three EULA screens.

[Figure 1: four screenshots — EULA 1 (q1), EULA 2 (q2), and EULA 3 (q3), each showing license terms with Yes/No buttons, and the Main screen (qM) with Action 1, Action 2, and Action 3.]

Figure 1: The first four screens of the Sanity app. The screen names in parentheses are for cross-reference from the models of this app discussed later.

The first and the third EULA screens have four input choices: (a) the Yes button to accept the license terms, (b) the No button to decline the license terms, (c) ScrollUp the license terms, and (d) ScrollDown the license terms. Pressing No at any point terminates the program. ScrollDown and ScrollUp do not change the non-visual state of the program. The second EULA screen does not have the scrolling option. Pressing the Yes button three times leads the user to the main screen of the app. For convenience, in the remainder of this paper we are going to use the short names q1, q2, q3, and qM instead of EULA1, EULA2, EULA3, and Main.

2.1 Goals and Assumptions

We want to develop an algorithm that generates sequences of user inputs and feeds them to the app in order to achieve
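The EULA flow described above is small enough to write down as an explicit finite-state model. The following Python sketch is our own illustration, not part of the paper's tool: the state names q1, q2, q3, and qM follow Figure 1, while the dictionary encoding and the input names ("yes", "no", "scroll_up", "scroll_down") are assumptions made for the example.

```python
# Illustrative finite-state model of the Sanity EULA flow (Figure 1).
# States q1-q3 are the EULA screens, qM is the main screen, and
# "terminated" models pressing No, which exits the app.
TRANSITIONS = {
    ("q1", "yes"): "q2",
    ("q1", "no"): "terminated",
    ("q1", "scroll_up"): "q1",    # scrolling leaves the non-visual state unchanged
    ("q1", "scroll_down"): "q1",
    ("q2", "yes"): "q3",
    ("q2", "no"): "terminated",   # q2 has no scrolling option
    ("q3", "yes"): "qM",
    ("q3", "no"): "terminated",
    ("q3", "scroll_up"): "q3",
    ("q3", "scroll_down"): "q3",
}

def run(inputs, start="q1"):
    """Replay a sequence of user inputs on the model; return the final state."""
    state = start
    for action in inputs:
        state = TRANSITIONS[(state, action)]
    return state

print(run(["yes", "yes", "yes"]))   # three Yes presses reach the main screen: qM
print(run(["scroll_down", "no"]))   # No at any point terminates: terminated
```

Note that any tester, human or automated, must drive this machine through the q1 → q2 → q3 path before any of the main screen's actions become reachable.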

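To make the restart-minimization idea from the introduction concrete, here is a hedged sketch (our own simplification, not the SwiftHand implementation) of an exploration loop over a known deterministic model: it walks to the nearest unexplored transition via breadth-first search on the model, and restarts only when nothing unexplored is reachable from the current state.

```python
from collections import deque

def enabled(model, state):
    """Inputs enabled at `state` (keys of the transition dictionary)."""
    return [a for (s, a) in model if s == state]

def path_to_unexplored(model, state, seen):
    """BFS for the shortest input sequence from `state` to an unexplored
    (state, input) pair; None if every reachable pair is already in `seen`."""
    queue, visited = deque([(state, [])]), {state}
    while queue:
        s, path = queue.popleft()
        for a in enabled(model, s):
            if (s, a) not in seen:
                return path + [a]
            t = model[(s, a)]
            if t not in visited:
                visited.add(t)
                queue.append((t, path + [a]))
    return None

def explore(model, start):
    """Cover every reachable transition, restarting only when forced to."""
    state, seen, restarts = start, set(), 0
    while True:
        path = path_to_unexplored(model, state, seen)
        if path is None:
            if path_to_unexplored(model, start, seen) is None:
                return restarts, seen              # everything reachable is covered
            state, restarts = start, restarts + 1  # costly app restart
            continue
        for a in path:
            seen.add((state, a))
            state = model[(state, a)]

# The Sanity EULA model of Figure 1, repeated so the sketch is self-contained.
MODEL = {
    ("q1", "yes"): "q2", ("q1", "no"): "terminated",
    ("q1", "scroll_up"): "q1", ("q1", "scroll_down"): "q1",
    ("q2", "yes"): "q3", ("q2", "no"): "terminated",
    ("q3", "yes"): "qM", ("q3", "no"): "terminated",
    ("q3", "scroll_up"): "q3", ("q3", "scroll_down"): "q3",
}
restarts, seen = explore(MODEL, "q1")
print(restarts, len(seen))   # 4 restarts to cover all 10 transitions
```

Even this greedy loop restarts only when it gets stuck on the qM or terminated screens, whereas an L∗-style learner returns to the initial state after essentially every membership query, which is the O(n²)-restarts behavior the paper argues against.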