DeBGUer: A Tool for Bug Prediction and Diagnosis


The Thirty-First AAAI Conference on Innovative Applications of Artificial Intelligence (IAAI-19)

Amir Elmishali, Roni Stern, Meir Kalech
The Cyber Security Research Center at Ben Gurion University of the Negev
[email protected], {sternron,kalech}[email protected]

Abstract

In this paper, we present the DeBGUer tool, a web-based tool for the prediction and isolation of software bugs. DeBGUer is a partial implementation of the Learn, Diagnose, and Plan (LDP) paradigm, a recently introduced paradigm for integrating Artificial Intelligence (AI) in the software bug detection and correction process. In LDP, a diagnosis (DX) algorithm is used to suggest possible explanations – diagnoses – for an observed bug. If needed, a test planning algorithm is subsequently used to suggest further testing. Both the diagnosis and test planning algorithms consider a fault prediction model, which associates each software component (e.g., class or method) with the likelihood that it contains a bug. DeBGUer implements the first two components of LDP, bug prediction (Learn) and bug diagnosis (Diagnose). It provides an easy-to-use web interface, and has been successfully tested on 12 projects.

Introduction

Software is ubiquitous and its complexity is growing. Consequently, software bugs are common and their impact can be very costly. Therefore, software engineers devote much effort to detecting, isolating, and fixing bugs as early as possible in the software development life cycle.

Learn, Diagnose, and Plan (LDP) is a recently proposed paradigm that aims to support bug detection and isolation using technologies from the Artificial Intelligence (AI) literature (Elmishali, Stern, and Kalech 2018). To support bug detection, LDP uses machine learning to train a fault prediction model that predicts which software components are likely to contain bugs, providing guidance to testing efforts. To support bug isolation, LDP uses a software diagnosis algorithm to detect the root cause of failed tests, and techniques from automated planning to plan additional tests.

In this paper, we describe DeBGUer, a novel open-source tool that is a partial implementation of LDP. DeBGUer provides a web interface to the LDP bug prediction and diagnosis capabilities. To use DeBGUer, developers simply need to provide a link to their issue tracker (e.g., Jira) and source control management system (e.g., Git). DeBGUer then visualizes the fault prediction results, showing a fault (bug) likelihood estimate for every software component in a given version of the code. Through its web interface, DeBGUer also exposes the LDP diagnosis functionality: the developer points to a set of tests, DeBGUer executes them, and if some of the tests fail, it outputs a set of software components that may have caused the tests to fail.

Our tool can be seen as a counterpart to the GZoltar tool (Campos et al. 2012). GZoltar provides two key functionalities: test suite minimization and software diagnosis. DeBGUer does not deal with test suite minimization. The DeBGUer software diagnosis algorithm is an improved version of the diagnosis algorithm used by GZoltar.¹ DeBGUer also provides software fault prediction functionality, which is not supported by GZoltar.
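To give a concrete flavor of the spectrum-based diagnosis family to which both tools belong, the following is a minimal Python sketch of the classic Ochiai scoring coefficient. It is illustrative only: it is neither GZoltar's Barinel nor DeBGUer's data-augmented variant, and the toy coverage matrix is invented.

```python
# Minimal sketch of spectrum-based fault localization scoring using the
# classic Ochiai coefficient. Illustrative only: this is neither Barinel
# (used by GZoltar) nor DeBGUer's data-augmented variant.
from math import sqrt

def ochiai_scores(coverage, outcomes):
    """coverage[t][c] is True if test t executed component c;
    outcomes[t] is True if test t failed. Returns one score per component."""
    total_failed = sum(outcomes)
    scores = []
    for c in range(len(coverage[0])):
        covered = [row[c] for row in coverage]
        failed_and_covered = sum(
            1 for t in range(len(coverage)) if covered[t] and outcomes[t])
        denom = sqrt(total_failed * sum(covered))
        scores.append(failed_and_covered / denom if denom else 0.0)
    return scores

# Toy example: the only failing test (test 2) executes only component 1,
# so component 1 receives the highest suspiciousness score.
coverage = [[True, True, False],
            [True, False, True],
            [False, True, False]]
outcomes = [False, False, True]
print(ochiai_scores(coverage, outcomes))  # [0.0, 0.707..., 0.0]
```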
Having such an implementation of LDP serves several purposes. First, developers can use DeBGUer in practice to follow LDP. Second, researchers working on software fault prediction or software diagnosis can use DeBGUer to evaluate their algorithms against the corresponding algorithms implemented in DeBGUer.

In this paper, we provide a brief description of the LDP paradigm and describe our implementation of it in DeBGUer. Then, we report on an empirical evaluation of DeBGUer on 12 open-source projects. The results show accurate bug prediction and isolation capabilities, demonstrating the applicability of LDP in general and of DeBGUer specifically.

Figure 1: The workflow of the LDP paradigm.

¹GZoltar uses the Barinel algorithm (Abreu, Zoeteweij, and van Gemund 2009), while DeBGUer uses a data-augmented version of Barinel that was already shown to outperform the vanilla Barinel (Elmishali, Stern, and Kalech 2016).

Figure 2: Export from the Jira issue tracker, showing data on bug TIKA-1222.

Figure 3: Screenshot from GitHub, showing the commit that fixed bug TIKA-1222 from Figure 2.

The LDP Paradigm

As a preliminary, we briefly describe the LDP paradigm. For a complete description, please see Elmishali et al. (2018). LDP uses three AI components:

• A fault predictor. An algorithm that estimates the probability of each software component to have a bug.

• A diagnoser. A diagnosis (DX) algorithm that accepts as input a set of test executions including at least one failed test. It outputs one or more possible explanations – diagnoses – for the failed tests. Each diagnosis is an assumption about which software components are faulty. The diagnoser also associates every diagnosis with a score that indicates how likely that diagnosis is to be the true explanation for the failed tests.

• A test planner. An algorithm that accepts the set of diagnoses outputted by the diagnoser and suggests additional tests to perform in order to gain diagnostic knowledge that will help identify the correct diagnosis.

Figure 1 illustrates the LDP workflow. The process starts by extracting information from the issue tracking system (ITS) and version control system (VCS) in use. The ITS records all reported bugs and their status (e.g., fixed or open). The VCS records every modification made to the source code. Prominent examples of an ITS and a VCS are Jira and Git, respectively. The information extracted from these systems, such as the set of past failures and code changes, is used by LDP to train a bug prediction model using standard machine learning.

When one or more tests fail, indicating that a bug exists, the set of executed tests is given as input to the LDP diagnoser. The diagnoser outputs a set of possible diagnoses and their scores. If a single diagnosis is found whose score is high enough (what counts as "high enough" is a parameter), then this diagnosis is passed to the developer to be fixed. If not, the test planner proposes an additional test to perform in order to narrow down the set of possible diagnoses. This initiates an iterative process in which the test planner plans an additional test, the tester performs it, and the diagnoser uses the newly obtained data to recompute the set of possible diagnoses and their likelihoods. The process stops when a diagnosis is found whose probability of being correct is high enough. At this stage, a bug report is added to the issue tracking system and the diagnosed bug is given to a developer to fix the (now isolated) bug. The sketch below illustrates this control loop.
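The following Python sketch renders the loop just described. It is schematic, not DeBGUer's code: the Diagnosis record and the run_tests, diagnose, and plan_test callables are hypothetical stand-ins for the tester, the diagnoser, and the test planner, and the threshold value is an arbitrary placeholder.

```python
# Schematic sketch of the LDP control loop. All component names are
# hypothetical stand-ins; none of this is DeBGUer's actual API.
from dataclasses import dataclass

@dataclass
class Diagnosis:
    components: frozenset  # the components this diagnosis assumes faulty
    score: float           # likelihood of being the true explanation

def ldp_loop(run_tests, diagnose, plan_test, tests, threshold=0.9):
    """run_tests(tests) -> list of (test, failed) pairs;
    diagnose(observations) -> list of Diagnosis;
    plan_test(diagnoses) -> one additional test to execute."""
    observations = run_tests(tests)
    if not any(failed for _, failed in observations):
        return None  # all tests passed: no bug to isolate
    while True:
        diagnoses = diagnose(observations)
        best = max(diagnoses, key=lambda d: d.score)
        if best.score >= threshold:  # "high enough" is a parameter
            return best              # bug isolated; file a report and fix
        # Otherwise, plan one more test to discriminate among diagnoses.
        observations += run_tests([plan_test(diagnoses)])
```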
Implementation Details

In this section, we describe the architecture of DeBGUer. As introduced above, DeBGUer is a partial implementation of the LDP paradigm. It includes AI components that perform two tasks: fault prediction and diagnosis. Next, we describe the implementation details of these components as well as the web interface.

Fault Predictor

Fault prediction in software is a classification problem. Given a software component, such as a method or a class, the goal is to determine its class: healthy or faulty. Supervised machine learning algorithms are commonly used to solve classification problems. They work as follows. As input, they are given a set of labeled instances, which are pairs of instances and their correct labeling, i.e., the correct class for each instance. In our case, the instances are software components such as classes or methods, and the labels indicate which software components are healthy and which are not. The output is a classification model, which maps an (unlabeled) instance to a class. The set of labeled instances is called the training set, and the act of generating a classification model from the training set is referred to as learning.

Learning algorithms extract features of the instances in the training set and learn the relation between the features of the instances and their class. A key to the success of machine learning algorithms is the choice of features. Many possible features have been proposed in the literature for software fault prediction. Radjenovic et al. (2013) surveyed the features used by existing software fault prediction algorithms. In a preliminary set of experiments, we found that the best-performing combination consists of 405 features drawn from those listed by Radjenovic et al. (2013), together with bug history features that we created. This list of features includes the McCabe and Halstead (1977) complexity measures; several object-oriented measures, such as the number of methods overriding a superclass, the number of public methods, the number of other classes referenced, and whether the class is abstract; and several process features, such as the age of the source file, the number of revisions made to it in the last release, the number of developers who contributed to its development, and the number of lines changed since the latest version.² A sketch of how a few such process features can be computed appears below, followed by a minimal example of training a classifier on such features.

²The exact list of the 405 features that we used can be […]

Figure 5: The DeBGUer user home page.

[…] observed software bug. In addition, there are multiple options for defining the basic software unit, e.g., a class, a method, or a code block. Our tool currently supports defining the basic unit as either a class or a method.
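As an illustration of the process features listed above, the following Python sketch computes a few of them (revision count, developer count, and file age) for a single source file via standard git commands. This is not DeBGUer's extraction pipeline, and the repository path and file name in the usage comment are hypothetical.

```python
# Illustrative computation of a few process features for one source file
# using plain git commands; a sketch, not DeBGUer's pipeline.
import subprocess
import time

def git_lines(repo, args):
    """Run a git command inside `repo` and return stdout split into lines."""
    result = subprocess.run(["git", "-C", repo] + args,
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

def process_features(repo, path):
    # One output line per commit that touched `path` (--follow tracks renames).
    hashes = git_lines(repo, ["log", "--follow", "--format=%H", "--", path])
    authors = git_lines(repo, ["log", "--follow", "--format=%ae", "--", path])
    stamps = git_lines(repo, ["log", "--follow", "--format=%at", "--", path])
    # git log lists newest first, so stamps[-1] is the file's first commit.
    age_days = (time.time() - int(stamps[-1])) / 86400 if stamps else 0.0
    return {
        "num_revisions": len(hashes),         # revisions made to the file
        "num_developers": len(set(authors)),  # distinct contributing authors
        "age_days": age_days,                 # time since the file was created
    }

# Hypothetical usage:
# print(process_features("/path/to/repo", "src/main/java/Foo.java"))
```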

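Finally, to make the classification setup concrete, here is a minimal scikit-learn sketch of training a fault prediction model and querying it for a fault likelihood. The five feature columns and the choice of a random forest are illustrative assumptions; DeBGUer's actual model uses the 405 features described above, which are not reproduced here.

```python
# Minimal sketch of fault prediction as supervised classification.
# Feature columns and classifier choice are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

# One row per software component (class or method). Hypothetical columns:
# [cyclomatic_complexity, num_public_methods, num_revisions,
#  num_developers, lines_changed]
X_train = [
    [ 3,  4,  2, 1,  10],
    [12,  9,  8, 3, 140],
    [ 5,  2,  1, 1,   4],
    [20, 15, 11, 4, 300],
]
y_train = [0, 1, 0, 1]  # 1 = component had a bug, 0 = healthy

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# For a new component, predict_proba yields the fault likelihood that a
# tool like DeBGUer can display next to each class or method.
new_component = [[8, 6, 5, 2, 60]]
print(model.predict_proba(new_component)[0][1])  # estimated P(faulty)
```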