
The Impact of Feature Selection on Predicting the Number of Bugs

Haidar Osman¹, Mohammad Ghafari², Oscar Nierstrasz²
¹Swisscom AG, Switzerland
²Software Composition Group, University of Bern, Switzerland
osman, ghafari, [email protected]

arXiv:1807.04486v1 [cs.SE] 12 Jul 2018

Abstract—Bug prediction is the process of training a machine learning model on software metrics and fault information to predict bugs in software entities. While feature selection is an important step in building a robust prediction model, there is insufficient evidence about its impact on predicting the number of bugs in software systems. We study the impact of both correlation-based feature selection (CFS) filter methods and wrapper feature selection methods on five widely-used prediction models, and demonstrate how these models perform with or without feature selection when predicting the number of bugs in five different open source Java software systems. Our results show that wrappers outperform the CFS filter; they improve prediction accuracy by up to 33% while eliminating more than half of the features. We also observe that, although the same feature selection method chooses different feature subsets in different projects, the chosen subset always contains a mix of source code and change metrics.

I. INTRODUCTION

In the field of machine learning, when there is a large number of features, determining the smallest subset that exhibits the strongest effect often decreases model complexity and increases prediction accuracy. This process is called feature selection.¹ There are two well-known types of feature selection methods: filters and wrappers. Filters select features based on their relevance to the response variable, independently of the prediction model. Wrappers select features that increase the prediction accuracy of the model. However, as with any design decision during the construction of a prediction model, one needs to evaluate different feature selection methods in order to choose one, and above all to assess whether it is needed at all.

While there has been extensive research on the impact of feature selection on prediction models in different domains, our investigation reveals that it is a rarely studied topic in the domain of bug prediction. Few studies explore how feature selection affects the accuracy of classifying software entities into buggy or clean [1][2][3][4][5][6][7][8], but to the best of our knowledge no dedicated study exists on the impact of feature selection on the accuracy of predicting the number of bugs. As a result of this research gap, researchers often overlook feature selection and provide their prediction models with all the metrics they have on a software project or in a dataset. We argue that feature selection is a mandatory step in the bug prediction pipeline, and that applying it might alter previous findings in the literature, especially when it comes to comparing different machine learning models or different software metrics.

In this paper we treat bug prediction as a regression problem in which a bug predictor predicts the number of bugs in software entities, as opposed to classifying software entities as buggy or clean. We investigate the impact of filter and wrapper feature selection methods on the prediction accuracy of five machine learning models: K-Nearest Neighbour, Linear Regression, Multilayer Perceptron, Random Forest, and Support Vector Machine. More specifically, we carry out an empirical study on five open source Java projects (Eclipse JDT Core, Eclipse PDE UI, Equinox, Lucene, and Mylyn) to answer the following research questions:

RQ1: How does feature selection impact the prediction accuracy? Our results show that applying correlation-based feature selection (CFS) improves the prediction accuracy in 32% of the experiments, degrades it in 24%, and keeps it unchanged in the rest. On the other hand, applying the wrapper feature selection method improves prediction accuracy by up to 33% in 76% of the experiments and never degrades it in any experiment. However, the impact of feature selection varies depending on the underlying machine learning model, as different models vary in their sensitivity to noisy, redundant, and correlated features in the data. We observe zero to negligible effect in the case of Random Forest models.

RQ2: Are wrapper feature selection methods better than filters? Wrapper feature selection methods are consistently either better than or similar to CFS. Applying wrapper feature selection eliminates noisy and redundant features and keeps only the features relevant to that specific project, increasing the prediction accuracy of the machine learning model.

RQ3: Do different methods choose different feature subsets? We realize there is no optimal feature subset that works for every project, and that feature selection should be applied separately for each new project. We find that not only do different methods choose different feature subsets on the same project, but the same feature selection method also chooses different feature subsets for different projects. Interestingly, however, all selected feature subsets include a mix of change and source code metrics.

In summary, this paper makes the following contributions:
1) A detailed comparison between filter and wrapper feature selection methods in the context of bug prediction as a regression problem.
2) A detailed analysis of the impact of feature selection on five machine learning models widely used in the literature.
3) A comparison of the feature subsets selected by the different methods.

The rest of this paper is organized as follows. In Section II, we give a technical background on feature selection in machine learning. We motivate our work in Section III, and show how we are the first to study wrapper feature selection methods when predicting the number of bugs. In Section IV, we explain the experimental setup, discuss the results of our empirical study, and elaborate on the threats to the validity of the results. Finally, we discuss the related work in Section V, showing how our findings agree with or differ from the state of the art, and conclude this paper in Section VI.

¹ Feature selection is also known as variable selection, attribute selection, and variable subset selection.
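To make this setting concrete, the following minimal sketch shows one way to compare the five prediction models, used as regressors, with and without wrapper feature selection. It is an illustration only and not the authors' implementation: it assumes scikit-learn (the paper does not prescribe a toolkit in this excerpt), uses synthetic data as a stand-in for the bug dataset, and picks an arbitrary target subset size.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the bug dataset: rows play the role of classes,
# columns the role of software metrics, y the number of bugs.
X, y = make_regression(n_samples=200, n_features=17, noise=10.0, random_state=0)

models = {
    "K-Nearest Neighbour": KNeighborsRegressor(),
    "Linear Regression": LinearRegression(),
    "Multilayer Perceptron": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Support Vector Machine": SVR(),
}

for name, model in models.items():
    # Baseline: the model is given all features.
    baseline = make_pipeline(StandardScaler(), model)
    # Wrapper: greedy forward search that keeps the features which most improve
    # the model's own cross-validated error (the subset size of 8 is arbitrary).
    wrapper = make_pipeline(
        StandardScaler(),
        SequentialFeatureSelector(model, n_features_to_select=8, direction="forward",
                                  scoring="neg_mean_squared_error", cv=5),
        model,
    )
    for label, estimator in (("all features", baseline), ("wrapper", wrapper)):
        mse = -cross_val_score(estimator, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        print(f"{name:24s} {label:12s} MSE = {mse:10.2f}")

The wrapper re-trains the model while searching for a subset; a filter such as CFS, by contrast, scores subsets independently of the model, as discussed in Section II.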
TABLE I
THE CK METRICS SUITE [13] AND OTHER OBJECT-ORIENTED METRICS INCLUDED AS THE SOURCE CODE METRICS IN THE BUG PREDICTION DATASET [14]

Metric Name   Description
CBO           Coupling Between Objects
DIT           Depth of Inheritance Tree
FanIn         Number of classes that reference the class
FanOut        Number of classes referenced by the class
LCOM          Lack of Cohesion in Methods
NOC           Number Of Children
NOA           Number Of Attributes in the class
NOIA          Number Of Inherited Attributes in the class
LOC           Number of lines of code
NOM           Number Of Methods
NOIM          Number of Inherited Methods
NOPRA         Number Of Private Attributes
NOPRM         Number Of Private Methods
NOPA          Number Of Public Attributes
NOPM          Number Of Public Methods
RFC           Response For Class
WMC           Weighted Method Count

II. TECHNICAL BACKGROUND

Trained on bug data and software metrics, a bug predictor is a machine learning model that predicts defective software entities using software metrics. The software metrics are called the independent variables, or features. The prediction itself is called the response variable, or dependent variable. If the response variable is the absence or presence of bugs, then bug prediction becomes a classification problem and the machine learning model is called a classifier. If the response variable is the number of bugs in a software entity, then bug prediction is a regression problem and the model is called a regressor.

Feature selection is an essential part of any machine learning process. It aims at removing irrelevant and correlated features in order to achieve better accuracy, build faster models with stable performance, and reduce the cost of collecting features for later models. Model error is known to be increased by both noise [10] and feature multicollinearity [11]. Different feature selection algorithms eliminate this problem in different ways. For instance, correlation-based filter selection chooses features with high correlation with the response variable and low correlation with each other.
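As a concrete illustration of that criterion, the sketch below scores a candidate subset with a simplified CFS-style merit (average feature-response correlation rewarded, average feature-feature correlation penalised) and grows the subset greedily. It is a sketch only, not the exact CFS implementation: it uses Pearson correlation as a stand-in for the correlation measure, and the function names are ours.

import numpy as np

def cfs_merit(X, y, subset):
    """X: (n_samples, n_features) metric matrix, y: number of bugs,
    subset: list of column indices to evaluate."""
    k = len(subset)
    if k == 0:
        return 0.0
    # Average absolute feature-response correlation.
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    # Average absolute feature-feature correlation.
    if k == 1:
        r_ff = 0.0
    else:
        pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1]) for a, b in pairs])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_select(X, y):
    # Greedy forward search over subsets, guided only by the merit score.
    remaining, chosen, best = list(range(X.shape[1])), [], 0.0
    while remaining:
        merit, j = max((cfs_merit(X, y, chosen + [j]), j) for j in remaining)
        if merit <= best:          # stop when no candidate improves the merit
            break
        best = merit
        chosen.append(j)
        remaining.remove(j)
    return chosen

Note that no prediction model appears in this loop; the score depends only on the data, which is what makes CFS a filter method.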
Also, when we build a prediction model, we often favour less complex models over more complex ones because of the well-known relationship between model complexity and model error, shown in Figure 1. Feature selection algorithms try to reduce model complexity down to the sweet spot where the total error is minimal. This point is called the optimum model complexity. Model error is computed via the mean squared error (MSE) as

MSE = \frac{1}{N} \sum_{i=1}^{N} (Y_i - \hat{Y}_i)^2,

where \hat{Y}_i is the predicted value and Y_i is the actual value. The MSE can be decomposed into model bias and model variance as

MSE = \mathrm{Bias}^2 + \mathrm{Variance} + \mathrm{Irreducible\ Error} [12].

Bias is the difference between the average prediction of the model and the true unknown value we are trying to predict. Variance is the variability of the model's prediction for a given data point. As can be seen in Figure 1, reducing model complexity increases the bias but decreases the variance. Feature selection sacrifices a little bit of bias in order to reduce the variance and, consequently, the overall MSE.

Fig. 1. The relationship between model complexity and model error [9]. (The figure plots the bias², variance, and total error curves against model complexity; the optimum model complexity lies at the minimum of the total error curve.)

Every feature selection method consists of two parts: a search strategy and a scoring function. The search strategy guides the addition or removal of features to or from the subset at hand, and the scoring function evaluates the performance of that subset. This process is repeated until no further improvement is observed.

III. MOTIVATION

In this section, we briefly discuss the importance of predicting the number of bugs in software entities. Then, we show how feature selection does not seem to be considered as important as it should be in the field of bug prediction. For instance, only 25 out of the 64 techniques examined in a recent study apply feature selection before training a machine learning model [26].

TABLE II
THE CHANGE METRICS PROPOSED BY MOSER ET AL. [15] INCLUDED IN THE BUG PREDICTION DATASET [14]
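The bias-variance decomposition given in Section II can also be illustrated numerically. The following sketch is our own illustration, not part of the study: it assumes scikit-learn, uses synthetic data with a known true function and noise level, and estimates the bias², variance, and irreducible-error terms for a K-Nearest Neighbour regressor, where the number of neighbours plays the role of the complexity knob.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
noise_sd = 1.0                       # irreducible error = noise_sd**2

def true_f(x):                       # known ground-truth function
    return np.sin(3 * x)

x_test = np.linspace(0.0, 2.0, 50)
n_neighbors = 3                      # fewer neighbours = a more complex model

predictions = []
for _ in range(500):                 # many independent training sets
    x_train = rng.uniform(0.0, 2.0, 80)
    y_train = true_f(x_train) + rng.normal(0.0, noise_sd, x_train.size)
    model = KNeighborsRegressor(n_neighbors=n_neighbors)
    model.fit(x_train.reshape(-1, 1), y_train)
    predictions.append(model.predict(x_test.reshape(-1, 1)))
predictions = np.array(predictions)  # shape: (runs, test points)

bias2 = np.mean((predictions.mean(axis=0) - true_f(x_test)) ** 2)
variance = np.mean(predictions.var(axis=0))
print(f"bias^2 = {bias2:.3f}, variance = {variance:.3f}, "
      f"irreducible = {noise_sd**2:.3f}, expected MSE ~ {bias2 + variance + noise_sd**2:.3f}")

Increasing n_neighbors makes the model simpler: the estimated bias² grows while the variance shrinks, which is the trade-off depicted in Figure 1 and the one feature selection exploits when it removes features.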