
How does Context Affect the Distribution of Software Maintainability Metrics?

Feng Zhang∗, Audris Mockus†, Ying Zou‡, Foutse Khomh§, and Ahmed E. Hassan∗
∗ School of Computing, Queen's University, Canada
† Department of Software, Avaya Labs Research, Basking Ridge, NJ 07920, USA
‡ Department of Electrical and Computer Engineering, Queen's University, Canada
§ SWAT, École Polytechnique de Montréal, Canada
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract—Software metrics have many uses, e.g., defect prediction, effort estimation, and benchmarking an organization against peers and industry standards. In all these cases, metrics may depend on the context, such as the programming language. Here we aim to investigate whether the distributions of commonly used metrics do, in fact, vary with six context factors: application domain, programming language, age, lifespan, the number of changes, and the number of downloads. For this preliminary study we select 320 nontrivial software systems from SourceForge. These software systems are randomly sampled from nine popular application domains of SourceForge. We calculate 39 metrics commonly used to assess software maintainability for each software system, and use the Kruskal-Wallis test and the Mann-Whitney U test to determine if there are significant differences among the distributions with respect to each of the six context factors. We use Cliff's delta to measure the magnitude of the differences, and find that all six context factors affect the distribution of 20 metrics, and that the programming language factor alone affects 35 metrics. We also briefly discuss how each context factor may affect the distribution of metric values. We expect our results to help software benchmarking and other software engineering methods that rely on these commonly used metrics to be tailored to a particular context.

Index Terms—context; context factor; metrics; static metrics; software maintainability; benchmark; sampling; mining software repositories; large scale.

I. INTRODUCTION

The history of software metrics predates software engineering. The first such software metric is the number of lines of code (LOC), which was used in the mid-1960s to assess the productivity of programmers [1]. Since then, a large number of metrics have been proposed [2], [3], [4], [5] and extensively used in software engineering activities, e.g., defect prediction, effort estimation, and software benchmarks. For example, software organizations use benchmarks as management tools to assess the quality of their software products in comparison to industry standards (i.e., benchmarks). Since software systems are developed in different environments, for various purposes, and by teams with diverse organizational cultures, we believe that context factors, such as application domain, programming language, and the number of downloads, should be taken into account when using metrics in software engineering activities (e.g., [6]). It can be problematic [7] to apply metric-based benchmarks derived from one context to software systems in a different context, e.g., applying benchmarks derived from small software systems to assess the maintainability of large software systems. The COCOMO II1 model supports a number of attribute settings (e.g., the complexity of the product) to fine-tune the estimation of cost and system size (i.e., source lines of code). However, to the best of our knowledge, no study provides empirical evidence on how contexts affect such metric-based models. The context is overlooked in most existing approaches for building metric-based benchmarks [8], [9], [10], [11], [12], [13], [14].

This preliminary study aims to understand if the distributions of metrics do, in fact, vary with contexts. Considering the availability and understandability of context factors and their potential impact on the distribution of metric values, we decide to investigate seven context factors: application domain, programming language, age, lifespan, system size, the number of changes, and the number of downloads. Since system size strongly correlates with the number of changes, we examine only the number of changes of the two factors.

In this study, we select 320 nontrivial software systems from SourceForge2. These software systems are randomly sampled from nine popular application domains. For each software system, we calculate 39 metrics commonly used to assess software maintainability. To better understand the impact on different aspects of software maintainability, we further classify the 39 metrics into six categories (i.e., complexity, coupling, cohesion, abstraction, encapsulation, and documentation) based on the work of Zou and Kontogiannis [15]. We investigate the following two research questions:

RQ1: What context factors impact the distribution of the values of software maintainability metrics?
We find that all six context factors affect the distribution of the values of 51% of the metrics (i.e., 20 out of 39). The most influential context factor is the programming language, since it impacts the distribution of the values of 90% of the metrics (i.e., 35 out of 39).

RQ2: What guidelines can we provide for benchmarking software maintainability metrics?
We suggest grouping all software systems into 13 distinct groups using three context factors: application domain, programming language, and the number of changes; consequently, we obtain 13 benchmarks.

The remainder of this paper is organized as follows. In Section II, we summarize the related work. We describe the experimental setup of our study in Section III and report our case study results in Section IV. Threats to the validity of our work are discussed in Section V. We conclude and provide insights for future work in Section VI.

1 http://sunset.usc.edu/csse/research/COCOMOII
2 http://www.sourceforge.net

II. RELATED WORK

In this section, we review previous attempts to build metric-based benchmarks. Such attempts are usually made up of a set of metrics with proposed thresholds and ranges.

A. Thresholds and ranges of metric values

Deriving appropriate thresholds and ranges of metrics is important to interpret software metrics [17]. For instance, McCabe [2] proposes a widely used complexity metric to measure software maintainability and testability, and further interprets the value of his metric as follows: sub-functions with a metric value between 3 and 7 are well structured; sub-functions with a metric value beyond 10 are unmaintainable and untestable [2]. Lorenz and Kidd [3] propose thresholds and ranges for many object-oriented metrics, which are interpretable in their context. However, direct application of these thresholds and ranges without taking the contexts of systems into account might be problematic. Erni and Lewerentz [4] propose to consider the mean and standard deviation, based on the assumption that the metrics follow normal distributions. Yet many metrics follow power-law or log-normal distributions [18], [19]. Thus many researchers propose to derive thresholds and ranges based on statistical properties of metrics. For example, Benlarbi et al. [8] apply a linear regression analysis. Shatnawi [9] uses a logistic regression analysis. Yoon et al. [10] use a k-means clustering algorithm. Herbold et al. [12] use machine learning techniques. Sánchez-González et al. [20] ... affect the effective values of various metrics [7]. As complements to these studies, our work investigates how contexts impact the distribution of software metric values. We further provide guidelines to split software systems using contexts before applying these approaches (e.g., [9], [10], [11], [12]) to derive thresholds and ranges of metric values.

B. Context factors

Contexts of software systems are considered in the work of Ferreira et al. [14]. They propose to identify thresholds of six object-oriented software metrics using three context factors: application domain, software type, and system size (in terms of the number of classes). However, they directly split all software systems using the context factors, without examining whether the context factors affect the distribution of metric values, thus resulting in a high ratio of duplicated thresholds. To reduce duplication and maximize the samples of measured software systems, a split is necessary only when a context factor impacts the distribution of metric values. Different from the study of Ferreira et al. [14], we propose to split all software systems based on statistical significance and the corresponding effect size. We study four additional context factors and 33 more metrics than their study.

Open source software systems are characterized by Capiluppi et al. [22] using 12 context factors: age, application domain, programming language, size (in terms of physical occupation), the number of developers, the number of users, modularity level, documentation level, popularity, status, success of project, and vitality. Considering that not all software systems provide information for these factors, we decide to investigate five commonly available context factors: application domain, programming language, age, system size (which we redefine as the total number of lines), and the number of downloads (measured using average monthly downloads). Since software metrics are also affected by software evolution [23], we study two additional context factors: lifespan and the number of changes.

To the best of our knowledge, we are the first to propose a detailed investigation of context factors when building metric-based
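As a concrete aside, the statistical screening the study describes (a Kruskal-Wallis omnibus test across all levels of a context factor, Mann-Whitney U tests between pairs of levels, and Cliff's delta to gauge magnitude) can be sketched as below. The data layout, the function names, and the 0.147 cutoff (a conventional boundary for a negligible Cliff's delta) are illustrative assumptions, not values or code from the study:

```python
# Sketch: does a context factor affect a metric's distribution?
# Assumed data layout: per-system metric values grouped by factor level.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y), ranging from -1 to 1."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

def factor_affects_metric(groups, alpha=0.05, min_delta=0.147):
    """groups: dict mapping a factor level (e.g. a language) to the
    list of metric values observed at that level."""
    samples = [vals for vals in groups.values() if len(vals) > 1]
    if len(samples) < 2:
        return False
    # Omnibus test: do the distributions differ across levels at all?
    _, p = kruskal(*samples)
    if p >= alpha:
        return False
    # Pairwise follow-up: require at least one pair that differs
    # significantly AND with a non-negligible effect size.
    for a, b in combinations(samples, 2):
        _, p_ab = mannwhitneyu(a, b, alternative="two-sided")
        if p_ab < alpha and abs(cliffs_delta(a, b)) >= min_delta:
            return True
    return False

# Hypothetical example: a size metric for systems in two languages.
by_language = {
    "java": [120, 95, 143, 210, 88, 132, 175],
    "c":    [540, 610, 330, 720, 455, 380, 505],
}
print(factor_affects_metric(by_language))
```

The two-stage shape matters: the omnibus test controls the initial screening, while the effect-size check filters out differences that are statistically significant but practically negligible, which is the rationale the paper gives for splitting benchmarks only when a factor genuinely matters.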
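The point about distributional assumptions is easy to demonstrate. A mean-plus-standard-deviation cutoff in the style of Erni and Lewerentz [4] carries its intended interpretation (roughly the 84th percentile for one standard deviation) only when the metric is normally distributed; on a log-normal metric the same cutoff flags a different, smaller fraction of values. The following is a synthetic sketch with invented numbers, not data from the study:

```python
# Sketch: a mean + stddev threshold assumes normality; on a log-normal
# metric it no longer sits at the ~84th percentile it was meant to mark.
import random
import statistics

random.seed(0)
# Synthetic "metric" values drawn from a log-normal distribution.
values = [random.lognormvariate(mu=2.0, sigma=1.0) for _ in range(10_000)]

mean = statistics.mean(values)
sd = statistics.pstdev(values)
normal_threshold = mean + sd  # ~84th percentile IF the data were normal

# Fraction actually flagged: noticeably below the ~16% a normal-theory
# reading of the threshold would predict, because of the heavy right tail.
flagged = sum(v > normal_threshold for v in values) / len(values)

# A distribution-free alternative: take the empirical 84th percentile.
quantile_threshold = sorted(values)[int(0.84 * len(values))]
print(flagged, normal_threshold, quantile_threshold)
```

The gap between the two thresholds is one way to see why quantile- or significance-based derivations are preferred over normal-theory ones for metrics with power-law or log-normal shapes.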