Bachelor Degree Project: Do Software Code Smell Checkers Smell Themselves?

Do Software Code Smell Checkers Smell Themselves? A Self Reflection

Authors: Amelie Löwe, Stefanos Bampovits
Supervisor: Francis Palma
Semester: VT/HT 2020
Subject: Computer Science

Abstract

Code smells are defined as poor implementation and coding practices that, as a result, decrease the overall quality of the source code. A number of code smell detection tools are available to automatically detect such poor implementation choices, i.e., code smells. The detection of code smells is essential in order to improve the quality of the source code. This report aims to evaluate the accuracy and quality of seven open-source code smell detection tools, with the purpose of establishing their level of trustworthiness. To assess the trustworthiness of a tool, we use a controlled experiment in which several versions of each tool are scrutinized using the most recent version of the same tool. In particular, we want to verify to what extent the code smell detection tools that reveal code smells in other systems contain smells themselves. We further study the evolution of code smells in the tools in terms of the number of code smells, the types of code smells, and the code smell density.

Keywords: Code smells, Automatic detection, Code smell density, Refactoring, Best practices.

Preface

We would like to thank our supervisor Francis Palma for the continuous support, helpful advice, and valuable guidance throughout this bachelor thesis process. Furthermore, we would also like to acknowledge all of the cups of coffee we consumed in the past couple of weeks. Without your assistance, this report would not have been possible.

Contents

1 Introduction
  1.1 Background
  1.2 Related Work
  1.3 Code Smell Definitions
  1.4 Detection Tools Selection
  1.5 Detection Tools
    1.5.1 Current State
    1.5.2 Utility Tools
  1.6 Problem Formulation
  1.7 Terminology and Definitions
  1.8 Objectives
  1.9 Research Questions
  1.10 Motivation
  1.11 Scope/Limitation
  1.12 Target group
  1.13 Outline
2 Method
  2.1 Method Description
  2.2 Project Development
    2.2.1 Selection Process
    2.2.2 Installation Process
    2.2.3 Execution of Tools
    2.2.4 Collecting Data
    2.2.5 Calculating Results
    2.2.6 Visualization
  2.3 Reliability and Validity
  2.4 Ethical Considerations
3 Results
  3.1 Unique Code Smells
    3.1.1 FindBugs
    3.1.2 Checkstyle
    3.1.3 JDeodorant
    3.1.4 PMD
    3.1.5 SonarQube
    3.1.6 SpotBugs
    3.1.7 Summary of Code Smell Ratios
  3.2 Code Smell Evolution
    3.2.1 Quantity of Code Smells
    3.2.2 Summary of Quantity of Code Smells
    3.2.3 Types of Code Smells
    3.2.4 Summary of Types of Code Smells
  3.3 Detection Accuracy
    3.3.1 FindBugs
    3.3.2 Checkstyle
    3.3.3 JDeodorant
    3.3.4 PMD
    3.3.5 SonarQube
    3.3.6 SpotBugs
    3.3.7 UCDetector
    3.3.8 Summary of the Detection Accuracy
  3.4 Code Smell Density
    3.4.1 FindBugs
    3.4.2 Checkstyle
    3.4.3 JDeodorant
    3.4.4 PMD
    3.4.5 SonarQube
    3.4.6 SpotBugs
    3.4.7 UCDetector
    3.4.8 Summary of Code Smell Density
4 Analysis
  4.1 Unique Code Smells
  4.2 Code Smell Evolution
  4.3 Detection Tools Accuracy
  4.4 Code Smell Density
5 Discussion
6 Conclusion
  6.1 Future Work
References
A Appendix 1

1 Introduction

Automatic code smell detection tools have become increasingly popular within software development in the last decade [1]. Code smells decrease the overall maintainability and readability of a system, increase development costs, and make the refactoring process resource-demanding. In addition, manual detection and refactoring are time-consuming and labor-intensive [2]. The definition of a code smell is subjective; a code smell is not considered to be a bug, but rather an indication of poor design that can, if not refactored and improved, lead to more severe problems in future development. Therefore, even if a tool contains code smells in its own source code, it does not necessarily contain bugs. Automatic and semi-automatic detection tools claim to solve the aforementioned issues by assessing large projects with minimal effort required by the developer [3]. However, do the code smells detected by these tools exist in their own code bases? By examining the tools and revealing the number of smells they contain, we can determine whether, and to what extent, these tools hold up to their own scrutiny in terms of credibility. This paper investigates how well seven open-source code smell detection tools conform to common coding practices. The idea is to execute the tools on their own source code in order to see whether they themselves contain code smells and, if so, to what degree. The quality and usefulness of the tools are to be determined by the users and developers, as per their own needs.

1.1 Background

The term code smell was first introduced in 1999 by Fowler et al. [4]. Code smells are generally used to diagnose symptoms and characteristics in the source code which tend to indicate poor quality of the software [5]. The exact definition of a code smell is subjective and depends on factors such as the programming language, the development methodology, and the developer. Martin describes code smells as a value system for software craftsmanship [6].
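To make the notion of a code smell concrete, the following minimal Java sketch is purely illustrative; it is not taken from this thesis or from the source code of any of the studied tools, and the class names are hypothetical. It shows a method exhibiting Feature Envy, one of the smells that recurs in the related work discussed below, together with a refactored version; detectors such as JDeodorant are designed to report exactly this kind of structure.

// Illustrative example only: a method that exhibits the "Feature Envy" smell,
// i.e., it is far more interested in another class's data than in its own,
// followed by the refactoring ("Move Method") that removes it.
class Order {
    private final double unitPrice;
    private final int quantity;
    private final double discountRate;

    Order(double unitPrice, int quantity, double discountRate) {
        this.unitPrice = unitPrice;
        this.quantity = quantity;
        this.discountRate = discountRate;
    }

    double getUnitPrice()    { return unitPrice; }
    int    getQuantity()     { return quantity; }
    double getDiscountRate() { return discountRate; }

    // After refactoring: the computation lives next to the data it uses.
    double total() {
        return unitPrice * quantity * (1.0 - discountRate);
    }
}

class InvoicePrinter {
    // Smelly version: this method only manipulates Order's data, a typical
    // Feature Envy symptom that smell detectors aim to report.
    double smellyTotal(Order order) {
        return order.getUnitPrice() * order.getQuantity()
                * (1.0 - order.getDiscountRate());
    }

    // Refactored version: delegate to Order, removing the envy.
    double total(Order order) {
        return order.total();
    }
}

The smelly version still compiles and behaves correctly, which illustrates why a code smell is treated as a design symptom rather than a bug: only the placement of the logic, not its result, is at fault.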
A code smell refers to a symptom that indicates poor design or a violation of best practices; however, it is not considered to be as severe as a standard bug [3]. By exposing inadequate design structures and violations of basic design principles, these tools give the developer a straightforward approach to identifying problem areas and improving code quality [7]. Not improving code quality may lead to erroneous code and larger bugs later in a project, and resolving such issues may then require large refactoring iterations or even a reconstruction of the entire code base. Code smell detection tools, also known as software analysis tools, are predominantly used to detect programming anomalies and bad practices. Different tools use varied definitions of smells; as a general rule, however, code smell detection tools serve to increase a developer's awareness of the internal quality of the program under development. The detection of code smells can be done manually by a developer or through the use of code smell detection tools such as JDeodorant, PMD, SonarQube, and others. The manual detection of code smells is labor-intensive and error-prone. Thus, code smell detection tools can be used for automatic or semi-automatic detection of code smells and assist the developer in the search for 'smelly' entities [1].

1.2 Related Work

This report was inspired by a number of research articles. The first article, by Paiva et al. [8], describes how the detection of code smells is a challenge for developers, and evaluates and compares three code smell detection tools: inFusion, JDeodorant, and PMD. Unlike our project, the authors executed the tools on a target system called MobileMedia with the intent to identify three main code smells: God Class, God Method, and Feature Envy. They measured the evolution, accuracy, and agreement between the three detection tools when run on the same target system detecting the three aforementioned code smells. Another study that inspired our report is by Hamid et al. [9], in which the authors executed two detection tools, JDeodorant and InCode, on a target system called Xtreme Media Player, a cross-platform media player. The target code smells for analysis were Feature Envy and God Class. This article focused on executing the tools on different versions of the target system. In addition, the authors used metrics to determine lines of code, number of packages, number of classes, etc. for each version of the target system, whereas the previous article did