Language-Independent Volume Measurement


Edwin Ouwehand
[email protected]
Summer 2018, 52 pages

Supervisor: Ana Oprescu
Organisation supervisor: Lodewijk Bergmans
Second reader: Ana Varbanescu
Host organisation: Software Improvement Group, http://www.sig.eu

Universiteit van Amsterdam
Faculteit der Natuurwetenschappen, Wiskunde en Informatica
Master Software Engineering
http://www.software-engineering-amsterdam.nl

Contents

Abstract
1 Introduction
  1.1 Problem statement
  1.2 Research Questions
  1.3 Software Improvement Group
  1.4 Outline
2 Background
  2.1 Software Sizing
    2.1.1 Function Points
    2.1.2 Effort estimation
  2.2 Expressiveness of programming languages
  2.3 Kolmogorov Complexity
    2.3.1 Incomputability and Estimation
    2.3.2 Applications
  2.4 Data compression
    2.4.1 Compression Ratio
    2.4.2 Lossless & Lossy
    2.4.3 Archives
3 Research Method
  3.1 Methodology
  3.2 Data
  3.3 Counting Lines of Code
  3.4 The expressiveness spectrum
4 Measuring information content
  4.1 Compressor and Algorithm selection
    4.1.1 Comparing algorithms
  4.2 Archive selection
    4.2.1 Overhead
    4.2.2 Comparing archives
  4.3 Project size
  4.4 Discussion
  4.5 Conclusion
5 Expressiveness and information content
  5.1 Determining language expressiveness levels
  5.2 Validation
  5.3 Normalising LOC counts
  5.4 Discussion
  5.5 Conclusion
6 Quality and relative verbosity
  6.1 Experiment
  6.2 Discussion
  6.3 Conclusion
7 Related Work
  7.1 Normalised Compression Distance
  7.2 Calculating software productivity with compression
  7.3 Determining software complexity
8 Conclusion
  8.1 Future work
Acknowledgements
Bibliography
Appendices
A Cut-off Data
B Language Distributions

Abstract

The software size of a system is typically measured in lines of code. Lines of code are easy to obtain, but the number of lines needed to implement a given piece of functionality is strongly influenced by the programming style and programming languages used. A new approach is to measure the 'information content' of source code as an estimate of the size of a software system. With this approach we successfully determined the expressiveness of various programming languages, although we were not able to verify the results definitively. As a practical application, we propose a new way to normalise lines-of-code counts. Finally, we found no relation between various quality metrics and a verbose style.
Chapter 1
Introduction

Determining the size of a software system is done for various reasons. It is typically used to predict the amount of effort that will be required to develop a system, as well as to estimate programming productivity or maintainability once the system has been developed. As explained by Galorath [GE06], estimates are only as good as the size projections they are based on.

In the physical world, size is a measure of volume or mass. In the software world, though, size is not as clearly defined. Some metrics include counting characters, tokens, lines of code (LOC), classes and function points. A well-established way to determine the size of a project is to count the lines of code that have been produced. Studies suggest [HPM03] that LOC often correlates with other measures of effort or functionality, such as function points.

The programming language of choice and the style of the programmer play a large role in the relation between the size in LOC and the actual effort required to create the system. As a result, LOC counts of systems written in different languages are not comparable. Nevertheless, the number of lines of code is generally accepted as a sensible and practical measure of the size of a system, because it can be measured accurately and automatically, and it is easily comprehensible. The number of lines required to express a certain amount of functionality in a language is an inherent property of that language and is typically referred to as the language gearing factor, language level or expressiveness.

In this study, we are interested in determining a size measure that helps us to compare source code with regard to creation effort, namely the intellectual effort that goes into writing a number of lines of source code. We believe that the relevant size of a software system is proportional to the amount of information that is encapsulated in the code base.
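Because "lines of code" is not a single well-defined quantity, it helps to pin down one counting convention. The sketch below counts non-blank physical lines, one common convention; the exact rules used in this thesis are a separate matter (see Section 3.3), so this is an illustration, not the thesis's counter.

```python
def count_loc(source: str) -> int:
    """Count non-blank physical lines (one common LOC convention)."""
    return sum(1 for line in source.splitlines() if line.strip())


# The two-line Python 'Hello World' from Section 1.1:
python_hello = '#!/usr/bin/env python\n\nprint "Hello, world!"\n'
print(count_loc(python_hello))  # -> 2
```

Under a stricter convention that also skips comment-only lines, the same snippet would count as 1, which is exactly why LOC figures from different tools are hard to compare.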
This follows the idea that the comprehension of program functionality is the comprehension of information and relationships [BH91], which in turn implies that the development process consists of translating knowledge about processes, activities, data structures, manipulations and calculations into source code. Certain programming languages allow the programmer to express this very concisely, whereas others do not. We argue that the bulk of the effort in developing software is spent thinking and reasoning about these concepts, and only a small portion is spent translating the result into source code. Therefore, our approach is based on the idea that the information content of a system is a reflection of its functional size.

By applying this way of measuring the size of source code, we can derive a table containing expressiveness levels of various languages, a language level table. This would allow future LOC measurements of projects to be normalised, which we expect to be a better indication of the intellectual effort that went into creating a project than traditional methods.

Lastly, we consider the qualitative aspect of unnecessarily verbose or duplicated code (a relatively high line count for low functionality). Inexperienced developers often resort to code duplication, which is more bug-prone and costly to maintain, but which results in a higher LOC count. Therefore, we also investigate the relation between a system's verbosity relative to systems in the same language and its other qualitative attributes.

1.1 Problem statement

Consider two applications that provide the exact same functionality (screens, reports, databases). One of the applications is written in Java and the other is written in Python. The number of lines required for the Java implementation is expected to be higher, because Java is a more verbose language. We can observe this effect even at the smallest level.
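The idea of measuring information content can be made concrete with compression: the compressed size of a text is a computable upper bound on its Kolmogorov complexity, so redundant, boilerplate-heavy code contributes little to the estimate. As a minimal sketch (the choice of compressor is its own question, addressed in Chapter 4; DEFLATE via zlib is assumed here purely for illustration):

```python
import zlib


def information_content(source: str) -> int:
    """Estimate information content as compressed size in bytes.

    Compressed size upper-bounds Kolmogorov complexity: highly
    redundant source compresses well and so adds little to the
    estimate, regardless of its raw line count.
    """
    return len(zlib.compress(source.encode("utf-8"), 9))


verbose = "x = 1\n" * 100                              # 100 identical lines
concise = "".join(f"x{i} = {i}\n" for i in range(100))  # 100 distinct lines
print(information_content(verbose) < information_content(concise))  # -> True
```

Both snippets are 100 LOC, yet the repetitive one carries far less information, which is precisely the distinction a raw line count cannot make.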
For example, in listings 1.1 and 1.2 we see 'Hello World' coded in Java (5 LOC) and Python (2 LOC) respectively. Similarly, the LOC counts of two functionally identical Java applications could differ vastly based on code conventions and the stylistic choices of the programmer. An experienced developer may be able to implement the same functionality with far less code.

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, World");
        }
    }

Listing 1.1: Hello World coded in Java

    #!/usr/bin/env python

    print "Hello, world!"

Listing 1.2: Hello World coded in Python

These limitations have led to the inception of backfiring, which is the conversion of lines of code to function points [GE06, Jon95] based on historical data. Examples of this include the SPR Programming Languages Table and the QSM Function Points Languages Table, which describe backfiring ratios for various languages. With these ratios, LOC counts can essentially be normalised for the language used. Sadly, the benchmarking process behind these tables is far from ideal. For one, they are based on function points, which are counted from documentation rather than from the actual system. Though the counting process is standardised [Sse12] and can be automated, there often remains a mismatch between the actual software and its documentation. Secondly, software is often developed in more than one language: a variety of languages may be employed depending on the complexity and requirements. This means that functionality cannot be directly attributed to a particular language, as it is interwoven throughout the system. We have reason to believe that these tables are indeed (to an extent) flawed. We can observe large differences between
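Backfiring itself is a simple division: a gearing factor gives the typical LOC per function point for a language, so dividing a LOC count by it yields a language-normalised size. The factors below are illustrative placeholders, not values taken from the SPR or QSM tables:

```python
# Illustrative gearing factors (LOC per function point). These numbers
# are placeholders for this sketch; real backfiring tables list
# empirically derived, language-specific ratios.
GEARING = {"Java": 53.0, "Python": 24.0}


def backfire(loc: int, language: str) -> float:
    """Estimate function points from a LOC count via backfiring."""
    return loc / GEARING[language]


# Two hypothetical implementations of the same functionality:
print(backfire(5300, "Java"))    # -> 100.0
print(backfire(2400, "Python"))  # -> 100.0  (normalised sizes agree)
```

The example shows the intent: a 5300-line Java system and a 2400-line Python system backfire to the same functional size, even though their raw LOC counts differ by more than a factor of two.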