Software Size Measurement: A Framework for Counting Source Statements


Software Size Measurement: A Framework for Counting Source Statements
Technical Report CMU/SEI-92-TR-020, ESC-TR-92-020
September 1992

Robert E. Park, with the Size Subgroup of the Software Metrics Definition Working Group and the Software Process Measurement Project Team

Unlimited distribution subject to the copyright.

Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213

This report was prepared for the SEI Joint Program Office, HQ ESC/AXS, 5 Eglin Street, Hanscom AFB, MA 01731-2116. The ideas and findings in this report should not be construed as an official DoD position. It is published in the interest of scientific and technical information exchange.

FOR THE COMMANDER
(signature on file)
Thomas R. Miller, Lt Col, USAF
SEI Joint Program Office

This work is sponsored by the U.S. Department of Defense. Copyright © 1996 by Carnegie Mellon University. Permission to reproduce this document and to prepare derivative works from this document for internal use is granted, provided the copyright and "No Warranty" statements are included with all reproductions and derivative works. Requests for permission to reproduce this document or to prepare derivative works of this document for external and commercial use should be addressed to the SEI Licensing Agent.

NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.

This work was created in the performance of Federal Government Contract Number F19628-95-C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. The Government of the United States has a royalty-free government-purpose license to use, duplicate, or disclose the work, in whole or in part and in any manner, and to have or permit others to do so, for government purposes pursuant to the copyright license under the clause at 52.227-7013.

This document is available through Research Access, Inc., 800 Vinial Street, Pittsburgh, PA 15212. Phone: 1-800-685-6510. Fax: (412) 321-2994. RAI also maintains a World Wide Web home page at http://www.rai.com.

Copies of this document are available through the National Technical Information Service (NTIS). For information on ordering, please contact NTIS directly: National Technical Information Service, U.S. Department of Commerce, Springfield, VA 22161. Phone: (703) 487-4600.

This document is also available through the Defense Technical Information Center (DTIC).
DTIC provides access to and transfer of scientific and technical information for DoD personnel, DoD contractors and potential contractors, and other U.S. Government agency personnel and their contractors. To obtain a copy, please contact DTIC directly: Defense Technical Information Center, 8725 John J. Kingman Road, Suite 0944, Ft. Belvoir, VA 22060-6218. Phone: (703) 767-8222 or 1-800-225-3842.

Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder.

Table of Contents

List of Figures
Preface
Acknowledgments
1. Introduction
   1.1. Scope
   1.2. Objective and Audience
   1.3. Relationship to Other Work
   1.4. Endorsement of Specific Measures
2. A Framework for Size Definition
   2.1. Definition Checklists
   2.2. Supplemental Rules Forms
   2.3. Recording and Reporting Forms
3. A Checklist for Defining Source Statement Counts
   3.1. Describing Statement Counts
   3.2. Modifying the Checklist to Meet Special Needs
   3.3. Assigning Rules for Counting and Classifying Statement Types
4. Size Attributes and Values: Issues to Consider When Creating Definitions
   4.1. Statement Type
      4.1.1. Executable statements
      4.1.2. Declarations
      4.1.3. Compiler directives
      4.1.4. Comments
      4.1.5. Blank lines
      4.1.6. Other statement types
      4.1.7. Order of precedence
      4.1.8. General guidance
   4.2. How Produced
      4.2.1. Programmed
      4.2.2. Generated with source code generators
      4.2.3. Converted with automated translators
      4.2.5. Modified
      4.2.6. Removed
   4.3. Origin
      4.3.1. New work: no prior existence
      4.3.2. Prior work
   4.4. Usage
      4.4.1. In or as part of the primary product
      4.4.2. External to or in support of the primary product
   4.5. Delivery
      4.5.1. Delivered as source
      4.5.2. Delivered in compiled or executable form, but not as source
      4.5.3. Not delivered
   4.6. Functionality
      4.6.1. Operative statements
      4.6.2. Inoperative statements (dead, bypassed, unused, unreferenced, or unaccessed)
   4.7. Replications
      4.7.1. Master source statements (originals)
      4.7.2. Physical replicates of master statements, stored in the master code
      4.7.3. Copies inserted, instantiated, or expanded when compiling or linking
      4.7.4. Postproduction replicates
   4.8. Development Status
   4.9. Language
   4.10. Clarifications (general)
      4.10.1. Nulls, continues, and no-ops
      4.10.2. Empty statements
      4.10.3. Statements that instantiate generics
      4.10.4. Begin…end and {…} pairs used as executable statements
      4.10.5. Begin…end and {…} pairs that delimit (sub)program bodies
      4.10.6. Logical expressions used as test conditions
      4.10.7. Expression evaluations used as subprogram arguments
      4.10.8. End symbols that terminate executable statements
      4.10.9. End symbols that terminate declarations or (sub)program bodies
      4.10.10. Then, else, and otherwise symbols
      4.10.11. Elseif and elsif statements
      4.10.12. Keywords like procedure division, interface, and implementation
      4.10.13. Labels (branching destinations) on lines by themselves
   4.11. Clarifications for Specific Languages
      4.11.1. Stylistic use of braces
      4.11.2. Expression statements
      4.11.3. Format statements
5. Using the Checklist to Define Size Measures—Nine Examples
   5.1. Physical Source Lines—A Basic Definition
   5.2. Logical Source Statements—A Basic Definition
   5.3. Data Specifications—Getting Data for Special Purposes
      5.3.1. Specifying data for project tracking
      5.3.2. Specifying data for project analysis
      5.3.3. Specifying data for reuse measurement
      5.3.4. Combining data specifications
      5.3.5. Counting dead code
      5.3.6. Relaxing the rule of mutual exclusivity
      5.3.7. Asking for lower level details—subsets of attribute values
   5.4. Other Definitions and Data Specifications
   5.5. Other Measures
6. Defining the Counting Unit
   6.1. Statement Delimiters
   6.2. Logical Source Statements
   6.3. Physical Source Lines
   6.4. Physical and Logical Measures are Different Measures
   6.5. Conclusion and Recommendation
7. Reporting Other Rules and Practices—The Supplemental Forms
   7.1. Physical Source Lines
   7.2. Logical Source Statements
   7.3. Dead Code
8. Recording Measurement Results
   8.1. Example 1—Development Planning and Tracking
   8.2. Example 2—Planning and Estimating Future Products
   8.3. Example 3—Reuse Tracking, Productivity Analysis, and Quality Normalization
   8.4. Example 4—Support for Alternative Definitions and Recording Needs
9. Reporting and Requesting Measurement Results
   9.1. Summarizing Totals for Individual Attributes
   9.2. Summarizing Totals for Data Spec A (Project Tracking)
   9.3. A Structured Process for Requesting Summary Reports
   9.4. Requesting Arrayed Data
   9.5. Requesting Individual (Marginal) Totals
10. Meeting the Needs of Different Users
11. Applying Size Definitions to Builds, Versions, and Releases
12. Recommendations for Initiating Size Measurement
   12.1. Start with Physical Source Lines
   12.2. Use a Checklist-Based Process to Define, Record, and Report Your Measurement Results
   12.3. Report Separate Results for Questionable Elements
   12.4. Look for Opportunities to Add Other Measures
   12.5. Use the Same Definitions for Estimates that You Use for Measurement Reports
   12.6. Dealing with Controversy
   12.7. From Definition to Action—A Concluding Note
References
Appendix A: Acronyms and Terms
   A.1. Acronyms
   A.2. Terms Used
Appendix B: Candidate Size Measures
Appendix C: Selecting Measures for Definition—Narrowing the Field
   C.1. The Choices Made
   C.2. The Path Followed
Appendix D: Using Size Measures—Illustrations and Examples
   D.1.
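The report's central device is a checklist that fixes, attribute by attribute, what gets counted: statement type (executable, declaration, compiler directive, comment, blank), how produced, origin, usage, delivery, functionality, replication, development status, and language. As a rough illustration of the statement-type attribute only, the sketch below is a minimal line classifier. It is not part of the SEI report; it assumes C-style comment conventions and a basic definition that excludes comment-only and blank lines from a physical source-line count.

```python
# Minimal sketch (not from the SEI report): count physical source lines,
# classifying each line by a simplified "statement type" attribute.
# Assumes C-style // line comments and /* ... */ block comments.

def classify_line(line, in_block_comment):
    stripped = line.strip()
    if in_block_comment:
        if "*/" in stripped:
            return "comment", False
        return "comment", True
    if not stripped:
        return "blank", False
    if stripped.startswith("//"):
        return "comment", False
    if stripped.startswith("/*"):
        return "comment", "*/" not in stripped
    if stripped.startswith("#"):
        return "compiler_directive", False
    return "source", False          # executable statement or declaration

def count_physical_lines(text):
    counts = {"source": 0, "compiler_directive": 0, "comment": 0, "blank": 0}
    in_block = False
    for line in text.splitlines():
        kind, in_block = classify_line(line, in_block)
        counts[kind] += 1
    # A basic physical-source-line definition: count noncomment, nonblank lines.
    counts["physical_SLOC"] = counts["source"] + counts["compiler_directive"]
    return counts

if __name__ == "__main__":
    sample = """\
#include <stdio.h>

/* print a greeting */
int main(void) {
    printf("hello\\n");   // greeting
    return 0;
}
"""
    print(count_physical_lines(sample))
    # {'source': 4, 'compiler_directive': 1, 'comment': 1, 'blank': 1, 'physical_SLOC': 5}
```

The value of the checklist is that every such choice (are directives counted? are blank lines?) is recorded explicitly, so two organizations reporting "source lines" can tell whether their numbers are comparable.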
Recommended publications
  • Measuring the Software Size of Sliced V-Model Projects
    2014 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement. Measuring the Software Size of Sliced V-model Projects. Andreas Deuter, PHOENIX CONTACT Electronics GmbH, Dringenauer Str. 30, 31812 Bad Pyrmont, Germany, Email: [email protected]; Gregor Engels, University of Paderborn, Zukunftsmeile 1, 33102 Paderborn, Germany, Email: [email protected].
    Abstract: Companies expect higher productivity of their software teams when introducing new software development methods. Productivity is commonly understood as the ratio of output created and resources consumed. Whereas the measurement of the resources consumed is rather straightforward, there are several definitions for counting the output of a software development. Source code-based metrics create a set of valuable figures direct from the heart of the software - the code. However, depending on the chosen process model software developers and testers produce also a fair amount of documentation. Up to now this output remains uncounted leading to an incomplete view on the development output. This article addresses this open …
    From the body of the paper: But, the "manufacturing" of software within the standard production process is just a very short process when bringing the binaries of the software to the device. This process is hardly to optimize and does not reflect the "production of software" at all. The creation of the software is namely done in the developing teams by applying typical software engineering methods. However, to keep up with the high demands on implementing new functionality (e.g. for PLC) the software development process within these companies must improve. Therefore, they start to analyze their software processes in detail and to identify productivity drivers.
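The productivity ratio discussed above (output created divided by resources consumed) is easy to compute once you decide what counts as output; the paper's point is that documentation output is usually left out. The following is a hypothetical illustration with invented numbers and an invented documentation weighting, not something taken from the paper.

```python
# Hypothetical illustration of the output/resources productivity ratio.
# All numbers and the doc-page weighting are invented for the example.

def productivity(sloc_added, doc_pages, person_months, sloc_per_doc_page=0):
    """Output per person-month; optionally fold documentation into the output."""
    output = sloc_added + doc_pages * sloc_per_doc_page
    return output / person_months

# Counting code only:
print(productivity(sloc_added=12_000, doc_pages=150, person_months=10))   # 1200.0
# Counting documentation as well (assumed weight: 1 page ~ 20 SLOC-equivalents):
print(productivity(sloc_added=12_000, doc_pages=150, person_months=10,
                   sloc_per_doc_page=20))                                 # 1500.0
```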
  • Software Error Analysis Technology
    NIST Special Publication 500-209, Computer Systems: Software Error Analysis Technology. Wendy W. Peng and Dolores R. Wallace, National Institute of Standards and Technology, U.S. Department of Commerce, Technology Administration, 1993.
    The National Institute of Standards and Technology was established in 1988 by Congress to "assist industry in the development of technology ... needed to improve product quality, to modernize manufacturing processes, to ensure product reliability ... and to facilitate rapid commercialization ... of products based on new scientific discoveries." NIST, originally founded as the National Bureau of Standards in 1901, works to strengthen U.S. industry's competitiveness; advance science and engineering; and improve public health, safety, and the environment. One of the agency's basic functions is to develop, maintain, and retain custody of the national standards of measurement, and provide the means and methods for comparing standards used in science, engineering, manufacturing, commerce, industry, and education with the standards adopted or recognized by the Federal Government. As an agency of the U.S. Commerce Department's Technology Administration, NIST conducts basic and applied research in the physical sciences and engineering and performs related services. The Institute does generic and precompetitive work on new and advanced technologies. NIST's research facilities are located at Gaithersburg, MD 20899, and at Boulder, CO 80303.
  • Software Metrics: Successes, Failures and New Directions
    Software Metrics: Successes, Failures and New Directions. Norman E. Fenton, Centre for Software Reliability, City University, Northampton Square, London EC1V 0HB, United Kingdom (Tel: 44 171 477 8425, Fax: 44 171 477 8585, [email protected], www.csr.city.ac.uk/people/norman.fenton/) and Agena Ltd, 11 Main Street, Caldecote, Cambridge CB3 7NU, United Kingdom (Tel: 44 1223 263753 / 44 181 530 5981, Fax: 01223 263899, [email protected], www.agena.co.uk).
    Abstract: The history of software metrics is almost as old as the history of software engineering. Yet, the extensive research and literature on the subject has had little impact on industrial practice. This is worrying given that the major rationale for using metrics is to improve the software engineering decision making process from a managerial and technical perspective. Industrial metrics activity is invariably based around metrics that have been around for nearly 30 years (notably Lines of Code or similar size counts, and defects counts). While such metrics can be considered as massively successful given their popularity, their limitations are well known, and mis-applications are still common. The major problem is in using such metrics in isolation. We argue that it is possible to provide genuinely improved management decision support systems based on such simplistic metrics, but only by adopting a less isolationist approach. Specifically, we feel it is important to explicitly model: a) cause and effect relationships and b) uncertainty and combination of evidence. Our approach uses Bayesian Belief nets which are increasingly seen as the best means of handling decision-making under uncertainty.
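Fenton's argument above is for combining evidence under uncertainty rather than reading a single metric in isolation. The toy two-node calculation below shows that style of reasoning in its simplest form; the probabilities are invented for illustration and are not taken from the paper or from any real Bayesian Belief net tool.

```python
# Toy illustration of combining evidence under uncertainty (numbers invented).
# Prior belief that a module is defect-prone, and assumed likelihoods of
# observing "high complexity" in each case.
p_dp = 0.3                      # prior: module is defect-prone
p_high_given_dp = 0.8           # P(high complexity | defect-prone)
p_high_given_ok = 0.2           # P(high complexity | not defect-prone)

# Bayes' rule: P(defect-prone | high complexity observed)
p_high = p_high_given_dp * p_dp + p_high_given_ok * (1 - p_dp)
posterior = p_high_given_dp * p_dp / p_high
print(round(posterior, 3))      # 0.632
```

A full Bayesian Belief net chains many such updates over a graph of causes and effects; the point here is only that the metric observation shifts a belief rather than delivering a verdict by itself.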
  • Understanding the Syntactic Rule Usage in Java
    Understanding the Syntactic Rule Usage in Java. Dong Qiu (a), Bixin Li (a, *), Earl T. Barr (b), Zhendong Su (c). (a) School of Computer Science and Engineering, Southeast University, China; (b) Department of Computer Science, University College London, UK; (c) Department of Computer Science, University of California Davis, USA.
    Abstract. Context: Syntax is fundamental to any programming language: syntax defines valid programs. In the 1970s, computer scientists rigorously and empirically studied programming languages to guide and inform language design. Since then, language design has been artistic, driven by the aesthetic concerns and intuitions of language architects. Despite recent studies on small sets of selected language features, we lack a comprehensive, quantitative, empirical analysis of how modern, real-world source code exercises the syntax of its programming language. Objective: This study aims to understand how programming language syntax is employed in actual development and explore their potential applications based on the results of syntax usage analysis. Method: We present our results on the first such study on Java, a modern, mature, and widely-used programming language. Our corpus contains over 5,000 open-source Java projects, totalling 150 million source lines of code (SLoC). We study both independent (i.e. applications of a single syntax rule) and dependent (i.e. applications of multiple syntax rules) rule usage, and quantify their impact over time and project size. Results: Our study provides detailed quantitative information and yields insight, particularly (i) confirming the conventional wisdom that the usage of syntax rules is Zipfian; (ii) showing that the adoption of new rules and their impact on the usage of pre-existing rules vary significantly over time; and (iii) showing that rule usage is highly contextual.
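One common way to check the Zipfian claim in finding (i) above is to plot rule frequency against rank on log-log axes; under an ideal Zipf law the points fall on a straight line with slope near -1. The sketch below is illustrative only: the rule names and counts are invented, and the authors' actual corpus tooling is not shown.

```python
# Illustrative rank-frequency check for Zipf-like usage (data invented).
import math
from collections import Counter

rule_uses = Counter({
    "expression_statement": 60_000,
    "method_invocation": 30_000,
    "local_variable_declaration": 20_000,
    "if_statement": 15_000,
    "for_statement": 12_000,
    "synchronized_statement": 10_000,
})

ranked = rule_uses.most_common()
for rank, (rule, freq) in enumerate(ranked, start=1):
    print(f"{rank}  {rule:<28} {freq:>7}")

# Crude Zipf-exponent estimate from the extreme ranks (ideal Zipf: slope ~ -1).
f_first, f_last = ranked[0][1], ranked[-1][1]
slope = (math.log10(f_last) - math.log10(f_first)) / math.log10(len(ranked))
print("approximate log-log slope:", round(slope, 2))
```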
  • Software Measurement: A Necessary Scientific Basis (Norman Fenton)
    IEEE Transactions on Software Engineering, Vol. 20, No. 3, March 1994, p. 199. Software Measurement: A Necessary Scientific Basis. Norman Fenton.
    Abstract: Software measurement, like measurement in any other discipline, must adhere to the science of measurement if it is to gain widespread acceptance and validity. The observation of some very simple, but fundamental, principles of measurement can have an extremely beneficial effect on the subject. Measurement theory is used to highlight both weaknesses and strengths of software metrics work, including work on metrics validation. We identify a problem with the well-known Weyuker properties, but also show that a criticism of these properties by Cherniavsky and Smith is invalid. We show that the search for general software complexity measures is doomed to failure. However, the theory does help us to define and validate measures of specific complexity attributes. Above all, we are able to view software measurement in a very wide perspective, rationalising and relating its many diverse activities.
    Index Terms: Software measurement, empirical studies, metrics validation.
    II. Measurement Fundamentals. In this section, we provide a summary of the key concepts from the science of measurement which are relevant to software metrics. First, we define the fundamental notions (which are generally not well understood) and then we summarise the representational theory of measurement. Finally, we explain how this leads inevitably to a goal-oriented approach.
    A. What is Measurement? Measurement is defined as the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules [13], [36].
  • Development of an Enhanced Automated Software Complexity Measurement System
    Journal of Advances in Computational Intelligence Theory, Volume 1, Issue 3. Development of an Enhanced Automated Software Complexity Measurement System. Sanusi B.A. (1), Olabiyisi S.O. (2), Afolabi A.O. (3), Olowoye A.O. (4); (1, 2, 4) Department of Computer Science, (3) Department of Cyber Security, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. Corresponding author e-mail: [email protected]; [email protected]; [email protected]; [email protected].
    ABSTRACT: Code complexity measures can be used to predict critical information about the reliability, testability, and maintainability of software systems from automatic measurement of the source code. The existing automated code complexity measurement is performed using a commercially available code analysis tool called QA-C, which addresses the code complexity of the C programming language, runs on Solaris, and does not measure the defect rate of the source code. Therefore, this paper aimed at developing an enhanced automated system that evaluates the code complexity of C-family programming languages and computes the defect rate. The existing code-based complexity metrics (the Source Lines of Code metric, the McCabe Cyclomatic Complexity metrics, and the Halstead Complexity Metrics) were studied and implemented so as to extend the existing schemes. The developed system was built following the procedure of the waterfall model, which involves gathering requirements, system design, development coding, testing, and maintenance. The system was developed in the Visual Studio Integrated Development Environment (2019) using the C-Sharp (C#) programming language, the .NET framework, and MySQL Server for database design. The performance of the system was tested using a software testing technique known as black-box testing to examine the functionality and quality of the system.
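The metrics named in the abstract above can be approximated crudely from source text alone. The sketch below is not the authors' system; it is a minimal, language-naive illustration, assuming McCabe's rule that cyclomatic complexity is one plus the number of decision points and counting only a few common C-family decision tokens.

```python
# Crude, language-naive approximations of two metrics named above (illustrative only).
import re

DECISION_PATTERN = re.compile(r"\b(if|for|while|case|catch)\b|&&|\|\||\?")

def source_lines_of_code(code):
    """Nonblank, non-comment-only physical lines (very rough)."""
    lines = [ln.strip() for ln in code.splitlines()]
    return sum(1 for ln in lines if ln and not ln.startswith(("//", "/*", "*")))

def cyclomatic_complexity(code):
    """McCabe: number of decision points + 1 (token-level approximation)."""
    return len(DECISION_PATTERN.findall(code)) + 1

sample = """\
int clamp(int x, int lo, int hi) {
    // keep x between lo and hi
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
"""
print(source_lines_of_code(sample))   # 5
print(cyclomatic_complexity(sample))  # 3
```

A production tool would parse the code properly rather than scan tokens, which is exactly the gap between a sketch like this and the kind of system the paper describes.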
  • SSfM BPG 1: Validation of Software in Measurement Systems
    NPL Report DEM-ES 014, Software Support for Metrology Best Practice Guide No. 1: Validation of Software in Measurement Systems. Brian Wichmann with Graeme Parkin and Robin Barker, Mathematics and Scientific Computing Group, National Physical Laboratory, Hampton Road, Teddington, Middlesex, United Kingdom TW11 0LW (Switchboard 020 8977 3222, NPL Helpline 020 8943 6880, Fax 020 8943 6458, www.npl.co.uk). NOT RESTRICTED. January 2007. Version 2.2, © Crown copyright 2007, ISSN 1744-0475. Reproduced with the permission of the Controller of HMSO and Queen's Printer for Scotland. Extracts from this guide may be reproduced provided the source is acknowledged and the extract is not taken out of context. We gratefully acknowledge the financial support of the UK Department of Trade and Industry (National Measurement System Directorate). Approved on behalf of the Managing Director, NPL, by Jonathan Williams, Knowledge Leader for the Electrical and Software team.
    ABSTRACT: The increasing use of software within measurement systems implies that the validation of such software must be considered. This Guide addresses the validation both from the perspective of the user and the supplier. The complete Guide consists of a Management Overview and a Technical Application together with consideration of its use within safety systems.
    Preface: The use of software in measurement systems has dramatically increased in the last few years, making many devices easier to use, more reliable and more accurate.
  • The Commenting Practice of Open Source
    The Commenting Practice of Open Source. Oliver Arafat, Siemens AG, Corporate Technology, Otto-Hahn-Ring 6, 81739 München, [email protected]; Dirk Riehle, SAP Research, SAP Labs LLC, 3410 Hillview Ave, Palo Alto, CA 94304, USA, [email protected].
    Abstract: The development processes of open source software are different from traditional closed source development processes. Still, open source software is frequently of high quality. This raises the question of how and why open source software creates high quality and whether it can maintain this quality for ever larger project sizes. In this paper, we look at one particular quality indicator, the density of comments in open source software code. We find that successful open source projects follow a consistent practice of documenting their source code, and we find that the comment density is independent of team and project size.
    From the body of the paper: … problem domain is well understood [15]. Agile software development methods can cope with changing requirements and poorly understood problem domains, but typically require co-location of developers and fail to scale to large project sizes [16]. A host of successful open source projects in both well and poorly understood problem domains and of small to large sizes suggests that open source can cope both with changing requirements and large project sizes. In this paper we focus on one particular code metric, the comment density, and assess it across 5,229 active open source projects, representing about 30% of all active open source projects. We show that commenting source code is an ongoing and integrated practice of open source software development that is consistently found across all these projects.
    Categories and Subject Descriptors: D.2.8 [Metrics]
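Comment density, as studied in the paper above, is in its simplest form the number of comment lines divided by the number of lines of code. The sketch below shows that calculation under stated assumptions (C-style comments, a line counted as a comment line if it contains any comment text, blank lines ignored); the paper's exact counting rules are not reproduced here.

```python
# Minimal comment-density sketch (assumes C-style comments; illustrative only).
def comment_density(code):
    in_block = False
    comment_lines = total_lines = 0
    for line in code.splitlines():
        stripped = line.strip()
        if not stripped:
            continue                      # ignore blank lines
        total_lines += 1
        if in_block or "//" in stripped or "/*" in stripped:
            comment_lines += 1
        if "/*" in stripped and "*/" not in stripped:
            in_block = True
        elif "*/" in stripped:
            in_block = False
    return comment_lines / total_lines if total_lines else 0.0

sample = """\
/* swap two ints */
void swap(int *a, int *b) {
    int t = *a;   // temporary
    *a = *b;
    *b = t;
}
"""
print(round(comment_density(sample), 2))  # 0.33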
  • Differences in the Definition and Calculation of the LOC Metric in Free Tools
    Differences in the Definition and Calculation of the LOC Metric in Free Tools. István Siket, Department of Software Engineering, University of Szeged, Hungary, [email protected]; Árpád Beszédes, Department of Software Engineering, University of Szeged, Hungary, [email protected]; John Taylor, FrontEndART Software Ltd., Szeged, Hungary, [email protected].
    Abstract: The software metric LOC (Lines of Code) is probably one of the most controversial metrics in software engineering practice. It is relatively easy to calculate, understand and use by the different stakeholders for a variety of purposes; LOC is the most frequently applied measure in software estimation, quality assurance and many other fields. Yet, there is a high level of variability in the definition and calculation methods of the metric which makes it difficult to use it as a base for important decisions. Furthermore, there are cases when its usage is highly questionable – such as programmer productivity assessment. In this paper, we investigate how LOC is usually defined and calculated by today's free LOC calculator tools. We used a set of tools to compute LOC metrics on a variety of open source systems in order to measure the actual differences between results and investigate the possible causes of the deviation.
    1 Introduction. Lines of Code (LOC) is supposed to be the easiest software metric to understand, compute and interpret. The issue with counting code is determining which rules to use for the comparisons to be valid [5]. LOC is generally seen as a measure of system size expressed using the number of lines of its source code as it would appear in a text editor, but the situation is not that simple as we will see shortly.
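The variability the authors describe shows up even on a tiny fragment: different but equally reasonable LOC definitions give different numbers for the same file. The sketch below is a hedged illustration assuming three common definitions (total physical lines, nonblank lines, and nonblank non-comment lines) and C-style // comments only; it does not reproduce any particular tool studied in the paper.

```python
# Three plausible LOC definitions applied to the same snippet (illustrative only).
def loc_variants(code):
    lines = code.splitlines()
    stripped = [ln.strip() for ln in lines]
    return {
        "physical_lines": len(lines),
        "nonblank_lines": sum(1 for ln in stripped if ln),
        "nonblank_noncomment_lines": sum(
            1 for ln in stripped if ln and not ln.startswith("//")),
    }

sample = """\
// compute factorial iteratively
int fact(int n) {
    int r = 1;

    for (int i = 2; i <= n; i++)
        r *= i;          // accumulate
    return r;
}
"""
print(loc_variants(sample))
# {'physical_lines': 8, 'nonblank_lines': 7, 'nonblank_noncomment_lines': 6}
```

This is the same concern the SEI checklist at the top of this page is designed to address: the number is only meaningful alongside the counting rules that produced it.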
  • Objectives and Context of Software Measurement, Analysis, and Control (Michael A. Cusumano)
    Objectives and Context of Software Measurement, Analysis, and Control. Michael A. Cusumano, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA 02142. Sloan School WP#3471-92/BPS, October 8, 1992.
    Abstract: This paper focuses on the what and why of measurement in software development, and the impact of the context of the development organization or of specific projects on approaches to measurement, analysis, and control. It also presents observations on contextual issues based on studies of the major Japanese software producers.
    1 Introduction. This paper focuses on the what and why of measurement in software development, and the impact of the context of the development organization or of specific projects on approaches to measurement, analysis, and control. It begins by discussing three basic questions: (1) Why do we measure? (2) What is it we want to know? And (3) how does the context affect the ability to measure? The second section presents observations on contextual issues based on my ongoing study of the Japanese [3]. This paper also relies heavily on extensive interviews conducted at Hitachi, Fujitsu, NEC, and Toshiba during May, August, and September 1992.
    2 Basic Questions. Why we measure aspects of the software development process and conduct analytical exercises or experiments, as well as what we try to do with this information, depends very much on who is doing or requesting the measuring and analysis. There already exists a broad literature on technical aspects of software measurement and the relationship to process management, which will not be reviewed here [1, 2, 5, 6].
  • 12 Steps to Useful Software Metrics
    12 Steps to Useful Software Metrics. Linda Westfall, The Westfall Team, [email protected], PMB 101, 3000 Custer Road, Suite 270, Plano, TX 75075. 972-867-1172 (voice), 972-943-1484 (fax).
    Abstract: 12 Steps to Useful Software Metrics introduces the reader to a practical process for establishing and tailoring a software metrics program that focuses on goals and information needs. The process provides a practical, systematic, start-to-finish method of selecting, designing and implementing software metrics. It outlines a cookbook method that the reader can use to simplify the journey from software metrics in concept to delivered information.
    Bio: Linda Westfall is the President of The Westfall Team, which provides Software Metrics and Software Quality Engineering training and consulting services. Prior to starting her own business, Linda was the Senior Manager of Quality Metrics and Analysis at DSC Communications, where her team designed and implemented a corporate-wide metrics program. Linda has more than twenty-five years of experience in real-time software engineering, quality and metrics. She has worked as a Software Engineer, Systems Analyst, Software Process Engineer and Manager of Production Software. Very active professionally, Linda Westfall is a past chair of the American Society for Quality (ASQ) Software Division. She has also served as the Software Division's Program Chair and Certification Chair and on the ASQ National Certification Board. Linda is a past chair of the Association for Software Engineering Excellence (ASEE) and has chaired several conference program committees. Linda Westfall has an MBA from the University of Texas at Dallas and a BS in Mathematics from Carnegie Mellon University.
  • ProjectCodeMeter Users Manual
    ProjectCodeMeter / ProjectCodeMeter Pro, Software Development Cost Estimation Tool, Users Manual. Document version 202000501. Home page: www.ProjectCodeMeter.com.
    ProjectCodeMeter is a professional software tool for project managers to measure and estimate the Time, Cost, Complexity, Quality and Maintainability of software projects, as well as Development Team Productivity, by analyzing their source code. By using a modern software sizing algorithm called Weighted Micro Function Points (WMFP), a successor to established scientific methods such as COCOMO, COSYSMO, the Maintainability Index, Cyclomatic Complexity, and Halstead Complexity, it produces more accurate results than traditional software sizing tools, while being faster and simpler to configure. Tip: you can click the icon on the bottom right corner of each area of ProjectCodeMeter to get help specific to that area.
    The manual covers: General Introduction; Quick Getting Started Guide; Introduction to ProjectCodeMeter; Quick Function Overview; Measuring project cost and development time; Measuring additional cost and time invested in a project revision; Producing a price quote for an Existing project; Monitoring an Ongoing project development team productivity; Evaluating development team past productivity; Evaluating the attractiveness of an outsourcing price quote; Predicting a Future project schedule and cost for internal budget planning; Predicting a price quote and schedule for a Future project; Evaluating the quality of a project source code; Software Screen Interface; Project Folder Selection; Settings; File List; Charts; Summary Reports; Extended Information; System Requirements; Supported File Types; Command Line Parameters; Frequently Asked Questions.