Components of a Modern Quality Approach to Software Development


Dolores Zage, Wayne Zage
Ball State University
Sponsor: iconectiv
Final Report, October 2015

Table of Contents

Section 1: Software Development Process and Quality
   1.1 New Versus Traditional Process Overview
   1.2 DevOps
      1.2.1 Distinguishing Characteristics of High-Performing DevOps Development Cultures
      1.2.2 ZeroTurnaround Survey
         1.2.2.1 Survey: Quality of Development
         1.2.2.2 Survey Results: Predictability
         1.2.2.3 Survey Results: Tool Type Usage
      1.2.3 DevOps Process Maturity Model
      1.2.4 DevOps Workflows
   1.3 Code Reviews
      1.3.1 Using Checklists
         1.3.1.1 Sample Checklist Items
      1.3.2 Checklists for Security
      1.3.3 Monitoring the Code Review Process
      1.3.4 Evolvability Defects
      1.3.5 Other Guidelines
      1.3.6 High Risk Code and High Risk Changes
   1.4 Testing
      1.4.1 Number of Test Cases
   1.5 Agile Process and QA
      1.5.1 Agile Quality Assessment (QA) on Scrum
   1.6 Product Backlog
Section 2: Software Product Quality and its Measurement
   2.1 Goal Question Metric Model
   2.2 Generic Software Quality Models
   2.3 Comparing Traditional and Agile Metrics
   2.4 NESMA Agile Metrics
      2.4.1 Metrics for Planning and Forecasting
      2.4.2 Dividing the Work into Manageable Pieces
      2.4.3 Metrics for Monitoring and Control
      2.4.4 Metrics for Improvement (Product Quality and Process Improvement)
   2.5 Another State-of-the-Practice Survey on Agile Metrics
   2.6 Metric Trends are Important
   2.7 Defect Measurement
   2.8 Defects and Complexity Linked
   2.9 Performance, a Factor in Quality
   2.10 Security, a Factor in Quality
      2.10.1 Security Standards
      2.10.2 Shift of Security Responsibilities within Development
      2.10.3 List of Current Practices
      2.10.4 Risk of Third Party Applications
      2.10.5 Rate of Repairs
      2.10.6 Other Code Security Strategies
      2.10.7 Design Vulnerabilities
Section 3: Assessment of Development Methods and Project Data
   3.1 The Namcook Analytics Software Risk Master (SRM) Tool
   3.2 Crosstalk Table
      3.2.1 Ranges of Software Development Quality
   3.3 Scoring Method of Methods, Tools and Practices in Software Development
   3.4 DevOps Self-Assessment by IBM
   3.5 Thirty Software Engineering Issues that have Stayed Constant for Thirty Years
   3.6 Quality and Defect Removal
      3.6.1 Error-Prone Modules (EPM)
      3.6.2 Inspection Metrics
      3.6.3 General Terms of Software Failure and Software Success
Section 4: Conclusions and Project Take-Aways
   4.1 Process
   4.2 Product Measurements
Acknowledgements
Appendix A – Namcook Analytics Estimation Report
Appendix B – Sample DevOps Self-Assessment
References

Section 1: Software Development Process and Quality

1.1 New Versus Traditional Process Overview

At first, enterprises used Agile development techniques only for pilot projects developed by small teams. Realizing the benefits of shorter delivery and release phases and of responsiveness to change while still delivering quality software, enterprises searched for ways to achieve similar benefits in their larger development efforts by scaling Agile. Thus, many frameworks and methods were developed to satisfy this need. The Scaled Agile Framework (SAFe) is one of the most widely implemented scaled Agile frameworks. Most of the pieces of the framework are borrowed: existing Agile methods packaged and organized in a different way to accommodate the larger scale.
Integrated within SAFe and other enterprise-scale development methodologies are Agile methods, such as Scrum, and other techniques that foster the delivery of software.

Why evaluate the process? Developing good software is difficult, and a good process or method can make this difficult task a little easier and perhaps more predictable. In the past, researchers performed an analysis of software standards and determined that standards focus heavily on processes rather than products [PFLE]. They characterized software standards as prescribing "the recipe, the utensils, and the cooking techniques, and then assume that the pudding will taste good." This corresponds with Deming, who argued that "the quality of a product is directly related to the quality of the process used to create it" [DEMI]. Watts Humphrey, the creator of the CMM, believed that high quality software can only be produced by a high quality process. Most would agree that the probability of producing high quality software is greater if the process is also of high quality. All recognize that possessing a good process in isolation is not enough: the process has to be filled with skilled, motivated people. Can a process promote motivation? Agile methods are people-oriented rather than process-oriented. They assert that no process will ever make up for the skill of the development team, and that the role of a process is therefore to support the development team in its work. Another movement, DevOps, encourages collaboration by integrating development and operations.

Figure 1 compares the old and new ways of delivering software. The catalyst for many of the enumerated changes is teamwork and cooperation. Everyone should participate, and all share responsibility and accountability. In true Agile, teams have the autonomy to choose their development tools. In the new world, disconnects should be removed, and the development tools and processes should be chosen so that people can receive feedback quickly and make necessary changes. Successful integration of DevOps and Agile development will play a key role in the delivery of rapid, continuous, high quality software. Institutions that can accomplish this merger at the enterprise scale will outpace those struggling to adapt.

Figure 1: Old and New Way of Developing Software

For this reason, identifying the characteristics of high-performing Agile and DevOps cultures is important to assist in outlining a new transformational technology.

1.2 DevOps

DevOps is the fusion of "Development" and "Operations" functions to reduce risk, liability, and time-to-market while increasing operational awareness. It has become one of the largest movements in Information Technology over the past decade. DevOps evolved from many earlier ideas in software development, such as automation tooling, culture shifts, and iterative development models such as Agile. During this fusion, DevOps was not provided with a central set of operational guidelines. Once an organization decided to use DevOps for its software development, it had free rein in deciding how to implement DevOps, which produced its own challenges. Even Agile, many of whose attributes were adopted by the DevOps movement, falls into the same predicament. In 2006, Gilb and Brodie wrote an article pointing out the same lack of quantification in Agile methods and identifying it as a major weakness [GILB]. There is insufficient focus on quantified performance levels, such as metrics evaluating the required qualities, resource savings, and workload capacities of the developed software. This inability to measure the transition to DevOps does not mean that DevOps fails to prescribe monitoring and measuring. The purpose of monitoring and measuring within DevOps is to compare the current state of a project to the state of the same project at some time in the near past, providing a view of current project progress. However, this kind of quantification does not help to benchmark the overall DevOps implementation.

1.2.1 Distinguishing Characteristics of High-Performing DevOps Development Cultures

A possible description of a high-performing DevOps environment is one that produces good quality systems on time. It is important to identify the characteristics of such high-performing cultures so that these practices can be emulated and metrics can be identified that quantify these characteristics and track successful trends. Below are seven key points of a high-performing DevOps culture [DYNA]:

1. Deploy daily – decouple code deployment from feature releases (a minimal feature-flag sketch appears at the end of this subsection)
2. Handle all non-functional requirements (performance, security) at every stage
3. Exploit automated testing to catch errors early and quickly – 82% of high-performance DevOps organizations use automation [PUPP]
4. Employ strict version control – version control in operations has the strongest correlation with high-performing DevOps organizations [PUPP]
5. Implement end-to-end performance monitoring and metrics
6. Perform peer code review
7. Allocate more cycle time to the reduction of technical debt

Other attributes of high-performing DevOps environments are that operations staff are involved early in the development process, so that plans for any necessary changes to the production environment can be formulated, and that a continuous feedback loop between development, testing, and operations is present. Just as development has its daily stand-up meeting, development, QA, and operations should meet frequently to jointly analyze issues in production.
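The first of the seven points, decoupling deployment from release, is commonly realized with feature flags: the new code ships to production dark and is switched on later as a configuration change. The sketch below is a minimal illustration of that idea, not a technique the report itself prescribes; the flag store, the flag name, and the checkout functions are all hypothetical.

    # Minimal feature-flag sketch (hypothetical names throughout).
    # The new code path is deployed daily but stays dark until the flag
    # is flipped, so "release" is a configuration change, not a deploy.

    FLAGS = {"new_checkout": False}  # flipped at release time, not at deploy time

    def is_enabled(flag: str) -> bool:
        # In practice this would query a flag service or configuration store.
        return FLAGS.get(flag, False)

    def legacy_checkout_flow(cart: list) -> dict:
        return {"flow": "legacy", "items": len(cart)}

    def new_checkout_flow(cart: list) -> dict:
        return {"flow": "new", "items": len(cart)}

    def checkout(cart: list) -> dict:
        # Both paths live in production; the flag decides which one runs.
        if is_enabled("new_checkout"):
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

    print(checkout(["book", "pen"]))  # {'flow': 'legacy', 'items': 2}

Because the dark code is already in production, turning the feature on (or rolling it back) requires no redeployment, which is what makes daily deploys safe.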
1.2.2 ZeroTurnaround Survey

Many of the above trends are highlighted in a survey of 1,006 developers, conducted by ZeroTurnaround, on the practices developers use during software development. The goal of the survey was to test the effectiveness of the best quality practices, including the methodologies and tools used, and to examine company size and industry within the context of these practices [KABA]. The report noted that the survey respondents were disproportionately Java developers and were employed at software/technology companies. ZeroTurnaround divided the respondents into three groups based on their responses: the top 10% were identified as rock stars, the middle 80% as average, and the bottom 10% as laggards. For this report, ZeroTurnaround concentrated on two aspects of software development, namely the quality of the software and the predictability of delivery.
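As a rough illustration of the grouping just described, the sketch below ranks respondents by an overall score and labels the top 10% rock stars, the middle 80% average, and the bottom 10% laggards. The survey summary does not specify how respondents were scored, so the scores in the example are fabricated and the function name is hypothetical.

    # Minimal sketch (hypothetical): split survey respondents into the
    # top 10% ("rock stars"), middle 80% ("average"), and bottom 10%
    # ("laggards") by some overall quality/predictability score.

    def assign_cohorts(scores):
        """Map each respondent index to a cohort label based on rank."""
        n = len(scores)
        # Respondent indices ordered from highest to lowest score.
        ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
        top_cut = max(1, round(0.10 * n))     # size of the top 10%
        bottom_cut = max(1, round(0.10 * n))  # size of the bottom 10%
        cohorts = {}
        for rank, idx in enumerate(ranked):
            if rank < top_cut:
                cohorts[idx] = "rock star"
            elif rank >= n - bottom_cut:
                cohorts[idx] = "laggard"
            else:
                cohorts[idx] = "average"
        return cohorts

    # Fabricated scores for six respondents:
    print(assign_cohorts([72, 95, 64, 88, 51, 79]))
    # {1: 'rock star', 3: 'average', 5: 'average', 0: 'average', 2: 'average', 4: 'laggard'}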