The Effect of Applying Design of Experiments Techniques to Software Performance

Total Pages: 16

File Type: PDF, Size: 1020 KB

The Effect of Applying Design of Experiments Techniques to Software Performance Testing

Walden University ScholarWorks, Walden Dissertations and Doctoral Studies Collection, 2015

The Effect of Applying Design of Experiments Techniques to Software Performance Testing, Gloria Johnson, Walden University

Follow this and additional works at: https://scholarworks.waldenu.edu/dissertations

Part of the Business Administration, Management, and Operations Commons, and the Management Sciences and Quantitative Methods Commons

This Dissertation is brought to you for free and open access by the Walden Dissertations and Doctoral Studies Collection at ScholarWorks. It has been accepted for inclusion in Walden Dissertations and Doctoral Studies by an authorized administrator of ScholarWorks. For more information, please contact [email protected].

Walden University, College of Management and Technology

This is to certify that the doctoral dissertation by Gloria Johnson has been found to be complete and satisfactory in all respects, and that any and all revisions required by the review committee have been made.

Review Committee
Dr. Branford McAllister, Committee Chairperson, Applied Management and Decision Sciences Faculty
Dr. Robert Kilmer, Committee Member, Applied Management and Decision Sciences Faculty
Dr. Christos Makrigeorgis, University Reviewer, Applied Management and Decision Sciences Faculty

Chief Academic Officer: Eric Riedel, Ph.D.

Walden University, 2015

Abstract

The Effect of Applying Design of Experiments Techniques to Software Performance Testing

by Gloria Johnson

MBA, University of Dallas, 1995
BS, Jackson State University, 1975

Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Applied Management and Decision Sciences

Walden University, January 2015

Abstract

Effective software performance testing is essential to the development and delivery of quality software products. Many software testing investigations have reported software performance testing improvements, but few have quantitatively validated measurable software testing performance improvements across an aggregate of studies. This study addressed that gap by conducting a meta-analysis to assess the relationship between applying Design of Experiments (DOE) techniques in the software testing process and the reported software performance testing improvements. Software performance testing theories and DOE techniques composed the theoretical framework for this study. Software testing studies (n = 96) were analyzed, where half had DOE techniques applied and the other half did not. Five research hypotheses were tested, where findings were measured in (a) the number of detected defects, (b) the rate of defect detection, (c) the phase in which the defect was detected, (d) the total number of hours it took to complete the testing, and (e) an overall hypothesis which included all measurements for all findings. The data were analyzed by first computing standard difference in means effect sizes, then through the Z test, the Q test, and the t test in statistical comparisons. Results of the meta-analysis showed that applying DOE techniques in the software testing process improved software performance testing (p < .05). These results have social implications for the software testing industry and software testing professionals, providing another empirically validated testing methodology.
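The analysis pipeline the abstract describes (standard-difference-in-means effect sizes, pooled and tested with Z and Q tests) can be sketched in a few lines. This is a minimal illustrative sketch with hypothetical per-study summary statistics, not the dissertation's actual analysis code:

```python
# Minimal fixed-effect meta-analysis sketch: standardized mean-difference
# effect sizes pooled across studies, then a Z test of the pooled effect
# and Cochran's Q test for heterogeneity. Study summaries are hypothetical.
import math

# (mean_DOE, sd_DOE, n_DOE, mean_ctrl, sd_ctrl, n_ctrl) per study
studies = [
    (42.0, 8.0, 20, 35.0, 9.0, 20),
    (55.0, 12.0, 15, 47.0, 11.0, 15),
    (30.0, 6.0, 25, 27.5, 7.0, 25),
]

effects, weights = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    # Pooled standard deviation and Cohen's d (standardized mean difference)
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate sampling variance of d; weight = inverse variance
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    effects.append(d)
    weights.append(1.0 / var_d)

pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se                                                   # Z test
q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))  # Q test
print(f"pooled d = {pooled:.3f}, Z = {z:.2f}, Q = {q:.2f}")
```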
Software organizations can use this methodology to differentiate their software testing process, to create more quality products, and to benefit the consumer and society in general.

The Effect of Applying Design of Experiments Techniques to Software Performance Testing

by Gloria Johnson

MBA, University of Dallas, 1995
BS, Jackson State University, 1975

Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Applied Management and Decision Sciences

Walden University, January 2015

Dedication

I dedicate this study to my faithful champion, my son, Kevin. I am deeply grateful for the tenacity and consistency of your encouragement and support.

Acknowledgments

First and foremost, all the glory, honor, and praise go to God for getting me through this program. I would like to acknowledge and thank my committee for all their guidance and patience throughout this process. Thank you to Dr. Branford McAllister, Dr. Robert Kilmer, and Dr. Christos Makrigeorgis. Thank you, Dr. Teresa Bittner, for your encouragement as I started this journey. I am deeply grateful to Cheryl Lockett for the help, encouragement, and unwavering friendship throughout this doctoral journey. To my Walden colleagues, thank you for your encouragement, advice, and support during our time together on this doctoral journey. To my son, my number one supporter, my extended family, friends, and church family, my deepest gratitude for your prayers and unfailing support. Finally, thank you, Walden faculty and staff, for your expertise, advice, and guidance.

Table of Contents

List of Tables
List of Figures
Chapter 1: Introduction to the Study
  Background
  Purpose of the Study
  Nature of the Study
  Research Questions and Hypotheses
    Research Questions
    Research Hypotheses
  Theoretical Bases for the Study
  Definitions of Terms and Acronyms
  Assumptions, Limitations, Scope, and Delimitations
    Assumptions
    Limitations
    Scope
    Delimitations
  Significance of the Study
    Gap in Current Literature
    Professional Application
    Implications for Positive Social Change
  Summary
Chapter 2: Literature Review
  Introduction
  Literature Search
    Search Strategy
  General Theoretical Concepts
    Software Testing
    DOE Strategies
    Meta-Analysis
  Literature Examination and Analysis
    Early Prior Research
    Current Research
    Variables of Interest
  Summary
Chapter 3: Methodology and Design
  Introduction
  Target Population
  Sampling Design
Recommended publications
  • Design of Experiments Application, Concepts, Examples: State of the Art
Periodicals of Engineering and Natural Sciences, ISSN 2303-4521, Vol 5, No 3, December 2017, pp. 421‒439. Available online at: http://pen.ius.edu.ba

Design of Experiments Application, Concepts, Examples: State of the Art
Benjamin Durakovic, Industrial Engineering, International University of Sarajevo

Article history: Received Aug 3, 2018; Revised Oct 20, 2018; Accepted Dec 1, 2018

Keywords: Design of Experiments; full factorial design; fractional factorial design; product design; quality improvement

ABSTRACT: Design of Experiments (DOE) is a statistical tool deployed in various types of system, process, and product design, development, and optimization. It is a multipurpose tool that can be used in various situations such as design for comparisons, variable screening, transfer function identification, optimization, and robust design. This paper explores historical aspects of DOE and provides a state of the art of its application; it guides researchers in how to conceptualize, plan, and conduct experiments, and how to analyze and interpret data, including examples. The paper also shows that in the past 20 years the application of DOE has been growing rapidly in manufacturing as well as non-manufacturing industries. It was the most popular tool in the scientific areas of medicine, engineering, biochemistry, physics, and computer science, accounting for about 50% of its applications compared to all other scientific areas. (A minimal full factorial sketch follows this entry.)

Corresponding Author: Benjamin Durakovic, International University of Sarajevo, Hrasnicka cesta 15, 7100 Sarajevo, Bosnia. Email: [email protected]
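To make the entry above concrete: a two-level full factorial design enumerates every combination of factor levels so that main effects (and interactions) can be estimated. A minimal sketch, assuming three hypothetical two-level factors and made-up responses:

```python
# Minimal sketch: generate a 2^3 full factorial design matrix and
# estimate main effects from (hypothetical) measured responses.
from itertools import product

factors = ["cache_size", "thread_count", "batch_size"]  # hypothetical factors
levels = [-1, +1]                                        # coded low/high

design = list(product(levels, repeat=len(factors)))      # 8 runs for 2^3
# Hypothetical response measured for each run, in design order:
response = [14.2, 12.9, 10.5, 9.8, 13.7, 12.1, 10.0, 9.1]

# Main effect of a factor = mean(response at +1) - mean(response at -1)
for j, name in enumerate(factors):
    hi = [y for run, y in zip(design, response) if run[j] == +1]
    lo = [y for run, y in zip(design, response) if run[j] == -1]
    print(f"{name}: effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.2f}")
```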
  • Types of Software Testing
Software testing is the way of assessing a software product to identify differences between given input and expected output, and to evaluate the quality of the product. You know what testing does; no need to explain further. But are you aware of the types of testing? It's indeed a sea. Before we get to the types, let's look at the standards that need to be maintained.

Standards of Testing: Every test should meet the user prerequisites. Exhaustive testing isn't feasible; we need the right quantity of testing based on a risk evaluation of the application. Every test to be conducted ought to be planned before executing it. Testing follows the 80/20 rule, which states that 80% of defects originate from 20% of program components. Start testing with small parts and extend to broader components. Software testers know about the different sorts of software testing; this article covers the major types that testers, developers, and QA teams use most often in their everyday testing work. Let's understand them!

Black Box Testing: Black box testing is a category of strategy that disregards the internal mechanism of the system and focuses on the output generated for any input, and on the performance of the system. (A minimal black-box test sketch follows this entry.)
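Black box testing, as described above, checks only inputs against expected outputs. A minimal sketch, where `parse_price` is a hypothetical unit under test (not from the article):

```python
# Minimal black-box test sketch: only inputs and expected outputs are
# specified; the internal mechanism of parse_price is never inspected.
def parse_price(text: str) -> float:
    """Hypothetical unit under test: parse a price string like '$1,234.50'."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_typical():
    assert parse_price("$1,234.50") == 1234.50

def test_parse_price_no_symbol():
    assert parse_price("99") == 99.0

if __name__ == "__main__":
    test_parse_price_typical()
    test_parse_price_no_symbol()
    print("all black-box checks passed")
```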
  • Design of Experiments (DOE) Using JMP Charles E
PharmaSUG 2014 - Paper PO08
Design of Experiments (DOE) Using JMP
Charles E. Shipp, Consider Consulting, Los Angeles, CA

ABSTRACT: JMP has provided some of the best design of experiments software for years. The JMP team continues the tradition of providing state-of-the-art DOE support. In addition to the full range of classical and modern design of experiments approaches, JMP provides a template for Custom Design for specific requirements. The other choices include: Screening Design; Response Surface Design; Choice Design; Accelerated Life Test Design; Nonlinear Design; Space Filling Design; Full Factorial Design; Taguchi Arrays; Mixture Design; and Augmented Design. Further, sample size and power plots are available. We give an introduction to these methods followed by a few examples with factors.

BRIEF HISTORY AND BACKGROUND: From early times, there has been a desire to improve processes with controlled experimentation. A famous early experiment cured scurvy with seamen taking citrus fruit; lives were saved. Thomas Edison invented the light bulb with many experiments. These experiments depended on trial and error, with no software to guide, measure, and refine experimentation. Japan led the way with improving manufacturing—Genichi Taguchi wanted to create repeatable and robust quality in manufacturing. He had been taught management and statistical methods by William Edwards Deming. Deming taught "Plan, Do, Check, and Act": PLAN (design) the experiment; DO the experiment by performing the steps; CHECK the results by testing information; and ACT on the decisions based on those results. In America, other pioneers brought us "Total Quality" and "Six Sigma," with "Design of Experiments" being an advanced methodology.
  • Concepts Underlying the Design of Experiments
Concepts Underlying the Design of Experiments
March 2000
University of Reading, Statistical Services Centre
Biometrics Advisory and Support Service to DFID

Contents
1. Introduction
2. Specifying the objectives
3. Selection of treatments
   3.1 Terminology
   3.2 Control treatments
   3.3 Factorial treatment structure
4. Choosing the sites
5. Replication and levels of variation
6. Choosing the blocks
7. Size and shape of plots in field experiments
8. Allocating treatment to units
9. Taking measurements
10. Data management and analysis
11. Taking design seriously

© 2000 Statistical Services Centre, The University of Reading, UK

1. Introduction: This guide describes the main concepts that are involved in designing an experiment. Its direct use is to assist scientists involved in the design of on-station field trials, but many of the concepts also apply to the design of other research exercises, ranging from simulation exercises and laboratory studies to on-farm experiments, participatory studies, and surveys. What characterises the design of on-station experiments is that the researcher has control of the treatments to apply and the units (plots) to which they will be applied. The results therefore usually provide a detailed knowledge of the effects of the treatments within the experiment. The major limitation is that they provide this information within the artificial "environment" of small plots on a research station. A laboratory study should achieve more precision, but in a yet more artificial environment. Even more precise is a simulation model, partly because it is then very cheap to collect data. (A sketch of randomized allocation of treatments within blocks follows this entry.)
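Section 8 of the guide above (allocating treatment to units) is the step most easily shown in code: in a randomized complete block design, every treatment appears once per block, with plot order randomized within each block. A minimal sketch with hypothetical treatment and block names:

```python
# Minimal randomized complete block design sketch: each treatment is
# applied once per block, with the allocation randomized within blocks.
import random

treatments = ["control", "fertilizer_A", "fertilizer_B"]   # hypothetical
blocks = ["block_1", "block_2", "block_3", "block_4"]

random.seed(42)  # reproducible layout
layout = {}
for block in blocks:
    order = treatments[:]     # one replicate of every treatment per block
    random.shuffle(order)     # randomize which plot gets which treatment
    layout[block] = order

for block, plots in layout.items():
    print(block, "->", plots)
```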
  • The Theory of the Design of Experiments
The Theory of the Design of Experiments
D.R. COX, Honorary Fellow, Nuffield College, Oxford, UK
AND
N. REID, Professor of Statistics, University of Toronto, Canada
CHAPMAN & HALL/CRC, Boca Raton, London, New York, Washington, D.C.

Library of Congress Cataloging-in-Publication Data: Cox, D. R. (David Roxbee). The theory of the design of experiments / D. R. Cox, N. Reid. p. cm. — (Monographs on statistics and applied probability; 86). Includes bibliographical references and index. ISBN 1-58488-195-X (alk. paper). 1. Experimental design. I. Reid, N. II. Title. III. Series. QA279 .C73 2000 001.4'34—dc21 00-029529 CIP

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W.
  • Argus: Interactive a Priori Power Analysis
© 2020 IEEE. This is the author's version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx/

Argus: Interactive a priori Power Analysis
Xiaoyi Wang, Alexander Eiselmayer, Wendy E. Mackay, Kasper Hornbæk, Chat Wacharamanotham

[Figure omitted; caption:] Fig. 1: Argus interface: (A) Expected-averages view helps users estimate the means of the dependent variables through an interactive chart. (B) Confound sliders incorporate potential confounds, e.g., fatigue or practice effects. (C) Power trade-off view simulates data to calculate statistical power; and (D) Pairwise-difference view displays confidence intervals for mean differences, animated as a dance of intervals. (E) History view displays an interactive power history tree so users can quickly compare statistical power with previously explored configurations.

Abstract—A key challenge HCI researchers face when designing a controlled experiment is choosing the appropriate number of participants, or sample size. A priori power analysis examines the relationships among multiple parameters, including the complexity associated with human participants, e.g., order and fatigue effects, to calculate the statistical power of a given experiment design. (A minimal power-analysis sketch follows this entry.)
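The a priori power analysis that Argus makes interactive can also be done analytically for simple designs. A minimal sketch for a two-sample t test using statsmodels' power solver; the effect size, alpha, and target power values are hypothetical choices, not figures from the paper:

```python
# Minimal a priori power analysis sketch for a two-sample t test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per group for d = 0.5, alpha = .05, power = .80?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n_per_group:.1f}")

# Conversely: what power does n = 20 per group actually give us?
achieved = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"power with n = 20 per group: {achieved:.2f}")
```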
  • Design of Experiments and Data Analysis,” 2012 Reliability and Maintainability Symposium, January, 2012
Copyright © 2012 IEEE. Reprinted, with permission, from Huairui Guo and Adamantios Mettas, "Design of Experiments and Data Analysis," 2012 Reliability and Maintainability Symposium, January, 2012. This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of ReliaSoft Corporation's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

2012 Annual RELIABILITY and MAINTAINABILITY Symposium
Design of Experiments and Data Analysis
Huairui Guo, Ph.D., CPR, ReliaSoft Corporation, 1450 S. Eastside Loop, Tucson, AZ 85710 USA, e-mail: [email protected]
Adamantios Mettas, CPR, ReliaSoft Corporation, 1450 S. Eastside Loop, Tucson, AZ 85710 USA, e-mail: [email protected]
Tutorial Notes © 2012 AR&MS

SUMMARY & PURPOSE: Design of Experiments (DOE) is one of the most useful statistical tools in product design and testing. While many organizations benefit from designed experiments, others are getting data with little useful information and wasting resources because of experiments that have not been carefully designed. Design of Experiments can be applied in many areas including, but not limited to: design comparisons, variable identification, design optimization, process control, and product performance prediction. Different design types in DOE have been developed for different purposes.
  • SPIRENT NETWORK SECURITY TESTING Ensure Your Carrier, Enterprise, and Mobile Network Infrastructure Is Secure
SPIRENT NETWORK SECURITY TESTING
Ensure Your Carrier, Enterprise, and Mobile Network Infrastructure Is Secure

Network protection is the number one IT concern of businesses globally, and significant investment is made in security products. Can they really be trusted? Don't rely on vendor claims to thwart attacks that are becoming more complex by the day. Don't put your network at risk. Trust in Spirent to validate that your network security system performs as intended.

The key to successful security testing is knowing what to test, when to test, and how to test. Any security solution must have the ability to:
• Detect and prevent hundreds of thousands of known attacks, blended attacks, and vulnerabilities
• Maintain throughput with processor-intensive features like content-filtering enabled
• Maintain acceptable performance while under attack

Optimal times to test are prior to making any purchase decision, at deployment on the live network, after network upgrades have been implemented to ensure devices are correctly configured, and before new services rollouts. Periodic testing is also advisable, especially at the rate new attacks are surfacing.

Trusted Performance: Spirent TestCenter with Spirent Avalanche™ and Spirent Studio™ security testing solutions are trusted industry-wide. Spirent TestCenter has won numerous industry awards and has been selected as the test solution of choice for countless public tests because of its capabilities.

Spirent TestCenter™ provides purpose-built, end-to-end security testing ranging from Terabit-scale, at line rate speeds to emulate daily business traffic, to validating security capabilities via fuzzing testing, DDoS replication, and much more. Spirent has experts and solutions that will assist in security test planning, either on-site or virtually. [Product images omitted: Spirent TestCenter C100; Check Point 61000 Security System.]
  • Subject Experiments in Augmented Reality
Quantitative and Qualitative Methods for Human-Subject Experiments in Augmented Reality
IEEE VR 2014 Tutorial
J. Edward Swan II, Mississippi State University (organizer)
Joseph L. Gabbard, Virginia Tech (co-organizer)

Schedule:
9:00–10:30 (1.5 hrs) Experimental Design and Analysis (Ed)
10:30–11:00 (0.5 hrs) Coffee Break
11:00–12:30 (1.5 hrs) Experimental Design and Analysis / Formative Usability Evaluation (Ed / Joe)
12:30–2:00 (1.5 hrs) Lunch Break
2:00–3:30 (1.5 hrs) Formative Usability Evaluation (Joe)
3:30–4:00 (0.5 hrs) Coffee Break
4:00–5:30 (1.5 hrs) Formative Usability Evaluation / Panel Discussion (Joe / Ed)

Experimental Design and Analysis
J. Edward Swan II, Ph.D., Department of Computer Science and Engineering, Department of Psychology (Adjunct), Mississippi State University

Motivation and Goals
• Course attendee backgrounds?
• Studying experimental design and analysis at Mississippi State University:
  – PSY 3103 Introduction to Psychological Statistics
  – PSY 3314 Experimental Psychology
  – PSY 6103 Psychometrics
  – PSY 8214 Quantitative Methods In Psychology II
  – PSY 8803 Advanced Quantitative Methods
  – IE 6613 Engineering Statistics I
  – IE 6623 Engineering Statistics II
  – ST 8114 Statistical Methods
  – ST 8214 Design & Analysis Of Experiments
  – ST 8853 Advanced Design of Experiments I
  – ST 8863 Advanced Design of Experiments II
• 7 undergrad hours; 30 grad hours; 3 departments!

Motivation and Goals
• What can we accomplish in one day?
• Study subset of basic techniques
  – Presenters have found these to be the most applicable to VR, AR systems
• Focus on intuition behind basic techniques
• Become familiar with basic concepts and terms
  – Facilitate working with collaborators from psychology, industrial engineering, statistics, etc.
  • Model Alignment Using Optimization and Design of Experiments
Proceedings of the 2017 Winter Simulation Conference
W. K. V. Chan, A. D'Ambrogio, G. Zacharewicz, N. Mustafee, G. Wainer, and E. Page, eds.

MODEL ALIGNMENT USING OPTIMIZATION AND DESIGN OF EXPERIMENTS
Alejandro Teran-Somohano, Levent Yilmaz, Alice E. Smith
Department of Industrial and Systems Engineering, Auburn University, 3301 Shelby Center, Auburn, AL 36849 USA
Department of Computer Science and Software Engineering, Auburn University, 3101 Shelby Center, Auburn, AL 36849 USA

ABSTRACT: The use of simulation modeling for scientific tasks demands that these models be replicated and independently verified by other members of the scientific community. However, determining whether two independently developed simulation models are "equal" is not trivial. Model alignment is the term for this type of comparison. In this paper, we present an extension of the model alignment methodology for comparing the outcome of two simulation models that searches the response surface of both models for significant differences. Our approach incorporates elements of both optimization and design of experiments for achieving this goal. We discuss the general framework of our methodology, its feasibility of implementation, as well as some of the obstacles we foresee in its generalized application.

1 INTRODUCTION: The present work is part of a larger research endeavor focused on simulation model transformation across different platforms. The overarching goal is to support the replicability and reproducibility of simulation experiments for scientific discovery. Model transformation is not, of itself, sufficient to achieve this goal. It is also necessary to verify that the transformed model properly mimics the behavior of the original. But what does it mean for a model to mimic another model? In this paper, we consider two models to be equal if the response surfaces of both models are equal. (A toy alignment-check sketch follows this entry.)
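The core comparison the paper describes (probe both models' response surfaces and test for significant differences) can be illustrated with a toy sketch. Both "simulation models" below are stand-in functions, not the authors' models, and the design points and replication counts are arbitrary; Welch's t test flags points of disagreement:

```python
# Toy model alignment sketch: run two (stand-in) simulation models at the
# same design points with replications, then flag points where their mean
# responses differ significantly (Welch's t test).
import random
from statistics import mean
from scipy import stats

def model_a(x, rng):   # stand-in for the original simulation model
    return 2.0 * x + rng.gauss(0, 1)

def model_b(x, rng):   # stand-in for the transformed model, with a defect
    return 2.0 * x + (0.5 if x > 3 else 0.0) + rng.gauss(0, 1)

rng = random.Random(7)
for x in [1, 2, 3, 4, 5]:                       # shared design points
    ya = [model_a(x, rng) for _ in range(30)]   # 30 replications each
    yb = [model_b(x, rng) for _ in range(30)]
    t, p = stats.ttest_ind(ya, yb, equal_var=False)
    flag = "MISALIGNED" if p < 0.05 else "aligned"
    print(f"x={x}: mean_a={mean(ya):.2f} mean_b={mean(yb):.2f} p={p:.3f} {flag}")
```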
  • Design and Analysis of Experiments Course Notes for STAT 568
Design and Analysis of Experiments
Course notes for STAT 568
Adam B Kashlak
Mathematical & Statistical Sciences, University of Alberta, Edmonton, Canada, T6G 2G1
March 27, 2019

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/.

Contents
Preface
1 One-Way ANOVA
  1.0.1 Terminology
  1.1 Analysis of Variance
    1.1.1 Sample size computation
    1.1.2 Contrasts
  1.2 Multiple Comparisons
  1.3 Random Effects
    1.3.1 Derivation of the F statistic
  1.4 Cochran's Theorem
2 Multiple Factors
  2.1 Randomized Block Design
    2.1.1 Paired vs Unpaired Data
    2.1.2 Tukey's One DoF Test
  2.2 Two-Way Layout
    2.2.1 Fixed Effects
  2.3 Latin Squares
    2.3.1 Graeco-Latin Squares
  2.4 Balanced Incomplete Block Designs
  2.5 Split-Plot Designs
  2.6 Analysis of Covariance
3 Multiple Testing
  3.1 Family-wise Error Rate
    3.1.1 Bonferroni's Method
    3.1.2 Sidak's Method
    3.1.3 Holm's Method
    3.1.4 Stepwise Methods
  3.2 False Discovery Rate
    3.2.1 Benjamini-Hochberg Method
4 Factorial Design
  4.1 Full Factorial Design
    4.1.1 Estimating effects with regression
    4.1.2 Lenth's Method
    4.1.3 Key Concepts

(A one-way ANOVA sketch follows this entry.)
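The notes above open with one-way ANOVA, whose F test is a one-liner in scipy. A minimal sketch with three hypothetical treatment groups:

```python
# Minimal one-way ANOVA sketch: test whether three group means differ.
# The group data are hypothetical.
from scipy.stats import f_oneway

group_a = [23.1, 21.8, 24.0, 22.5, 23.3]
group_b = [25.2, 26.1, 24.8, 25.7, 26.3]
group_c = [22.9, 23.5, 22.1, 23.8, 22.6]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says at least one treatment mean differs; follow up with
# multiple comparisons (e.g., Bonferroni or Benjamini-Hochberg, per Ch. 3).
```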
  • Achieving Test Automation Synergy
ACHIEVING TEST AUTOMATION SYNERGY
CI/CD PIPELINES + TEST AUTOMATION + TEST DATA GENERATION

How Test Data Generation integrates with CI/CD pipelines and test automation tools to unlock the full potential of accelerated software development.

TABLE OF CONTENTS
Embracing Quality at Speed
Thinking Differently About Test Data
The Importance of Test Data Quality
Integrating TDG with CI/CD
Maximizing Test Automation ROI
Innovating Test Data Solutions
Realizing Operational Benefits

EMBRACING QUALITY AT SPEED

[Diagram omitted: Agile, DevOps, and CI/CD adoption accelerates development and compresses release cycles, driving CI/CD pipelines, automation frameworks, and test data generation.]

Software engineering teams are accelerating the speed of development through the adoption of Agile and DevOps methodologies combined with the deployment of CI/CD pipelines. This trend has made test automation essential for QA organizations to keep pace with development. Faster release cycles are driving automation in every testing category. Test automation has become the key to ensuring quality at the speed of continuous delivery. This eBook will introduce Test Data Generation (TDG) as the next generation of Test Data Management (TDM) and will illustrate how this new and innovative technology can help your organization achieve the full synergy of test automation in a continuous delivery environment. (A minimal test data generation sketch follows this entry.)

Release Cycles Keep Getting Faster: For most development environments, DevOps has become a standard approach for compressing release cycles from months and weeks to just days and hours. Market data now characterizes DevOps as widely adopted, with over 50% of organizations having implemented it and over half of the remaining organizations planning to implement the approach within 12 months (Forrester).
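To make the TDG idea above concrete, here is a minimal sketch of synthetic test data feeding an automated test inside a CI run. The `generate_order` and `validate_order` functions are hypothetical stand-ins; real TDG products add much more (referential integrity, masking, coverage models):

```python
# Minimal test data generation sketch: synthesize order records on the fly
# inside a CI test run instead of copying production data.
import random
import string

def generate_order(rng: random.Random) -> dict:
    """Synthesize one hypothetical order record."""
    return {
        "order_id": "".join(rng.choices(string.ascii_uppercase + string.digits, k=8)),
        "quantity": rng.randint(1, 100),
        "unit_price": round(rng.uniform(0.5, 500.0), 2),
        "country": rng.choice(["US", "DE", "IN", "BR", "JP"]),
    }

def validate_order(order: dict) -> bool:
    """Hypothetical unit under test: accept only sane orders."""
    return order["quantity"] > 0 and order["unit_price"] > 0

def test_validate_order_on_generated_data():
    rng = random.Random(2024)          # seeded for reproducible CI runs
    for _ in range(1000):              # fresh data for every pipeline run
        assert validate_order(generate_order(rng))

if __name__ == "__main__":
    test_validate_order_on_generated_data()
    print("1000 generated orders validated")
```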