Effectively Integrating Unit Testing and Code Coverage Into Continuous Integration

Total pages: 16

File type: PDF, size: 1020 KB

Masarykova univerzita, Fakulta informatiky

Effectively Integrating Unit Testing and Code Coverage into Continuous Integration

Diploma Thesis
Tomáš Svrček
Brno, spring 2015

Declaration

I hereby declare that this thesis is my original authorial work, which I have worked out on my own. All sources, references and literature used or excerpted during the elaboration of this work are properly cited and listed with complete reference to the original source.

Advisor: Ing. RNDr. Barbora Bühnová, Ph.D.

Acknowledgement

I would like to thank my supervisor, Barbora Bühnová, for all the valuable advice during the writing of this master thesis. I would also like to thank Ronny Kolb for his patience, guidance and feedback throughout the implementation and writing process. I cannot forget to thank all my colleagues in the ACT team for their great cooperation and willingness to help. Without these people the work would not have been completed.

Abstract

These days, automated testing is a fundamental part of the software development process. It helps developers catch basic mistakes immediately and therefore saves development time. Thanks to this rapid discovery of errors, the quality of the developed product increases. This master thesis investigates how to effectively integrate Unit Testing and Code Coverage into the Continuous Integration process. The idea is analyzed in the State of the Art chapter, and the current situation is examined in the State of the Practice in Honeywell chapter. Based on the results of the analysis, a plugin for the Atlassian Bamboo server is designed and implemented. The developed product is evaluated and improvements are proposed as future work.

Keywords

Atlassian, Bamboo, Code Coverage, Continuous Integration, Unit Testing, xUnit

Contents

1 Introduction
  1.1 Context
  1.2 Goals
  1.3 Thesis Structure
2 State of the Art
  2.1 Software Development Process
    2.1.1 Rational Unified Process
    2.1.2 Other Approaches
  2.2 Capability Maturity Model Integration
  2.3 Continuous Integration
    2.3.1 Continuous Integration Servers
    2.3.2 Summary
  2.4 Automated Software Testing
    2.4.1 Reasons and Strategies for Test Automation
    2.4.2 Unit Testing
    2.4.3 xUnit Framework Testing
  2.5 Code Coverage
  2.6 Unit Testing in Continuous Integration
  2.7 Summary
3 State of the Practice in Honeywell
  3.1 ACS Software Development Process
  3.2 Software Development Process in ACT
  3.3 Continuous Integration in Honeywell
  3.4 Bamboo
  3.5 Unit Testing
  3.6 Summary
4 Feasibility Study
  4.1 Overview
  4.2 Bamboo Architecture
    4.2.1 Project Structure
    4.2.2 Agents and Server
    4.2.3 Capabilities and Requirements
  4.3 Bamboo Extensibility
  4.4 xUnit Frameworks
  4.5 Code Coverage Tools
  4.6 Tools Integration
  4.7 Summary
5 Implementation
  5.1 Architecture Design
  5.2 Bamboo Plugin Composition
  5.3 Used Technologies
    5.3.1 Java
    5.3.2 JavaScript
    5.3.3 XML, XSD and XSLT
    5.3.4 Active Objects
    5.3.5 FreeMarker Template Language
  5.4 Data Flow
  5.5 Analysis of Potential Problems Revealed during the Implementation
  5.6 Presentation Layer
  5.7 Summary
6 Evaluation
  6.1 First Round
    6.1.1 Evaluation Method
    6.1.2 Scenarios
    6.1.3 Questionnaire Results
    6.1.4 Proposed Improvements
  6.2 Second Round
    6.2.1 Evaluation Method
  6.3 Summary
7 Conclusion
  7.1 Future Work
  7.2 Obtained Benefits
  7.3 Summary
Appendices
  A Terms and Abbreviations
  B Package Diagram
  C Installation Instructions

Chapter 1 Introduction

1.1 Context

Honeywell is an international company with headquarters in the USA. It employs over 132,000 people all over the world. The company develops and manufactures a wide range of products, from systems for residential homes and commercial buildings such as thermostats and security devices, through chemicals and materials, up to turbochargers and equipment for airplanes. The company is divided into four strategic business groups:

• Aerospace
• Automation and Control Solutions (ACS)
• Performance Materials and Technologies (PMT)
• Transportation Systems (TS)

This master thesis has been designed as an improvement of the current state for the ACS Center of Technologies (ACT) team. ACT was founded to drive software excellence, such as the development of best practices, training, mentoring and improvement of the software development tools used in Automation and Control Solutions (ACS). For the purposes of software improvement, Honeywell ACS follows the ACS Software Development Process (ASDP), which is compliant with Capability Maturity Model Integration (CMMI), as described in Section 3.1.

1.2 Goals

Bamboo from Atlassian is a continuous integration, deployment and release management solution used by Honeywell ACS. The latest version of the Bamboo Continuous Integration (CI) server offers only a few plugins for xUnit testing and does not provide any Code Coverage functionality by default. Honeywell needs better support for these tools, and this master thesis was designed to provide it. The objective is to adapt Bamboo's functionality for Honeywell users by implementing a plugin for this tool. The integration of the xUnit framework family and Code Coverage analysis into the Continuous Integration (CI) process was chosen to fulfill the assignment. The goal of this master thesis is to improve part of the development process and shorten the development cycle time; the result should be to save developers' time during their activities. Based on the information from the State of the Art and the State of the Practice in Honeywell studies, an extension for this Continuous Integration server is developed.

1.3 Thesis Structure

This master thesis is further divided into the following six chapters. Chapter 2, State of the Art, provides an overview of Unit Testing automation and Code Coverage as important parts of the software development process. It also describes the Continuous Integration process, its tools and their usability. Chapter 3, State of the Practice in Honeywell, describes how Unit Testing, Code Coverage and Continuous Integration are used in Honeywell ACS (Appendix A), and discusses possibilities for improvement. Chapter 4, Feasibility Study, discusses the possibility of integrating Unit Testing tools into the Bamboo CI (Appendix A) server. It describes the Bamboo architecture, its extensibility and possible integration with testing and Code Coverage tools. Chapter 5, Implementation, describes the implementation process. It contains sections about the architecture, code sharing and the technologies used, and it closes by outlining potential problems revealed during the implementation. Chapter 6, Evaluation, describes the methodology of the thesis evaluation and summarizes the gathered results; measurements and findings are also provided. Chapter 7, Conclusion, summarizes the results of this master thesis and its fulfillment. It also contains suggestions for future improvements of the extension.

Chapter 2 State of the Art

This chapter describes Continuous Integration and its importance in the software development process. It describes Unit Testing, Code Coverage and other parts of the Continuous Integration process. The chapter also covers Automated Software Testing.

2.1 Software Development Process

The Software Development Process (SDP), also known as the Software Development Life Cycle (SDLC) [1], is the division of software development work into distinct phases; in other words, it is the set of steps that a software program goes through as it is developed. First mentioned in 1960, the SDLC was the first framework for building information systems. This framework is meant to develop software in a structured and methodical way, from the first birth of an idea to deployment and support. There are several approaches to defining the SDP (Appendix A). Figure 2.1 describes one possible approach [2].

Figure 2.1: Phases of the Software Development Process

• Plan represents planning of the whole project and obtaining users' requirements.
• Analysis of the problem and design suggestions are the second step of the SDP. The output is usually expressed in a graphical notation such as the Unified Modeling Language (UML).
Recommended publications
  • Towards a Taxonomy of SUnit Tests
    Towards a Taxonomy of SUnit Tests. Markus Gälli (a), Michele Lanza (b), Oscar Nierstrasz (a). (a) Software Composition Group, Institut für Informatik und angewandte Mathematik, Universität Bern, Switzerland. (b) Faculty of Informatics, University of Lugano, Switzerland.
    Abstract: Although unit testing has gained popularity in recent years, the style and granularity of individual unit tests may vary wildly. This can make it difficult for a developer to understand which methods are tested by which tests, to what degree they are tested, what to take into account while refactoring code and tests, and to assess the value of an existing test. We have manually categorized the test base of an existing object-oriented system in order to derive a first taxonomy of unit tests. We have then developed some simple tools to semi-automatically categorize tests according to this taxonomy, and applied these tools to two case studies. As it turns out, the vast majority of unit tests focus on a single method, which should make it easier to associate tests more tightly to the methods under test. In this paper we motivate and present our taxonomy, we describe the results of our case studies, and we present our approach to semi-automatic unit test categorization.
    Key words: unit testing, taxonomy, reverse engineering.
    Published at the 13th International European Smalltalk Conference (ESUG 2005), http://www.esug.org/conferences/thirteenthinternationalconference2005. We thank Stéphane Ducasse for his helpful comments and gratefully acknowledge the financial support of the Swiss National Science Foundation for the project “Tools and Techniques for Decomposing and Composing Software” (SNF Project No.
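    The paper's observation that most unit tests focus on a single method suggests a simple convention for linking tests to the methods they cover. A minimal JUnit 4 sketch of such a single-method-focused test, offered here as a Java analogue (the paper itself deals with Smalltalk and SUnit) and using java.util.Stack purely as an example class, might look like this:

        import static org.junit.Assert.assertEquals;

        import java.util.Stack;
        import org.junit.Test;

        public class StackPushTest {

            // The test focuses on one method under test, Stack.push, and its name
            // encodes that method, which keeps the test-to-method association explicit.
            @Test
            public void testPush_putsTheElementOnTopOfTheStack() {
                Stack<String> stack = new Stack<>();
                stack.push("a");
                assertEquals("a", stack.peek());
            }
        }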
  • 2.5 The Unit Test Framework (UTF)
    Masaryk University, Faculty of Informatics. C++ Unit Testing Frameworks. Diploma Thesis. Brno, January 2010. Miroslav Sedlák.
    Statement: I declare that this thesis is my original work of authorship, which I developed individually. I properly quote, with full references, all sources I used while developing it. Miroslav Sedlák
    Acknowledgements: I am grateful to RNDr. Jan Bouda, Ph.D. for his inspiration and productive discussion. I would like to thank my family and girlfriend for the encouragement they provided. Last, but not least, I would like to thank my friends Ing. Michal Bella, who was very supportive in verifying the architecture of the extension of the Unit Test Framework, and Mgr. Irena Zigmanová, who helped me with the proofreading of this thesis.
    Abstract: The aim of this work is to gather, clearly present and test the use of existing UTFs (CppUnit, CppUnitLite, CppUTest, CxxTest and others) and supporting tools for software project development in the programming language C++. We will compare the advantages and disadvantages of the UTFs and their tools. Another challenge is to design effective solutions for testing by using one of the UTFs, depending on the result of the analysis.
    Keywords: C++, Unit Test (UT), Test Framework (TF), Unit Test Framework (UTF), Test Driven Development (TDD), Refactoring, xUnit, Standard Template Library (STL), CppUnit, CppUnitLite, CppUTest, Run-Time Type Information (RTTI), Graphical User Interface (GUI).
    Contents: 1 Introduction …
  • Enhancement in NUnit Testing Framework by Binding Logging with Unit Testing
    International Journal of Science and Research (IJSR), India Online. ISSN: 2319-7064. Enhancement in NUnit Testing Framework by Binding Logging with Unit Testing. Viraj Daxini (1), D. A. Parikh (2). (1) Gujarat Technological University, L. D. College of Engineering, Ahmedabad, Gujarat, India. (2) Associate Professor and Head, Department of Computer Engineering, L. D. College of Engineering, Ahmedabad, Gujarat, India.
    Abstract: This paper describes a new testing library named Log4NUnit, an enhanced version of the NUnit testing library that binds it to logging functionality. NUnit is similar to the rest of the xUnit testing family in that test cases are built directly into the code of the project. The Log4NUnit testing library provides the capability to log the results of tests along with the testing classes, and the capability to log messages from within test methods. It supplies an extension of NUnit's Test class methods for logging results, and it can log test-case entities, based on the logging framework, to document the test settings at the time of testing, test-case execution and its results. An attractive feature of the new library is that it redirects log messages to the console at runtime if the log4net library is not present on the class path, which helps users who do not want to download the logging framework and put it on their class path. Log4NUnit also includes log4net logger methods and implements utility logging methods such as info, debug, warn and fatal. This work was prompted by the lack of published results concerning the implementation of Log4NUnit.
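    Log4NUnit itself is a .NET library, but the underlying idea of logging from inside unit tests translates directly to other xUnit frameworks. The sketch below is an illustrative Java analogue using JUnit 4 and the JDK's built-in java.util.logging; it is not the Log4NUnit API, and the calculation is a stand-in for real code under test.

        import static org.junit.Assert.assertEquals;

        import java.util.logging.Logger;
        import org.junit.Test;

        public class CalculatorLoggingTest {

            // A plain JDK logger; Log4NUnit plays a comparable role for NUnit and log4net.
            private static final Logger LOG =
                    Logger.getLogger(CalculatorLoggingTest.class.getName());

            @Test
            public void additionIsLoggedAlongsideTheAssertion() {
                int result = 2 + 2;                                   // stand-in for the code under test
                LOG.info("testing addition, result=" + result);       // message logged from within the test
                assertEquals(4, result);
            }
        }

    Logging next to the assertion documents the test settings and observed values at execution time, which is exactly the benefit the abstract attributes to Log4NUnit.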
  • Introduction to Unit Testing
    INTRODUCTION TO UNIT TESTING
    In C#, you can think of a unit as a method. You thus write a unit test by writing something that tests a method. For example, let's say that we had the aforementioned Calculator class, that it contained an Add(int, int) method, and that you want to write some code to test that method.

        public class CalculatorTester
        {
            public void TestAdd()
            {
                var calculator = new Calculator();

                if (calculator.Add(2, 2) == 4)
                    Console.WriteLine("Success");
                else
                    Console.WriteLine("Failure");
            }
        }

    Let's take a look now at what some code written for an actual unit test framework (MSTest) looks like.

        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]
            public void TestMethod1()
            {
                var calculator = new Calculator();
                Assert.AreEqual(4, calculator.Add(2, 2));
            }
        }

    Notice the attributes, TestClass and TestMethod. Those exist simply to tell the unit test framework to pay attention to them when executing the unit test suite. When you want to get results, you invoke the unit test runner, and it executes all methods decorated like this, compiling the results into a visually pleasing report that you can view. Let's take a look at your top three unit test framework options for C#.
    MSTest/Visual Studio: MSTest was actually the name of a command line tool for executing tests. MSTest ships with Visual Studio, so you have it right out of the box, in your IDE, without doing anything. With MSTest, getting set up is as easy as File -> New Project. Then, when you write a test, you can right-click on it and execute it, having your result displayed in the IDE. One of the most frequent knocks on MSTest is that of performance.
  • Comparative Analysis of JUnit and TestNG Framework
    International Research Journal of Engineering and Technology (IRJET). e-ISSN: 2395-0056, p-ISSN: 2395-0072. Volume 05, Issue 05, May 2018. www.irjet.net.
    Comparative Analysis of JUnit and TestNG Framework. Manasi Patil (1), Mona Deshmukh (2). (1) Student, Dept. of MCA, VES Institute of Technology, Maharashtra, India. (2) Professor, Dept. of MCA, VES Institute of Technology, Maharashtra, India.
    Abstract: Testing is an important phase in the SDLC; testing can be manual or automated. Nowadays automation testing is widely used to find defects and to ensure the correctness and completeness of the software. Open-source frameworks such as Robot Framework, JUnit, Spock, NUnit, TestNG, Jasmine and Mocha can be used for automation testing. This paper compares JUnit and TestNG based on their features and functionalities.
    Key words: JUnit, TestNG, Automation Framework, Automation Testing, Selenium TestNG, Selenium JUnit.
    1. INTRODUCTION: Testing software manually is tedious work and takes a lot of time and effort. Automation saves a lot of time and money, and it also increases test coverage. TestNG is more flexible than JUnit and supports parametrization, parallel execution and data-driven testing. Table 1 compares different functionalities of the TestNG and JUnit frameworks.
    Table 1: Functionality, TestNG vs. JUnit
      Support for annotations: TestNG Yes, JUnit Yes
      Support for test suite initialization: TestNG Yes, JUnit No
      Support for test groups: TestNG Yes
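    To make the comparison concrete, the sketch below shows the same trivial check written once with JUnit 4 and once with TestNG, illustrating the annotation, grouping and data-provider differences the table refers to. The class names and values are illustrative assumptions, not code from the paper.

        // JUnit 4 version
        import static org.junit.Assert.assertEquals;
        import org.junit.Before;
        import org.junit.Test;

        public class AdditionJUnitTest {
            private int base;

            @Before                       // runs before every test method
            public void setUp() { base = 2; }

            @Test
            public void addsTwoNumbers() {
                assertEquals(4, base + 2);
            }
        }

        // TestNG version
        import static org.testng.Assert.assertEquals;
        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.DataProvider;
        import org.testng.annotations.Test;

        public class AdditionTestNGTest {
            private int base;

            @BeforeMethod                 // TestNG counterpart of JUnit's @Before
            public void setUp() { base = 2; }

            @DataProvider(name = "addends")      // data-driven testing built into TestNG
            public Object[][] addends() {
                return new Object[][] { {2, 4}, {3, 5} };
            }

            @Test(groups = "smoke", dataProvider = "addends")   // native test groups
            public void addsTwoNumbers(int addend, int expected) {
                assertEquals(base + addend, expected);
            }
        }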
  • A Large-Scale Study on the Usage of Testing Patterns That Address Maintainability Attributes: Patterns for Ease of Modification, Diagnoses, and Comprehension
    A Large-Scale Study on the Usage of Testing Patterns that Address Maintainability Attributes: Patterns for Ease of Modification, Diagnoses, and Comprehension. Danielle Gonzalez (1), Joanna C. S. Santos (1), Andrew Popovich (1), Mehdi Mirakhorli (1), Mei Nagappan (2). (1) Software Engineering Department, Rochester Institute of Technology, USA. (2) David R. Cheriton School of Computer Science, University of Waterloo, Canada.
    Abstract: Test case maintainability is an important concern, especially in open source and distributed development environments where projects typically have high contributor turnover with varying backgrounds and experience, and where code ownership changes often. Similar to design patterns, patterns for unit testing promote maintainability quality attributes such as ease of diagnoses, modifiability, and comprehension. In this paper, we report the results of a large-scale study on the usage of four xUnit testing patterns which can be used to satisfy these maintainability attributes. This is a first-of-its-kind study which developed automated techniques to investigate these issues across 82,447 open source projects, and the findings provide more insight into testing practices in open source projects.
    …studies [5], [10], [12], [21] emphasize that the maintainability and readability attributes of the unit tests directly affect the number of defects detected in the production code. Furthermore, achieving high code coverage requires evolving and maintaining a growing number of unit tests. Like source code, poorly organized or hard-to-read test code makes test maintenance and modification difficult, impacting defect identification effectiveness. In “xUnit Test Patterns: Refactoring Test Code”, Gerard Meszaros [18] presents a set of automated unit testing patterns, and promotes the idea of applying patterns to…
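    The excerpt does not name the four patterns the study measures, but two well-known maintainability-oriented patterns from Meszaros's catalogue, Creation Method and Assertion Message, give a flavour of what such patterns look like in practice. The JUnit 4 sketch below uses a hypothetical Invoice class and is illustrative only:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class InvoiceTest {

            // Creation Method: hides irrelevant construction detail so the test stays
            // readable, and only one place needs changing when the constructor evolves.
            private static Invoice newInvoiceWithOneLineItem(double price) {
                Invoice invoice = new Invoice("ACME Corp");
                invoice.addLineItem("widget", price);
                return invoice;
            }

            @Test
            public void totalEqualsSingleLineItemPrice() {
                Invoice invoice = newInvoiceWithOneLineItem(9.99);
                // Assertion Message: states the intent so a failure is easy to diagnose.
                assertEquals("total of a one-item invoice should equal that item's price",
                        9.99, invoice.total(), 0.001);
            }

            // Minimal hypothetical production class so the sketch compiles on its own.
            static class Invoice {
                private double total;
                Invoice(String customer) { }
                void addLineItem(String name, double price) { total += price; }
                double total() { return total; }
            }
        }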
  • A New Testing Framework for C-Programming Exercises
    Int'l Conf. Frontiers in Education: CS and CE (FECS'15). A New Testing Framework for C-Programming Exercises and Online Assessments. Dieter Pawelczak, Andrea Baumann, and David Schmudde. Faculty of Electrical Engineering and Computer Science, Universitaet der Bundeswehr Muenchen (UniBw M), Neubiberg, Germany.
    Abstract: Difficulties with learning a programming language are widespread in engineering education. The use of a single integrated programming environment for coding, debugging, automated testing and online assessment lowers the initial burden for novice programmers. We have developed the Virtual-C IDE especially for learning and teaching the C programming language, with an integrated framework for program visualizations, programming exercises and online assessments. A new enhancement of the IDE is an xUnit-like testing framework allowing, on the one hand, larger sets of small, test-based programming exercises and, on the other hand, simplifying the development of programming assignments. The integration of the new testing framework in the assessment system gives students better and more direct feedback on their programming achievements.
    …assessment and grading of programming assignments for the third year now. However, the original aim to have many small accompanying programming exercises for self-learning could not be established yet, due to the high effort of writing tests. In this paper we present a new testing framework which enormously reduces the effort of test development. Although this framework allows students to write their own tests, we do not plan to integrate test writing in the primer C programming course at the moment, as our curriculum covers software testing in the software engineering courses in the major terms.
    2 Review of related work: Software testing is a core topic in computer science.
  • xUnit Test Patterns and Smells
    xUnit Test Patterns and Smells: Improving Test Code and Testability Through Refactoring. Gerard Meszaros, [email protected]. Tutorial exercises and solutions available at http://tutorialslides.xunitpatterns.com, http://tutorialexercises.xunitpatterns.com and http://tutorialsolutions.xunitpatterns.com. Copyright 2008 Gerard Meszaros.
    Instructor Biography: Gerard Meszaros is an independent consultant specializing in agile development processes. Gerard built his first unit testing framework in 1996 and has been doing automated unit testing ever since. He is an expert in agile methods, test automation patterns, refactoring of software and tests, and design for testability. Gerard has applied automated unit and acceptance testing on projects ranging from full-on eXtreme Programming to traditional waterfall development, and technologies ranging from Java, Smalltalk and Ruby to PL/SQL stored procedures and SAP's ABAP. He is the author of the book xUnit Test Patterns: Refactoring Test Code.
    Tutorial Background:
    • Early XP projects suffered from high test maintenance cost and obscure, verbose tests.
    • Started documenting practices as Smells & Patterns at xunitpatterns.com.
    • Clients requested hands-on training: a 2-day computer-based course, available in Java, C#, C++ (other languages possible).
    • Condensed into half day & full day
  • Patterns of Test Oracles: A Testing Tool Perspective
    Patterns of Test Oracles: A Testing Tool Perspective. Yung-Pin Cheng, Dept. of CSIE, National Central University, Taiwan; Tsung-Lin Yang, Dept. of CSIE, National Central University, Taiwan.
    Abstract: Continuous integration (CI) has become a state-of-the-art software engineering practice in the modern software industry. The key practice in CI is automatic test regression. Successful test automation can provide immediate feedback to developers and thus encourages frequent code change, in the spirit of agile methods. Unfortunately, the cost of building test automation varies significantly across application domains, and it is often inevitable to adopt testing tools of different levels to ensure the effectiveness of test regression. When adopting these testing tools, developers are required to add test oracles to determine whether a test passed or failed. Nevertheless, there are different kinds of test oracles to choose, understand, and apply, and sometimes a testing tool may constrain the feasibility and selection of test oracles. It is therefore important for developers to understand the pros and cons of test oracles in different testing tools. In this paper, patterns of test oracles are described, particularly from a testing-tool perspective in practice. These patterns are collected to clarify possible confusion and misunderstanding in building test oracles.
    Categories and Subject Descriptors: H.5.2 [Software and Its Engineering]: Software testing and debugging, test oracles.
    General Terms: Software Testing.
    Additional Key Words and Phrases: Software Testing, Test Oracle, Test Automation, Regression Testing, Assertion.
    ACM Reference Format: Cheng, Y.-P. and Yang, T.-L. 2017. Patterns of Test Oracles: A Testing Tool Perspective.
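    A test oracle is the mechanism a test uses to decide pass or fail. The JUnit 4 sketch below contrasts two common oracle styles, a specified-value assertion and a comparison against a stored golden file; the report generator and file path are hypothetical illustrations, not examples from the paper.

        import static org.junit.Assert.assertEquals;

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import org.junit.Test;

        public class OracleStylesTest {

            // Oracle 1: specified value, the expected result is written directly in the test.
            @Test
            public void specifiedValueOracle() {
                assertEquals(4, 2 + 2);
            }

            // Oracle 2: golden file, the expected output was captured earlier and stored
            // next to the tests ("expected-report.txt" is a hypothetical path).
            @Test
            public void goldenFileOracle() throws Exception {
                String expected = new String(
                        Files.readAllBytes(Paths.get("src/test/resources/expected-report.txt")),
                        StandardCharsets.UTF_8);
                String actual = generateReport();   // hypothetical code under test
                assertEquals(expected, actual);
            }

            private String generateReport() {
                return "report";                    // stand-in implementation for the sketch
            }
        }

    The specified-value oracle is precise but must be hand-written per test; the golden-file oracle scales to large outputs but can silently encode bugs if the reference output was captured from faulty code, which is one of the trade-offs the paper's patterns aim to clarify.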
  • Test-Driven Development
    Test-Driven Development. David Janzen. Copyright ©2013 David S. Janzen.
    Outline:
    • What is TDD? An example; definitions
    • What is not TDD?
    • Where did TDD come from?
    • Why should I use TDD?
    • How can I apply TDD effectively?
    What is Test-Driven Development? TDD is a design (and testing) approach involving short, rapid iterations of a Test, Code, Refactor cycle. Unit tests are automated. The approach forces the programmer to consider the use of a method before the implementation of the method.
    TDD Example: Requirements. Ensure that passwords meet the following criteria:
    • Between 6 and 10 characters long
    • Contain at least one digit
    • Contain at least one upper case letter
    TDD Example: Write a test.

        import static org.junit.Assert.*;   // needed for JUnit
        import org.junit.Test;

        public class TestPasswordValidator {
            @Test
            public void testValidLength() {
                PasswordValidator pv = new PasswordValidator();
                assertEquals(true, pv.isValid("Abc123"));   // this is the teeth of the test
            }
        }

    The test cannot even run yet, because PasswordValidator does not exist! Writing the test first forces design decisions: the class name, constructor, method name, parameters and return type.
    TDD Example: Write the code.

        public class PasswordValidator {
            public boolean isValid(String password) {
                if (password.length() >= 6 && password.length() <= 10) {
                    return true;
                } else {
                    return false;
                }
            }
        }

    TDD Example: Refactor. Looking at the test again: do we really need an instance of PasswordValidator?
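    The excerpt stops at that refactoring question without showing an answer. One possible next step, offered here as a sketch rather than the slide deck's actual content, is to make the check static, since no instance state is needed, and to return the condition directly:

        public class PasswordValidator {
            // No instance state is required, so the check can be a static method,
            // and the if/else collapses into returning the boolean condition itself.
            public static boolean isValid(String password) {
                return password.length() >= 6 && password.length() <= 10;
            }
        }

    The test could then call PasswordValidator.isValid("Abc123") without creating an instance, and its behaviour would stay the same.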
  • Causal Factors, Benefits and Challenges of Test-Driven Development: Practitioner Perceptions
    Full citation: Buchan, J., Li, L., & MacDonell, S.G. (2011) Causal Factors, Benefits and Challenges of Test-Driven Development: Practitioner Perceptions, in Proceedings of the 18th Asia-Pacific Software Engineering Conference (APSEC 2011), Ho Chi Minh City, Vietnam, IEEE Computer Society Press, pp. 405-413. doi: 10.1109/APSEC.2011.44.
    Causal Factors, Benefits and Challenges of Test-Driven Development: Practitioner Perceptions. Jim Buchan, Ling Li, Stephen G. MacDonell. SERL, School of Computing and Mathematical Sciences, AUT University, Private Bag 92006, Auckland 1142, New Zealand. [email protected], [email protected], [email protected].
    Abstract: This report describes the experiences of one organization's adoption of Test Driven Development (TDD) practices as part of a medium-term software project employing Extreme Programming as a methodology. Three years into this project, the team's TDD experiences are compared with their non-TDD experiences on other ongoing projects. The perceptions of the benefits and challenges of using TDD in this context are gathered through five semi-structured interviews with key team members. Their experiences indicate that use of TDD has generally been positive, and the reasons for this are explored to deepen the understanding of…
    …in improvements to code quality [3], testing quality [4], and application quality [5], compared to the traditional Test-Last (TL) approach. It has also been claimed to improve overall development productivity, encourage early understanding of the scope of requirements (user stories), as well as potentially leading to enhanced developer job satisfaction and confidence [3]. In contrast, critics claim that the frequent changes to tests in TDD are more likely (than in TL) to cause test breakages, leading to costly rework and loss of productivity [6]. Boehm and Turner [6] also note that with TDD the consequences of developers having inadequate testing skills may be amplified, compared to the consequences for a TL approach.
  • unittest is Broken: Toward Composable, Shareable Test Framework Extensions
    unittest is Broken: Toward Composable, Shareable Test Framework Extensions. Collin Winter, Google, Inc. [email protected].
    0. Abstract: In this paper I examine the shortcomings and core design flaws of JUnit and JUnit-derived test frameworks, such as Python's PyUnit (aka unittest), focusing specifically on the programmer's ability to extend the framework. I then discuss the requirements for an extensible testing framework and introduce test_harness, an alternative framework designed from the ground up to address these fundamental issues. Finally, examples drawn from real-world, PyUnit-based test suites are reformulated using test_harness to demonstrate the power of the new framework.
    1. Introduction: When writing test suites, it is often desirable to mix in additional functionality in order to augment the tests themselves, e.g., the ability to mark certain tests as "expected to fail" or to indicate that a given test should be skipped in the presence of a given condition. While many testing frameworks exist that allow the creation of such augmentations and extensions, the design of these frameworks greatly limits the ability of the test writer to combine extensions in order to create the ideal testing environment for the current project. This lowers the incentive to share such framework extensions with the wider development community, thus both inhibiting software reuse and increasing the barriers to properly testing certain kinds of code. In this paper, I will explore the requirements for a testing framework that will allow extensions to be easily shared and combined, in the process contrasting these principles with those found in existing popular testing frameworks.