An Exploratory Study of the State of Practice of Performance Testing in Java-Based Open Source Projects

PeerJ Preprints | https://doi.org/10.7287/peerj.preprints.2496v2 | CC BY 4.0 Open Access | rec: 6 Oct 2016, publ: 6 Oct 2016

Philipp Leitner
Department of Informatics, University of Zurich, Switzerland
leitner@ifi.uzh.ch

Cor-Paul Bezemer
Software Analysis and Intelligence Lab (SAIL), Queen's University, Canada
[email protected]

* This paper is submitted as a full research paper.

ABSTRACT

The usage of open source (OS) software is nowadays widespread across many industries and domains. While the functional quality of OS projects is considered to be on par with that of closed-source software, much is unknown about their quality in terms of non-functional attributes, such as performance. One challenge for OS developers is that, unlike for functional testing, there is a lack of accepted best practices for performance testing.

To reveal the state of practice of performance testing in OS projects, we conduct an exploratory study on 111 Java-based OS projects from GitHub. We study the performance tests of these projects from five perspectives: (1) the developers, (2) the size, (3) the organization, and (4) the types of performance tests, as well as (5) the tooling used for performance testing.

First, in a quantitative study, we show that writing performance tests is not a popular task in OS projects: performance tests form only a small portion of the test suite, are rarely updated, and are usually maintained by a small group of core project developers. Second, we show through a qualitative study that even though many projects are aware that they need performance tests, developers appear to struggle with implementing them. We argue that future performance testing frameworks should provide better support for low-friction testing, for instance via non-parameterized methods or performance test generation, as well as focus on a tight integration with standard continuous integration tooling.

Keywords

performance testing; performance engineering; open source; mining software repositories; empirical software engineering

1. INTRODUCTION

The usage of open source (OS) software libraries and components is by now widespread across many, if not all, industries and domains. Studies claim that, from a functional perspective, the quality of OS software is on par with comparable closed-source software [1]. OS projects have largely converged on accepted best practices [5] (e.g., unit testing) and widespread standard tooling for functional testing, such as JUnit in the Java ecosystem.

However, the quality of OS software in terms of non-functional attributes, such as reliability, scalability, or performance, is less well understood. For example, Heger et al. [10] state that performance bugs in OS software go undiscovered for longer than functional bugs, and that fixing them takes longer. One reason for the longer fixing time may be that performance bugs are notoriously hard to reproduce [26].
As many OS software libraries (such as apache/log4j or the apache/commons collection of libraries) are used almost ubiquitously across a large span of other OS or industrial applications, a performance bug in such a library can lead to widespread slowdowns. Hence, it is of utmost importance that the performance of OS software is well-tested.

Despite this importance of performance testing for OS software, our current understanding of how developers conduct performance, stress, or scalability tests is lacking. There is currently no study that analyzes whether and how real projects conduct performance testing, which tools they use, and what OS software developers struggle with. As such, there exist no guidelines for how OS developers should test the performance of their projects.

In this paper, we conduct an exploratory study on the state of practice of performance testing in Java-based OS software. We study 111 Java-based projects from GitHub that contain performance tests. We focus on five perspectives:

1. The developers who are involved in performance testing. In most studied OS projects, a small group of core developers creates and maintains the performance tests. Our findings suggest that, in general, there are no developers in the studied OS projects who focus solely on performance testing.

2. The extent of performance testing. In most studied OS projects, the performance tests are small in terms of lines of code, and do not change often. We did not observe a statistically significant difference between the size of the performance tests of projects that make claims about their performance (e.g., "fastest implementation of X") and projects that do not make such claims.

3. The organization of performance tests. The Java OS software community has not yet converged on a common understanding of how to conduct and organize performance tests. Developers freely mix performance tests with unit tests, and code comments are used to describe how a performance test should be executed (see the sketch after this list).

4. The types of performance tests. Half of the studied OS projects have one or two performance smoke tests for testing the performance of the main functionality of a project. 36% of the studied projects use microbenchmarks for performance testing. Less popular types of performance tests are one-shot performance tests (i.e., tests that focus on one very specific performance issue), performance assertions, and implicit performance tests (i.e., measurements that are done during the execution of functional tests).

5. The tooling and frameworks used for performance tests. While there exist dedicated tools and frameworks for performance tests, their adoption is not widespread in the Java OS software community. Only 16% of the studied projects use a dedicated framework such as JMH or Caliper.
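To make the pattern from perspectives 3 and 4 concrete, the following is a minimal, hypothetical sketch of a performance smoke test with a performance assertion, embedded in an ordinary JUnit 4 suite. The sorting workload, the class name, and the ad-hoc 500 ms threshold are illustrative assumptions, not code from any studied project.

    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    import org.junit.Test;

    // Hypothetical sketch: a performance smoke test mixed into a regular
    // JUnit 4 suite, as described above. Workload and threshold are
    // illustrative assumptions only.
    public class SortPerformanceTest {

        // NOTE: run on an otherwise idle machine and repeat a few times to
        // smooth out JIT and GC noise -- in the pattern described above,
        // execution instructions like these live in comments, not tooling.
        @Test
        public void sortLargeListIsFastEnough() {
            List<Integer> data = new ArrayList<>();
            Random random = new Random(42); // fixed seed for repeatability
            for (int i = 0; i < 1_000_000; i++) {
                data.add(random.nextInt());
            }

            long start = System.nanoTime();
            Collections.sort(data);
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

            // A simple performance assertion: fail the test if the operation
            // exceeds an ad-hoc threshold.
            assertTrue("sorting took " + elapsedMillis + " ms", elapsedMillis < 500);
        }
    }

Because such a test runs as part of the functional suite, a noisy build machine can make it fail spuriously, which hints at why dedicated benchmarking frameworks exist.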
Our findings imply that practitioners who use OS software in their projects must thoroughly test the consequences of doing so on the performance of their own projects, as developers should not assume that OS software necessarily follows stringent performance testing practices. From our exploratory study, it follows that writing performance tests is not a popular task in OS projects. Performance tests form only a small portion of the test suite, are rarely updated, and are usually maintained by a small group of core project developers. Further, we argue that future performance testing frameworks should provide better support for low-friction testing, for instance via non-parameterized methods or performance test generation, as well as focus on a tight integration with standard continuous integration tooling.

[...] (a method inspired by software repository mining that repeatedly benchmarks different revisions of a software system to discover historical, unreported performance bugs) has shown that performance problems can originate from a wide range of code changes, including simple updates of dependencies [2, 22]. A system following a similar basic operating principle is PerfImpact [18], which aims to find changes and input combinations that lead to performance regressions. PRA (performance risk analysis) is an approach used to narrow down commits that led to a (previously detected) performance regression [13]. Baltes et al. [4] study how developers locate performance problems, and conclude that standard tools do not provide sufficient support for understanding the runtime behavior of software.

Problems of Performance Testing. Early on, studies reported that industrial practice in performance testing is not on the same level as functional testing [25], even for large, performance-sensitive enterprise applications. Historically, performance testing has been made difficult by two peculiarities. Firstly, performance testing of higher-level programming languages, including interpreted languages and those running in a virtual machine, as in the case of Java, is difficult due to the high number of confounding factors introduced by features of the program runtime environment. For instance, in the case of Java, just-in-time compilation, hardware platforms, virtual machine implementations, or garbage collection runs can all significantly impact test results [8]. Secondly, performance test results often depend strongly on the benchmarking workload used, such as the load patterns used for testing. Hence, writing expressive performance tests requires careful identification and modeling of representative workloads or production usage patterns [3, 14].
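Dedicated frameworks such as JMH (mentioned above) exist precisely to control these JVM-level confounding factors. As a hedged illustration, a minimal JMH microbenchmark could look as follows; the string-concatenation workload and all parameter values are illustrative assumptions, not taken from any studied project.

    import java.util.concurrent.TimeUnit;

    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Fork;
    import org.openjdk.jmh.annotations.Measurement;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;
    import org.openjdk.jmh.annotations.Warmup;

    // Minimal JMH microbenchmark sketch. JMH addresses the confounding
    // factors discussed above: warmup iterations let the JIT compiler
    // stabilize before measurement, and separate forks isolate runs
    // from each other.
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 5)        // discard early, JIT-distorted iterations
    @Measurement(iterations = 10)  // measured iterations per fork
    @Fork(3)                       // repeat in 3 fresh JVMs
    @State(Scope.Benchmark)
    public class StringConcatBenchmark {

        private String[] words;

        @Setup
        public void setUp() {
            words = new String[1000];
            for (int i = 0; i < words.length; i++) {
                words[i] = "word" + i;
            }
        }

        @Benchmark
        public String concatWithStringBuilder() {
            StringBuilder sb = new StringBuilder();
            for (String word : words) {
                sb.append(word);
            }
            // return the result so JMH can prevent dead-code elimination
            return sb.toString();
        }
    }

In the standard JMH Maven archetype setup, such a benchmark is packaged with mvn package and executed with java -jar target/benchmarks.jar; note, however, that addressing the second peculiarity, choosing a representative workload, remains the test author's responsibility.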
