Microservices: A Performance Tester's Dream or Nightmare?

Simon Eismann (University of Würzburg, Würzburg, Germany), Cor-Paul Bezemer (University of Alberta, Edmonton, Canada), Weiyi Shang (Concordia University, Montreal, Quebec), Dušan Okanović (University of Stuttgart, Stuttgart, Germany), André van Hoorn (University of Stuttgart, Stuttgart, Germany)

ABSTRACT

In recent years, there has been a shift in software development towards microservice-based architectures, which consist of small services that focus on one particular functionality. Many companies are migrating their applications to such architectures to reap the benefits of microservices, such as increased flexibility, scalability, and a smaller granularity of the functionality offered by a service. On the one hand, the benefits of microservices for functional testing are often praised, as the focus on one functionality and their smaller granularity allow for more targeted and more convenient testing. On the other hand, using microservices has consequences (both positive and negative) for other types of testing, such as performance testing. Performance testing is traditionally done by establishing the baseline performance of a software version, which is then used to compare the performance testing results of later software versions. However, as we show in this paper, establishing such a baseline performance is challenging in microservice applications.

In this paper, we discuss the benefits and challenges of microservices from a performance tester's point of view. Through a series of experiments on the TeaStore application, we demonstrate how microservices affect the performance testing process, and we demonstrate that it is not straightforward to achieve reliable performance testing results for a microservice application.

KEYWORDS

Microservices, DevOps, Performance, Regression testing

ACM Reference Format:
Simon Eismann, Cor-Paul Bezemer, Weiyi Shang, Dušan Okanović, and André van Hoorn. 2020. Microservices: A Performance Tester's Dream or Nightmare?. In Proceedings of the 2020 ACM/SPEC International Conference on Performance Engineering (ICPE '20), April 20–24, 2020, Edmonton, AB, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3358960.3379124

1 INTRODUCTION

Microservices [24, 32, 38] have become a popular trend in software architecture in recent years for building large-scale distributed systems. Microservices employ architectural design principles leading to explicit domain-based bounded contexts and loose coupling [40], exploit modern cloud-based technologies including containerization and self-healing [17], and are suitable for modern software engineering paradigms such as DevOps [5], including agile development methods and continuous delivery.

Similar to other software systems, product quality [23] plays an important role for microservices and microservice-oriented architectures. An important non-functional quality attribute is performance, which describes a system's properties with respect to timeliness and resource usage, including aspects such as scalability and elasticity [22]. Timeliness may seem to be achievable more easily through cloud features such as auto-scaling. However, resource usage becomes extremely relevant, because inefficient architectures and implementations lead to high costs under pay-per-use charging models for cloud resources.

The performance of a system can be assessed through several performance engineering techniques, such as performance testing [8, 25]. Performance testing is already considered challenging in traditional systems [30]. Even worse, the architectural, technological, and organizational changes that are induced by microservices have an impact on performance engineering practices as well [21]. While some of these changes may facilitate performance testing, others may pose considerable challenges.

In this paper, we discuss whether microservices are a performance tester's dream or nightmare. In particular, we run a series of experiments on the TeaStore [51], a reference microservice application, to demonstrate the challenges that come with performance testing microservices. Our experiments address the following research questions:

RQ1: How stable are the execution environments of microservices across repeated runs of the experiments? Our experiments demonstrate that the execution environments of microservices are not stable across experiment runs, even when the total number of provisioned instances of a microservice is kept the same.

RQ2: How stable are the performance testing results across repeated runs of the experiments? Our experiments show that although the CPU busy time may not be significantly different between scenarios, there often exist statistically significant differences in response time. Such differences may have a direct negative impact on the user-perceived performance, and make analyzing the performance test results of microservices more challenging.

RQ3: How well can performance regressions in microservices be detected? It is possible to detect performance regressions in microservice applications. However, one needs to ensure that enough repetitions of the performance tests are done to deal with the variance in the deployment of the services and the environments in which they run (see the illustrative sketch below).
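The comparisons behind RQ2 and RQ3 boil down to deciding whether response-time samples from two runs (or two versions) differ by more than run-to-run noise. The following Python sketch illustrates one common way to make that decision with a non-parametric test over several repetitions; the data, the number of repetitions, and the choice of the Mann-Whitney U test are illustrative assumptions here and are not taken from the paper's experimental setup.

    # A minimal, illustrative sketch (not the paper's actual tooling): decide whether
    # response-time samples from a baseline version and a candidate version differ
    # by more than run-to-run noise, across several repeated test runs.
    import numpy as np
    from scipy.stats import mannwhitneyu

    def is_significantly_different(baseline_ms, candidate_ms, alpha=0.05):
        """Two-sided Mann-Whitney U test on two latency samples (in milliseconds).
        Non-parametric, so it does not assume normally distributed response times."""
        _, p_value = mannwhitneyu(baseline_ms, candidate_ms, alternative="two-sided")
        return p_value < alpha

    # Hypothetical measurements: five repetitions per version, as RQ3 suggests,
    # so that deployment variance is not mistaken for a performance regression.
    rng = np.random.default_rng(seed=1)
    baseline_runs = [rng.normal(loc=120, scale=15, size=500) for _ in range(5)]
    candidate_runs = [rng.normal(loc=128, scale=15, size=500) for _ in range(5)]

    flagged = sum(is_significantly_different(b, c)
                  for b, c in zip(baseline_runs, candidate_runs))
    print(f"{flagged} of 5 repetition pairs show a significant response-time difference")

A non-parametric test avoids assuming normally distributed latencies; in practice one would also report an effect size (e.g., Cliff's delta) so that statistically significant but negligible differences are not flagged as regressions.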
Our results show that performance testing microservices is not straightforward and comes with additional challenges compared to performance testing 'traditional' software. Hence, future research is necessary to investigate how to best tackle these additional challenges. We provide a peer-reviewed replication package [16] to assist other researchers in the reproduction and extension of our case study.

In the rest of this paper, first we discuss related work (Section 2), the characteristics of microservices (Section 3), and the ideal conditions for performance testing (Section 4). Section 5 discusses the benefits of microservices for performance testing. Section 6 presents our experimental setup and results. Section 7 reflects on our experiments and discusses the challenges of performance testing microservices and promising future research directions. Section 8 discusses the threats to the validity of our experiments. We conclude the paper in Section 9.

2 RELATED WORK

In this section, we discuss the prior research that is related to this paper.

The approach combines micro- and application benchmarks and is particularly designed for cloud benchmarking environments. In addition, Scheuner and Leitner [27] examine the problem from a different angle, i.e., the quality of performance microbenchmarks, and propose a quality metric that can be used to assess them. Costa et al. [12] proposed a tool to further improve the quality of microbenchmarks by detecting bad practices in such benchmarks. Acknowledging the variation and instability of cloud environments, He et al. [20] design an approach that estimates a sufficient duration for performance tests that are conducted in a cloud environment.

It can be noted that the focus of these works was not on the performance testing of microservice-based applications. Although they demonstrate promising results of using cloud environments for performance testing, microservice-based architectures may have a negative impact on the performance testing process due to the variation and instability of cloud environments.

2.2 Microservices

Due to the wide practical adoption of microservice architectures, there exists a body of research discussing visions on the testing of microservices. This research, however, often focuses on functional tests, e.g., [38]. Nonetheless, performance is one of the major aspects of consideration when adopting microservices. Heinrich et al. [21] argued that traditional performance modeling and monitoring approaches in most cases cannot be reused for microservices. Knoche [26] presented a simulation-based approach for transforming monolithic applications into microservice-oriented applications while preserving their performance. Aderaldo et al. [1] discussed several reference applications for microservices, and their advantages and disadvantages in terms of being used as a benchmark application. Dragoni et al. [14] argued that network overhead will be the major performance challenge for microservice-oriented applications. Jamshidi
