Understanding the performance of container execution environments

Guillaume Everarts de Velp, Etienne Rivière and Ramin Sadre
[email protected]
EPL, ICTEAM, UCLouvain, Belgium

Abstract
Many application server backends leverage container technologies to support workloads formed of short-lived, but potentially I/O-intensive, operations. The latency at which container-supported operations complete impacts both the users' experience and the throughput that the platform can achieve. This latency is a result of both the bootstrap and execution time of the containers and is greatly impacted by the performance of the I/O subsystem. Appropriately configuring the container environment and technology stack to obtain good performance is not an easy task, due to the variety of options and the poor visibility on their interactions. We present in this paper a benchmarking tool for the multi-parametric study of container bootstrap time and I/O performance, allowing us to understand such interactions within a controlled environment. We report the results obtained by evaluating a large number of environment configurations. Our conclusions highlight differences in support and performance between container runtime environments and I/O subsystems.

CCS Concepts • Cloud computing;

ACM Reference Format:
Guillaume Everarts de Velp, Etienne Rivière and Ramin Sadre. 2020. Understanding the performance of container execution environments. In Containers Workshop on Container Technologies and Container Clouds (WOC '20), December 7–11, 2020, Delft, Netherlands. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3429885.3429967

1 Introduction
Containers have emerged as a standard approach for environments supporting on-demand, short-lived execution of computational tasks. Examples of such environments include Function-as-a-Service (FaaS) platforms [11] and edge computing environments [8, 9]. More specifically, our motivating example is an automatic grading platform named INGInious [2, 3]. This platform is used extensively at UCLouvain and other institutions around the world to provide computer science and engineering students with automated feedback on programming assignments, through the execution of series of unit tests prepared by instructors. It is necessary that student code and the testing runtime run in isolation from each other. Containers answer this need perfectly: they allow students' code to run in a controlled and reproducible environment while reducing risks related to ill-behaved or even malicious code.

Service latency is often the most important criterion for selecting a container execution environment. Slow response times can impair the usability of an edge computing infrastructure, or result in student frustration in the case of INGInious. Higher service latencies also result in higher average resource utilization and, therefore, in lower achievable throughput. In the context of a FaaS platform, this can result in a lower return on investment. For INGInious, where large-audience coding exams may involve hundreds of submissions per minute, it results in increased resource requirements.

Many factors influence service latency. We focus in this paper on the impact of the I/O subsystem. We consider its impact on latency both in the bootstrap phase (i.e., prior to executing a function, serving a request with an edge service, or running an INGInious task) and in the actual execution phase (i.e., when accessing a database or using the file system).

A difficulty that deployers of container-based execution environments face is the diversity of choices available to support their workloads, i.e., the technical components of their container execution environment. This choice starts, obviously, with the actual container runtime environment. In addition to container solutions based on Linux's namespaces and cgroups functionalities, such as runc, crun or LXC, alternatives are rapidly emerging that blur the lines between OS-level and machine-level virtualization. Lightweight machine virtualization technologies such as Firecracker [1] reduce the delay for bootstrapping an actual virtual machine, while retaining its better isolation properties compared to OS-level virtualization. Other solutions, such as Kata Containers [7], allow running containers directly over a hypervisor, be it Firecracker or KVM. Alternatives are not limited to the runtime environment. Deployers must also make choices for the storage subsystem that influence the performance of I/O-intensive container workloads: we identify, for instance, no fewer than 8 different storage drivers that can interface with the abovementioned container runtime solutions.
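How much the runtime choice matters can be checked with a simple measurement. The sketch below is only an illustration, not the paper's benchmarking tool: it times the cold start of a minimal container under several runtimes, assuming a Docker installation in which the alternative runtimes have been registered with the daemon; the runtime names are placeholders that depend on the local setup.

    # Illustrative sketch: compare cold-start latency across container runtimes.
    # Assumes Docker is installed and that alternative runtimes (e.g., a Kata
    # runtime) are registered in the daemon configuration; the names below are
    # placeholders for whatever the local installation uses.
    import subprocess
    import time

    RUNTIMES = ["runc", "crun", "kata-runtime"]

    def bootstrap_latency(runtime: str) -> float:
        """Time a minimal `docker run` that boots a container, executes a
        no-op command, and removes the container afterwards."""
        start = time.monotonic()
        subprocess.run(
            ["docker", "run", "--rm", "--runtime", runtime, "alpine", "true"],
            check=True, capture_output=True,
        )
        return time.monotonic() - start

    for rt in RUNTIMES:
        print(f"{rt}: {bootstrap_latency(rt):.3f} s")

Replacing the no-op command with an I/O-heavy one extends the same measurement from the bootstrap phase to the execution phase.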
Contribution We are interested in this paper in the impact of the container environment on service latency, both for bootstrap and execution. It is difficult to identify which component impacts these latencies the most in production environments, given their intrinsic complexity and the influence of multiple external factors, such as the existing load on the platform or the co-existence of other workloads.

Our first contribution is therefore a benchmarking tool that helps to isolate the factors related to the configuration of the container execution environment by performing a systematic experimental analysis of a large number of valid configurations, i.e., combinations of possible technological choices at the various levels of the container execution environment stack. We identify the components that influence I/O performance, and therefore service latency, as: the container manager, the container runtime, the storage driver, the base image used for the container, and finally the control group mechanism. Considering all compatible alternatives for these components, the benchmarking tool evaluates a total of 72 valid configurations. These configurations reflect the requirements of the INGInious application, but are also characteristic of the requirements of other service backends (i.e., the need for isolation, resource management, network access and a writable filesystem).
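The core of such a multi-parametric exploration can be expressed as the enumeration of a cross-product of component choices, pruned by a compatibility filter. The sketch below only illustrates the idea: the component names and the validity rule are assumptions made for the example, not the actual matrix evaluated by the tool.

    # Illustrative enumeration of container execution environment
    # configurations. The component lists and the validity rule are
    # placeholders, not the paper's actual configuration matrix.
    from itertools import product

    MANAGERS = ["docker", "podman", "lxd"]          # container managers
    RUNTIMES = ["runc", "crun", "lxc", "kata-fc", "kata-qemu"]
    STORAGE_DRIVERS = ["overlay2", "btrfs", "zfs", "devicemapper"]

    def is_valid(manager: str, runtime: str, driver: str) -> bool:
        """Placeholder compatibility filter: assume LXC containers are only
        driven through LXD, while the other managers use OCI runtimes."""
        if runtime == "lxc":
            return manager == "lxd"
        return manager != "lxd"

    configs = [c for c in product(MANAGERS, RUNTIMES, STORAGE_DRIVERS)
               if is_valid(*c)]
    print(f"{len(configs)} valid configurations to benchmark")

Each surviving combination is then benchmarked under an identical resource budget, so that differences in measured latency can be attributed to the configuration rather than to the environment.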
As our second contribution, we present in this paper the key insights and results that we obtained by running our tool with five workloads representative of containers making intensive use of the I/O subsystem. In a nutshell, we highlight the importance of using lightweight and low-level container runtimes and the important influence of the storage drivers on I/O subsystem performance and, as a result, on perceived bootstrap and execution latencies. Our complete results are available in the companion report [5]. Our datasets and benchmarks are also publicly available [4].

Outline We first present our methodology (§2). We then detail the most significant results from our experiments (§3). We finally review related work (§4) and conclude (§5).

2 Methodology
We detail how we select different container execution environments and how we evaluate their performance.

2.1 Metrics and tested components
We build a benchmarking tool that helps to quantify the impact of different container execution environments on container bootstrap time and I/O performance. The tool assigns every test run the same amount of resources, that is, one logical CPU core and 1 GB of memory.

Deploying a container-based application requires a combination of different complementary technologies. In the following, we will call such a combination a container execution environment. Figure 1 gives an overview of its components:

• The container manager is a user-friendly tool that allows managing the lifecycle of containers, attaching their execution to a terminal, etc.
• The container runtime is the system support mechanism responsible for creating, starting, and stopping containers and for allowing other low-level operations.
• A storage driver provides and manages the filesystem of a container. It ensures that each container's filesystem is isolated from those of other co-located containers.
• Control groups (cgroups) are the Linux mechanism used for OS-level virtualization to control resource usage for the processes belonging to a specific container.
• The base container image contains the operating system to run inside the container.

Figure 1. Components of a container execution environment.

For each of these components, various concrete implementations are available in the container ecosystem. Table 1 details the ones that we have considered in our benchmarking tool. We consider three container managers and five container runtime environments, three based on OS-level virtualization (runc, crun and LXC) and two based on lightweight machine-level virtualization (Kata containers over Firecracker and over KVM).
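To make the control group mechanism concrete, the sketch below caps a process tree at the per-run budget mentioned above, one logical CPU core and 1 GB of memory. It is a minimal illustration assuming a cgroup-v2 system, root privileges, and a hypothetical cgroup name; container managers apply equivalent limits internally, e.g., through flags such as docker run --cpus 1 --memory 1g.

    # Minimal sketch of per-run resource capping with cgroup v2 (assumes
    # root privileges and that the cpu and memory controllers are enabled
    # in the parent's cgroup.subtree_control). The cgroup name below is
    # hypothetical.
    import os

    CGROUP = "/sys/fs/cgroup/bench-run"
    os.makedirs(CGROUP, exist_ok=True)

    # At most 100 ms of CPU time per 100 ms period, i.e., one logical core.
    with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
        f.write("100000 100000")

    # Hard memory limit of 1 GB.
    with open(os.path.join(CGROUP, "memory.max"), "w") as f:
        f.write(str(1024 ** 3))

    # Any process whose PID is written to cgroup.procs (e.g., a container's
    # init process) becomes subject to these limits, together with all of
    # its descendants.
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))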