
Container Orchestration with Cost-Efficient Autoscaling in Cloud Computing Environments

Maria A. Rodriguez and Rajkumar Buyya
Cloud Computing and Distributed Systems (CLOUDS) Laboratory
School of Computing and Information Systems
The University of Melbourne, Australia

Abstract

Containers are standalone, self-contained units that package software and its dependencies together. They offer lightweight performance isolation, fast and flexible deployment, and fine-grained resource sharing. In recent years they have gained popularity as a means of improving application management and deployment, and they are widely used by organizations to deploy increasingly diverse workloads such as web services, big data, and IoT in either proprietary clusters or cloud data centres. This has led to the emergence of container orchestration platforms, which are designed to manage the deployment of containerized applications in large-scale clusters. The majority of these platforms are tailored to optimize the scheduling of containers on a fixed-size private cluster, but they are not able to autoscale the size of the cluster or to consider features specific to public cloud environments. In this work, we propose a comprehensive container resource management approach with three objectives. The first is to optimize the initial placement of containers by efficiently scheduling them on existing resources. The second is to autoscale the number of resources at runtime based on the cluster's current workload. The third is a rescheduling mechanism that further supports the efficient use of resources by consolidating applications into fewer VMs when possible. Our algorithms are implemented as a plug-in scheduler for the Kubernetes platform. We evaluated our framework and the effectiveness of the proposed algorithms on an Australian national cloud infrastructure. Our experiments demonstrate that considerable cost savings can be achieved by dynamically managing the cluster size and the placement of applications; the proposed approaches reduce cost by 58% when compared to the default Kubernetes scheduler.

1. Introduction

Cloud-native architectures are becoming a popular approach to structuring and deploying large-scale distributed applications. In contrast to traditional monolithic architectures, they are composed of several smaller, specialized processes, often referred to as microservices, that interact with each other to provide services to users. Container technologies such as Docker [1] and Linux Containers (LXC) [2] provide a lightweight environment for deploying these microservices, either in private data centres or in virtual clusters in public cloud environments.

Containers are standalone, self-contained units that package software and its dependencies together. Like Virtual Machines (VMs), containers are a virtualization technique that enables the resources of a single compute node to be shared between multiple users and applications. However, while VMs virtualize resources at the hardware level, containers do so at the operating-system level. They are isolated user-space processes that, despite running on a shared operating system (OS), create the illusion of being deployed on their own isolated OS. This makes them a lightweight virtualization approach that enables application environment isolation, fast and flexible deployment, and fine-grained resource sharing. Verma et al. [3], for instance, demonstrated how the use of containers at Google improved resource utilization in terms of the number of machines needed to host a given workload.
Containerized applications are deployed on a cluster of compute nodes rather than on a single machine. Organizations are increasingly relying on this technology to deploy the diverse workloads derived from modern-day applications, such as web services, big data, and IoT, and containers have also been found suitable for hosting HPC microservices [20]. This creates the need for container orchestration middleware such as Kubernetes [4], Docker Swarm [6], and Apache Mesos [7]. These systems are responsible for efficiently managing and deploying heterogeneous distributed applications, packaged as containers, on a set of hosts. A particularly important problem to address in this context is therefore the scheduling, or placement, of containerized applications on the available hosts. As applications are submitted for deployment, the orchestration system must place them as quickly as possible on one of the available resources, honouring each application's specific constraints while aiming to maximize the utilization of compute resources and thereby reduce the organization's operational cost. This should be done while also considering factors such as the capacity of the available machines, application performance and Quality of Service (QoS) requirements, fault tolerance, and energy consumption, among others. Although the aforementioned frameworks address this issue to an extent, further research is required to better optimize the use of resources under different circumstances and for different application requirements.

In cloud environments, containers and VMs can be used together to give users a great deal of flexibility in deploying, structuring, and managing their applications. In this case, not only should the number of containers scale to meet the requirements of applications, but the number of available compute resources should also adjust to adequately host the required containers. At any given point in time, a cloud container orchestration system should avoid underutilized VMs, both to control cost and, potentially, to save energy. It should also be capable of dynamically adding worker VMs to the cluster to avoid degrading application performance through resource overutilization. Autoscaling the number of VMs is therefore essential both to meet the performance goals of containerized applications deployed on public clouds and to reduce the operational cost of leasing the required infrastructure. This increases the complexity of the container placement and scheduling problem described above.

Existing container orchestration frameworks provide bin-packing algorithms to schedule containers on a fixed-size cluster, but they cannot autoscale the size of the cluster. Instead, this decision is left to the user or to external frameworks at the platform level; an example of such a scenario is using Kubernetes for the placement of containers and Amazon's autoscaling mechanism to manage the cluster size. This is not only impractical but also potentially inefficient, as these external entities have limited information about the container workload. As a result, we argue that a cloud-centric container orchestration framework capable of making all of the resource management decisions is essential to successfully optimizing the use of resources in virtualized environments.
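To make the placement step concrete, the sketch below illustrates a best-fit bin-packing heuristic of the general kind such orchestrators apply: each incoming container is assigned to the feasible VM with the least remaining capacity, so load is packed onto as few VMs as possible. This is a minimal illustration, not the actual scheduler of any of these platforms; the Container and VM types and the CPU-based scoring rule are our own simplifying assumptions.

```go
package main

import "fmt"

// Container describes the resource request of one containerized task.
type Container struct {
	Name     string
	CPU      float64 // requested CPU cores
	MemoryGB float64 // requested memory in GB
}

// VM is a worker node with a fixed capacity and a running allocation.
type VM struct {
	Name             string
	CPUCap, MemCap   float64
	CPUUsed, MemUsed float64
}

// fits reports whether c can be placed on v without exceeding capacity.
func (v *VM) fits(c Container) bool {
	return v.CPUUsed+c.CPU <= v.CPUCap && v.MemUsed+c.MemoryGB <= v.MemCap
}

// bestFit places c on the feasible VM with the least remaining CPU,
// which tends to fill partially used VMs before spilling onto empty ones.
func bestFit(c Container, vms []*VM) *VM {
	var best *VM
	for _, v := range vms {
		if !v.fits(c) {
			continue
		}
		if best == nil || v.CPUCap-v.CPUUsed < best.CPUCap-best.CPUUsed {
			best = v
		}
	}
	if best != nil {
		best.CPUUsed += c.CPU
		best.MemUsed += c.MemoryGB
	}
	return best
}

func main() {
	vms := []*VM{
		{Name: "vm-1", CPUCap: 4, MemCap: 16},
		{Name: "vm-2", CPUCap: 4, MemCap: 16},
	}
	for _, c := range []Container{
		{"web", 1, 2}, {"db", 2, 8}, {"cache", 1, 4},
	} {
		if v := bestFit(c, vms); v != nil {
			fmt.Printf("%s -> %s\n", c.Name, v.Name)
		} else {
			fmt.Printf("%s is pending: no VM has capacity\n", c.Name)
		}
	}
}
```

Note that on a fixed-size cluster this heuristic simply leaves a container pending when no VM has capacity; deciding whether to provision a new VM at that point is exactly the autoscaling decision that these frameworks delegate to external entities.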
Another possible optimization to existing systems relates to rescheduling, in particular rescheduling for defragmentation or for autoscaling when the workload includes long-running tasks. Regardless of how good the initial placement of these tasks is, performance will degrade over time as the workload changes. This leads to an inefficient use of resources in which the load is thinly spread across nodes, or the free resources on individual nodes are insufficient to run other applications. Rescheduling applications that tolerate a component being shut down and restarted enables the orchestration system to consolidate and rearrange tasks so that more applications can be deployed on the same number of nodes, or so that some nodes can be shut down to reduce cost or save energy. Similarly, if more nodes are added to the cluster, rescheduling some of the existing applications onto the new nodes may be beneficial in the long term.

Key Contribution: This paper proposes a comprehensive container resource management algorithm with three objectives. The first is to optimize the initial placement of containers so that the number of worker VMs is minimized while the memory and CPU requirements of the containerized applications are met. The second is to autoscale the number of worker VMs at runtime based on the cluster's current workload. On one hand, scaling out enables the current resource demand to be met while reducing the time containers wait to be placed and launched; on the other hand, scaling in enables applications to be relocated so that underutilized VMs can be shut down to reduce the infrastructure cost. Finally, a rescheduling mechanism further supports the efficient use of resources by consolidating applications onto fewer VMs when possible, either to avoid unnecessary scale-out operations or to encourage scale-in operations (a simplified sketch of this scaling logic is given at the end of this section).

The rest of the paper is organized as follows. Section 2 presents prominent container management platforms along with other related scheduling algorithms. Section 3 discusses the application workload models and system requirements of our orchestration framework. The architecture of the proposed system and its realisation by leveraging Kubernetes are discussed in Section 4, and its design and implementation in Section 5. The proposed scheduling, autoscaling, and rescheduling methods are discussed in Section 6. Section 7 presents the evaluation of our framework and the effectiveness of the proposed algorithms on an Australian national cloud infrastructure. Section 8 identifies and discusses new
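As promised above, the following is a minimal sketch of the scale-out/scale-in decision described in the Key Contribution: scale out when containers are left pending, and scale in a VM whose load is low enough to be consolidated onto the slack of the remaining VMs. The ClusterState view, the one-VM-per-cycle policy, and the 0.25 utilization threshold are illustrative assumptions, not the parameters of the paper's actual algorithms.

```go
package main

import "fmt"

// ClusterState is a simplified view of the cluster the autoscaler acts on.
type ClusterState struct {
	PendingContainers int       // containers that could not be placed
	VMUtilization     []float64 // CPU utilization per worker VM, in [0,1]
}

// ScaleDecision is the action recommended for one autoscaling cycle.
type ScaleDecision struct {
	AddVMs    int   // scale out: provision this many new workers
	RemoveVMs []int // scale in: indices of underutilized workers to drain
}

// decide implements the two rules sketched in the text: scale out whenever
// work is pending, and scale in a VM whose utilization is below a threshold
// and whose load plausibly fits in the slack of the remaining VMs.
func decide(s ClusterState) ScaleDecision {
	d := ScaleDecision{}
	if s.PendingContainers > 0 {
		d.AddVMs = 1 // provision conservatively, one VM per cycle
		return d
	}
	// Total slack across all VMs, used to test whether consolidation can
	// absorb a candidate VM's load before it is shut down.
	slack := 0.0
	for _, u := range s.VMUtilization {
		slack += 1.0 - u
	}
	for i, u := range s.VMUtilization {
		// Remaining slack excludes the candidate's own free capacity
		// and must cover the load being relocated.
		if u < 0.25 && slack-(1.0-u) >= u {
			d.RemoveVMs = append(d.RemoveVMs, i)
			slack -= 1.0 // the removed VM takes its capacity with it
		}
	}
	return d
}

func main() {
	// Pending work forces a scale-out.
	fmt.Printf("%+v\n", decide(ClusterState{PendingContainers: 3}))
	// A nearly idle VM whose load fits elsewhere triggers a scale-in.
	fmt.Printf("%+v\n", decide(ClusterState{
		VMUtilization: []float64{0.9, 0.6, 0.1},
	}))
}
```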