
Serverless Computing: An Investigation of Factors Influencing Microservice Performance

Wes Lloyd 1, Shruti Ramesh 4, Swetha Chinthalapati 2, Lan Ly 3, Shrideep Pallickara 5
1, 2, 3 Institute of Technology, University of Washington Tacoma, Tacoma, Washington, USA
4 Microsoft, Redmond, Washington, USA
5 Department of Computer Science, Colorado State University, Fort Collins, Colorado, USA
1, 2, 3 wlloyd, swethach, [email protected]; 4 [email protected]; 5 [email protected]

Abstract— Serverless computing platforms provide function(s)-as-a-Service (FaaS) to end users while promising reduced hosting costs, high availability, fault tolerance, and dynamic elasticity for hosting individual functions known as microservices. Serverless computing environments, unlike Infrastructure-as-a-Service (IaaS) cloud platforms, abstract infrastructure management from users, including the creation of virtual machines (VMs), operating system containers, and request load balancing. To conserve cloud server capacity and energy, cloud providers allow hosting infrastructure to go COLD, deprovisioning containers when service demand is low and freeing infrastructure to be harnessed by others. In this paper, we present results from our comprehensive investigation into the factors which influence microservice performance afforded by serverless computing. We examine hosting implications related to infrastructure elasticity, load balancing, provisioning variation, infrastructure retention, and memory reservation size. We identify four states of serverless infrastructure: provider cold, VM cold, container cold, and warm, and demonstrate how microservice performance varies by up to 15x based on these states.

Keywords— Resource Management and Performance; Serverless Computing; Function-as-a-Service; Provisioning Variation

I. INTRODUCTION

Serverless computing has recently emerged as a compelling approach for hosting applications in the cloud [1][2][3]. While Infrastructure-as-a-Service (IaaS) clouds provide users with access to voluminous cloud resources, resource elasticity is managed at the virtual machine level, often resulting in over-provisioning of resources leading to increased hosting costs, or under-provisioning leading to poor application performance. Serverless computing platforms provide Function(s)-as-a-Service (FaaS) by hosting individual callable functions. These platforms promise reduced hosting costs, high availability, fault tolerance, and dynamic elasticity through automatic provisioning and management of compute infrastructure [4]. Serverless computing platforms integrate support for scalability, availability, and fault tolerance directly as features of the framework. Early adoption of serverless computing has focused on the deployment of lightweight stateless services for image processing, static processing routines, speech processing, and event handlers for Internet-of-Things devices [5]. The promised benefits, however, make the platform very compelling for hosting any application. If serverless computing delivers on its promises, it has the potential to fundamentally transform how we build and deploy software on the cloud, driving a paradigm shift on a scale not seen since the advent of cloud computing itself!

Application hosting with serverless computing is fundamentally different from hosting with IaaS or Platform-as-a-Service (PaaS) clouds: applications are decomposed and deployed as individual code modules. Each cloud provider restricts the maximum code size (e.g. 64 to 256 MB) and runtime (e.g. 5 minutes) of functions. Serverless platform functions are often used to host RESTful web services. When RESTful web services have a small code size, they can be referred to as microservices. Throughout this paper we refer to our code deployments as microservices because they are small RESTful web services, but the results of our work are applicable to any code asset deployed to a serverless computing platform. We do not focus on defining the difference between a RESTful web service and a microservice, and leave this debate open.

Serverless environments leverage operating system containers such as Docker to deploy and scale microservices [6]. Granular code deployment harnessing containers enables incremental, rapid scaling of server infrastructure, surpassing the elasticity afforded by dynamically scaling virtual machines (VMs). Cloud providers can load balance many small container placements across servers, helping to minimize idle server capacity better than with VM placements [7]. Cloud providers are responsible for creating, destroying, and load balancing requests across container pools. Given their small size and footprint, containers can be aggregated and reprovisioned more rapidly than bulky VMs. To conserve server real estate and energy, cloud providers allow infrastructure to go COLD, deprovisioning containers when service demand is low and freeing infrastructure to be harnessed by others. These efficiencies hold promise for better server utilization, leading to workload consolidation and energy savings.

In this paper, we present the results of our investigation focused on identifying factors that influence the performance of microservices deployed to serverless computing platforms. Our primary goal for this study has been to identify factors influencing microservice performance to inform practitioners regarding the nuances of serverless computing infrastructure and to enable better application deployments. We investigate microservice performance implications related to: infrastructure elasticity, load balancing, provisioning variation, infrastructure retention, and memory reservation size.

A. Research Questions

To support our investigation of factors influencing microservice performance for serverless computing platforms, we investigate the following research questions:

RQ-1: (Elasticity) What are the performance implications of leveraging elastic serverless computing infrastructure for microservice hosting? How is response time impacted for COLD vs. WARM service requests?

COLD service requests are sent by clients to microservice hosting platforms where the service hosting infrastructure must be provisioned to respond to these requests. Four types of function invocations exist relating to infrastructure warm-up for serverless computing infrastructure. These include: (1-provider cold) the very first service invocation for a given microservice code release made to the cloud provider, (2-VM cold) the very first service invocation made to a virtual machine (VM) hosting one or more containers hosting microservice code, (3-container cold) the very first service invocation made to an operating system container hosting microservice code, and (4-warm) a repeated invocation to a preexisting container hosting microservice code.
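For illustration, cold-versus-warm latency can be observed from the client side. The following minimal sketch assumes the AWS SDK for Python (boto3) and a hypothetical Lambda function named hello-microservice; the first invocation after a fresh code deployment exercises cold infrastructure, while repeated invocations are typically served by a preexisting warm container.

```python
# Minimal sketch: time a cold invocation vs. subsequent warm invocations
# of an AWS Lambda function. The function name "hello-microservice" is
# hypothetical; any deployed function that returns a JSON response will do.
import json
import time

import boto3

client = boto3.client("lambda", region_name="us-east-1")

def timed_invoke(function_name):
    """Synchronously invoke the function; return round-trip latency in ms."""
    start = time.time()
    response = client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",
        Payload=json.dumps({"ping": True}),
    )
    elapsed_ms = (time.time() - start) * 1000
    response["Payload"].read()  # drain the response stream
    return elapsed_ms

# Redeploying code (or updating configuration) forces fresh containers, so
# the first call after deployment observes a COLD invocation; repeated
# calls are typically served WARM by a preexisting container.
print("cold: %.0f ms" % timed_invoke("hello-microservice"))
for _ in range(3):
    print("warm: %.0f ms" % timed_invoke("hello-microservice"))
```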
RQ-2: (Load Balancing) How does load balancing vary for hosting microservices in serverless computing? How do the computational requirements of service requests impact load balancing, and ultimately microservice performance?

Serverless computing platforms automatically load balance service requests across hosting infrastructure. Cloud providers typically leverage round robin load balancing, or load balancing based on CPU load, to distribute incoming resource requests [8]. For serverless computing, we are interested in understanding how the computational requirements of individual microservice requests impact load balancing, and ultimately performance.

RQ-3: (Provisioning Variation) What microservice performance implications result from provisioning variation of container infrastructure?

RQ-4: (Infrastructure Retention) How long is serverless infrastructure retained in a warm state, and what are the performance implications of infrastructure deprecation?

Once provisioned, a container image can be cached, enabling additional container instances to be created more rapidly. Containers preserved in a warm state can rapidly service incoming requests, but retaining infrastructure indefinitely is not feasible, as cloud providers must share server infrastructure amongst all cloud users. We are interested in quantifying how infrastructure is deprecated, both to understand the implications for performance and to derive keep-alive workloads that prevent microservices with strict SLAs from experiencing longer latencies.
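A keep-alive workload of the kind motivated above can be sketched as a lightweight client that periodically invokes a microservice so that its hosting container is not deprovisioned. The endpoint URL and ping interval below are hypothetical placeholders; in practice the interval would be tuned against the provider's observed retention period, and a scheduler (e.g. cron or a cloud-provided timer) would replace the loop.

```python
# Minimal keep-alive sketch: ping a microservice endpoint at a fixed
# interval so the hosting container stays warm. The endpoint URL and
# interval are hypothetical; the interval should be shorter than the
# provider's (typically undocumented) container retention period.
import time
import urllib.request

ENDPOINT = "https://example.com/my-microservice"  # hypothetical endpoint
INTERVAL_SECONDS = 5 * 60  # ping every 5 minutes

while True:
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=30) as response:
            print("keep-alive status:", response.status)
    except OSError as err:  # URLError and socket errors subclass OSError
        print("keep-alive failed:", err)
    time.sleep(INTERVAL_SECONDS)
```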
RQ-5: (Memory Reservations) What performance implications result from microservice memory reservation size? How do memory reservations impact container placement?

Serverless computing platforms abstract most infrastructure management configuration from end users. Platforms such as AWS Lambda and Google Cloud Functions allow users to specify a memory reservation size; users are then billed for each function invocation based on memory utilization, to the nearest tenth of a second. For example, Lambda functions can reserve from 128 MB to 3008 MB, while Google Cloud Functions can reserve from 128 MB to 2048 MB. Azure Functions instead allows users to create function apps. Function apps share hosting infrastructure and memory across one or more user functions, and function app hosts are limited to a maximum of 1536 MB of memory. Users do not reserve memory for individual functions and are billed only for memory used, in 128 MB increments. One advantage of Azure's model is that users do not have to understand the memory requirements of their functions: they simply deploy their code, and infrastructure is automatically provisioned for functions up to the 1536 MB limit. In contrast, users deploying microservices to Lambda or Google Cloud Functions must specify a memory reservation size for function deployment. These reservations are applied to the Docker containers created to host user functions. Containers are created to host individual function deployments, and user functions may or may not share the resources of underlying VMs.
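As a worked example of the reservation-based billing model described above, the sketch below estimates the compute charge for a single invocation, assuming the billed duration rounds up to the nearest tenth of a second. The per-GB-second rate used here is illustrative only, not a quoted provider price.

```python
# Minimal sketch: estimate the compute charge for one function invocation
# under a reserve-memory billing model (e.g. AWS Lambda, Google Cloud
# Functions). Duration is assumed to round up to the nearest tenth of a
# second; the rate below is illustrative, not a current provider price.
import math

PRICE_PER_GB_SECOND = 0.00001667  # illustrative rate (USD)

def invocation_cost(memory_mb, duration_ms):
    """Estimate the cost of one invocation for a given reservation and runtime."""
    billed_seconds = math.ceil(duration_ms / 100) * 0.1  # round up to 0.1 s
    gb_seconds = (memory_mb / 1024) * billed_seconds
    return gb_seconds * PRICE_PER_GB_SECOND

# A 1024 MB reservation running for 230 ms bills as 0.3 s = 0.3 GB-seconds.
print("%.10f USD" % invocation_cost(1024, 230))
```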