Followup Seminars - Report

Benchmarking the Performance of Recent Serverless Technologies

José Donato∗, Marco Vieira†
Departamento de Engenharia Informática, Universidade de Coimbra
∗[email protected], †[email protected]

Abstract—Web technologies are constantly evolving. Serverless Computing is an ambitious web technology that has emerged in recent years and has been the focus of several studies. It is ambitious because Serverless aims to replace the traditional architecture where a server is always running, waiting for requests. However, its stateless approach and design characteristics raise some concerns that need to be studied, and the primary one is performance. In this paper we study several technologies (both serverless and traditional) normally used to support web applications. We propose a preliminary performance benchmark that allows comparing recent serverless technologies (such as Netlify Functions, Next.js API Routes and Cloudflare Workers) and traditional technologies (such as Express.js). Results show that all technologies tested provide high up-time and that the serverless technologies do not fall behind the technologies based on traditional server architectures.

Index Terms—Serverless Computing, Performance Benchmark

I. INTRODUCTION

Serverless computing (also called Functions as a Service, or FaaS) has been a hot topic for the past few years, as it aims to replace the traditional servers that are the standard and used worldwide. Serverless proposes an alternative approach regarding how applications run in the cloud: instead of constantly running like traditional servers, serverless applications consist of stateless functions that run on containers in the cloud and only wake up on demand, i.e., these containers only turn on when they are called. Since the containers are sleeping by default (i.e., not running), serverless functions can be an efficient alternative in terms of energy consumption (and, consequently, cost). However, if they are asleep by default, before a request is processed we need to wait until the containers turn on. This process is called cold boot and it is the primary reason for serverless performance limitations.

This study focuses on assessing the performance of serverless and non-serverless technologies. Therefore, we propose a simple performance benchmark that allows comparing alternative technologies that can be used to develop either serverless or traditional applications.

We can define a benchmark as “standard tools that allow evaluating and comparing different systems or components according to specific characteristics (performance, dependability, security, etc)” [1]. For our benchmark we focus on performance and up-time. Therefore, we chose to perform several throughput tests, i.e., tests that measure the number of items processed per unit time [2]. We are also interested in up-time, i.e., the percentage of time the system is up [2]. (A minimal sketch of how both metrics can be measured is given at the end of this section.)

There are several studies that compare different serverless technologies [3], [4], [5] or the same serverless technology deployed on different serverless providers [6]. We focus on recent technologies that have not been considered yet but are already gaining popularity. Our benchmark currently supports three different technologies but can easily be extended to support more (as we will see in Section II). The currently supported technologies are Next.js API Routes [7], Netlify Functions [8] and Express.js [9] (we dive deeper into these technologies in Section II).

We developed the same application using these three different technologies, together with several scripts to assess them (more detail in Section II). All the source code is available on GitHub [10] and the application with the results is live at benchmarking.vercel.app/sa [11].

As for the results, in the tests performed we observed that the serverless technologies behave similarly to applications that follow the traditional architecture. All technologies tested (both serverless and non-serverless) should be viable solutions when the goal is to implement simple web Application Programming Interfaces (APIs) with high up-time that may involve expensive computations (e.g., image processing).

The remainder of this document is structured as follows. Section II describes the performance benchmark and explains all its phases. Section III presents the experiments performed and their results. In Section IV we discuss the results and compare the different technologies under study. Section V enumerates the limitations of this work and why they exist. Finally, Section VI concludes the work and considers what could be added to this study in the future.
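To make the two metrics concrete, the following minimal sketch (illustrative only, not the benchmark's actual tooling) estimates throughput as requests completed per second and approximates up-time by the fraction of successful responses. The target URL is hypothetical, and the sketch assumes Node.js 18+ so that the global fetch is available.

// Illustrative sketch only: estimates throughput (requests per unit time)
// and a success-rate proxy for up-time against a hypothetical endpoint.
const TARGET = 'http://localhost:3000/echo' // hypothetical URL

async function measure(totalRequests) {
  const start = Date.now()
  let succeeded = 0
  for (let i = 0; i < totalRequests; i++) {
    try {
      const res = await fetch(TARGET) // global fetch (Node.js 18+)
      if (res.ok) succeeded++
    } catch {
      // a failed request counts against the up-time proxy
    }
  }
  const seconds = (Date.now() - start) / 1000
  return {
    throughputPerSecond: totalRequests / seconds,
    upTimePercent: (succeeded / totalRequests) * 100,
  }
}

measure(100).then(console.log)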
II. BENCHMARK DESCRIPTION

Our approach is described in Figure 1. Our benchmark is divided into three different processes; accordingly, the remainder of this section is divided into three subsections, in each of which we go through one of the processes and explain the tools necessary to implement it.

[Fig. 1: Benchmark Architecture — Application Development produces applications 1 to n; a CLI tool receives configurations, tests each application, and outputs results in CSV; an analysis tool converts the CSV results to JSON; a web app visualizes the results.]

The following list gives a brief description of each benchmark process:

• Application Development (represented in yellow in the diagram): The first process of the benchmark. This phase consists of the development of the several applications that will be tested in the benchmark execution phase. In its subsection we explain what the application features are, why we chose them, and which technologies we selected.

• Benchmark Campaign (represented in orange in the diagram): Receives the configurations and is responsible for testing each application while collecting the metrics. In its subsection we enumerate the configurations, the metrics we collect, and how the command-line interface (CLI) we developed works. The campaign outputs the results in comma-separated values (CSV).

• Result Analysis and Presentation (represented in green in the diagram): This is the final process of our assessment. To sort the results from the Benchmark Campaign we developed a simple analysis tool that receives the results in CSV format, performs some analysis and converts them into JavaScript Object Notation (JSON); a minimal sketch of this conversion step is shown right after this list. Finally, we can provide the JSON files to the web application we also implemented in order to display the results.
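As a rough illustration of the conversion step performed by the analysis tool, the sketch below reads a CSV file and writes the equivalent JSON. It is a sketch under assumptions: the actual tool's column names, file names, and the analysis it performs are not specified in this section, so the names used here are hypothetical.

// Hypothetical sketch of the CSV-to-JSON step; the real analysis tool
// also performs analysis that is not shown here.
const fs = require('fs')

function csvToJson(csvPath, jsonPath) {
  // The first line is the header; the remaining lines are data rows.
  const [header, ...rows] = fs.readFileSync(csvPath, 'utf8').trim().split('\n')
  const keys = header.split(',')
  const records = rows.map((row) => {
    const values = row.split(',')
    return Object.fromEntries(keys.map((key, i) => [key, values[i]]))
  })
  fs.writeFileSync(jsonPath, JSON.stringify(records, null, 2))
}

csvToJson('results.csv', 'results.json') // hypothetical file names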
A. Application Development

The focus of our assessment is performance. Therefore, we tried to select a set of features that, despite being simple, allow evaluating the performance (primarily the throughput) of serverless and non-serverless functions. This resulted in two features:

• Dominant Color Endpoint: An endpoint that receives a JPEG or PNG image and retrieves its dominant color, i.e., the color that appears the most times in the image. We think this use case is representative because it involves expensive computations: the bigger the image, the more computations are needed. Listing 1 presents the code we used to calculate the dominant color. The snippet navigates through all the image's pixels (represented by matrix) and computes the resulting color by accumulating and averaging the RGB channels. As we will see in the Benchmark Campaign (Subsection II-B), for more expensive workloads we submit bigger images; for the opposite effect, we use smaller images. This endpoint expects POST requests with the image in the body and is normally located at /dominantColor. Alongside the dominant color, the endpoint returns the duration it took to compute it (which will be useful in Subsections II-B and II-C for collecting the metrics). For our tests, we consider three different sizes of the same JPEG image:
– One small image: 640x959 pixels.
– One medium image: 1920x2878 pixels.
– One large image: 2400x3598 pixels.

• Echo Endpoint: A simple endpoint that does not receive anything and just returns a “Hello World” string. It expects GET requests, is normally located at /echo, and its purpose is to assess the throughput of each application. (A combined sketch of both endpoints is given at the end of this subsection.)

Listing 1: Compute image dominant color. Adapted from this source code.

function dominantColor(w, h, matrix) {
  const size = w * h     // total number of pixels
  const rgb = [0, 0, 0]  // accumulated channel sums
  let idx
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      idx = (w * y + x) << 2     // 4 bytes per pixel (RGBA)
      rgb[0] += matrix[idx]      // red
      rgb[1] += matrix[idx + 1]  // green
      rgb[2] += matrix[idx + 2]  // blue
    }
  }
  // Average each channel over all pixels.
  return [Math.floor(rgb[0] / size), Math.floor(rgb[1] / size),
          Math.floor(rgb[2] / size)]
}

As said in the introduction, we developed several applications, and they are all open source on GitHub [10]. Each application has its own folder and an associated Dockerfile (i.e., a file that contains the information needed to build a Docker image [12]) to ease the process of running each application. We considered the following technologies to develop the applications under test:

• Serverless - recent technologies used to support web applications, such as:
– Next.js API Routes: “API routes provide a straightforward solution to build your API with Next.js” [7].
– Netlify Functions: “Serverless functions built into every Netlify account. No setup, servers, or ops required” [8].
– Cloudflare Workers: “Simple FaaS on the edge of Cloudflare CDN” [13]. Cloudflare Workers was not included in the tests because this technology is not compatible with Docker and, when running locally, it crashed as soon as the tests started.
• Traditional - also recent technologies used to support web applications:
– Express.js: “Node.js web application framework” [9].

As we will discuss in Section V, deploying such technologies is hard without paying some money. Therefore, we run all the applications locally, using the Dockerfiles mentioned above.
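To summarize the two endpoints described in this subsection, the sketch below shows how they might look in Express.js. It is a hypothetical illustration, not the paper's implementation (which is on GitHub [10]): the image decoder is stubbed because the decoding library is not named here, and the dominantColor function from Listing 1 is assumed to be in scope.

// Hypothetical Express.js sketch of the /echo and /dominantColor endpoints.
const express = require('express')
const app = express()

// Accept raw JPEG/PNG bytes in the request body as a Buffer.
app.use(express.raw({ type: ['image/jpeg', 'image/png'], limit: '20mb' }))

// Stub decoder for illustration only; a real application would use an
// image library to obtain the width, height and RGBA pixel data.
function decodeImage(_buffer) {
  return { w: 2, h: 2, matrix: Buffer.alloc(16, 128) } // 2x2 RGBA image
}

// Echo endpoint: GET request, returns "Hello World" (throughput test).
app.get('/echo', (_req, res) => res.send('Hello World'))

// Dominant color endpoint: POST request with the image in the body;
// returns the computed color plus the time the computation took.
app.post('/dominantColor', (req, res) => {
  const start = Date.now()
  const { w, h, matrix } = decodeImage(req.body)
  const color = dominantColor(w, h, matrix) // from Listing 1, assumed in scope
  res.json({ color, duration: Date.now() - start })
})

app.listen(3000)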
