Drowsy-DC: Data Center Power Management System
Mathieu Bacou, Gregoire Todeschi, Alain Tchana, Daniel Hagimont, Baptiste Lepers, Willy Zwaenepoel


To cite this version: Mathieu Bacou, Grégoire Todeschi, Alain Tchana, Daniel Hagimont, Baptiste Lepers, Willy Zwaenepoel. Drowsy-DC: Data center power management system. 33rd IEEE International Parallel & Distributed Processing Symposium (IPDPS), May 2019, Rio de Janeiro, Brazil. pp. 825-834. DOI: 10.1109/IPDPS.2019.00091. HAL Id: hal-02891652 (https://hal.archives-ouvertes.fr/hal-02891652), submitted on 7 Jul 2020. Publisher's version: http://oatao.univ-toulouse.fr/26182
2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS)

Drowsy-DC: Data center power management system

Mathieu Bacou†∗, Grégoire Todeschi†, Alain Tchana†, Daniel Hagimont†, Baptiste Lepers‡, Willy Zwaenepoel‡
†IRIT, Université de Toulouse, CNRS, Toulouse, France — {first.last}@enseeiht.fr
∗Atos Integration, Toulouse, France
‡Operating Systems Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Vaud, Switzerland — {first.last}@epfl.ch

Abstract—In a modern data center (DC), a large majority of costs arise from energy consumption. The most popular technique used to mitigate this issue is virtualization, and more precisely virtual machine (VM) consolidation. Although consolidation may increase server usage by about 5–10%, it is difficult to actually witness server loads greater than 50%. By analyzing the traces from our cloud provider partner, confirmed by previous research work, we have identified that some VMs have sporadic moments of data computation followed by large periods of idleness. These VMs often hinder the consolidation system, which cannot further increase the energy efficiency of the DC. In this paper we propose a novel DC power management system called Drowsy-DC, which is able to identify the aforementioned VMs which have matching patterns of idleness. These VMs can thus be colocated on the same server so that their idle periods are exploited to put the server into a low power mode (suspend to RAM) until some data computation is required. While introducing a negligible overhead, our system is able to significantly improve any VM consolidation system; evaluations showed improvements of up to 81% and more when compared to OpenStack Neat.

Index Terms—virtualization, consolidation, long-lived mostly-idle, low-power state, power consumption

I. INTRODUCTION

Cloud platforms have proven able to provide very attractive prices. The enabling element of cloud computing is virtualization technology. It allows a server to execute, simultaneously and in an isolated manner, multiple operating systems (OSs) called virtual machines (VMs). Reducing datacenter (DC) energy consumption is a major concern for cloud providers and an important center of interest for the research community. According to the scale at which the issue is addressed, previous works can be organized into two categories: (1) those interested in minimizing the power consumption of a single server; and (2) those which act at the DC scale, leveraging the virtualized nature of the workload.

In the first category, the holy grail is to achieve energy proportional servers [1]. Such a server consumes energy in exact proportion to its utilization; thereby, a server with no load would theoretically consume no energy. In this respect, an essential goal is to independently control the power state of each hardware component (e.g. by leveraging the ACPI¹). Subsequently, this modular system is able to adapt the energy consumed by each component according to its current solicitation. Although these solutions improve the energy efficiency of CPUs, HDDs and NICs, state-of-the-art proposals are still a long way from the ideal energy-proportional machine. Therefore, some DC-level power management techniques such as VM consolidation have been proposed.

VM consolidation seeks to dynamically assign VMs to a minimum number of servers, so that empty servers — which are consequently inactive — can be transitioned to low power states (suspend to RAM or suspend to disk, respectively ACPI S3 and S4). By increasing the load of active servers, this technique also improves the energy efficiency of the DC. However, the general utilization boost is around 5–10%, which makes it difficult to actually witness server loads greater than 50% even for the best adapted workloads [1, 2, 3, 4]. This is explained by several reasons. First, VM consolidation is a bin-packing problem, which is NP-complete. Second, resource partitioning depends on the resource type. For instance, unlike CPU, which is time-shared, memory is space-shared between the VMs and is not preempted. Consequently, memory is often the limiting resource in the consolidation process [5]. Several solutions have tried to address this problem by minimizing VMs' memory footprints (e.g. page sharing [6, 7, 8], page compression [9, 10]) and by allowing memory overcommitment (ballooning [11, 12, 7]). However, these techniques are generally not employed in mainstream cloud DCs for several reasons. First, despite the achieved memory gains, they all have a detrimental impact on the performance of user applications. Second, and specifically for ballooning, these solutions depend on efficiently estimating the memory footprint, which is a fairly difficult task considering the variance of the working set over time [13, 14].

In this paper, we propose a new DC power management system called Drowsy-DC, based on an innovative idea which combines concepts from the two previous categories: we state that using consolidation, we can create situations where a DC server may be suspended despite not being empty (i.e. it is hosting VMs), thus greatly improving its power consumption. We observed patterns in VM activity traces. From the point of view of their activity patterns, VMs may be classified into three categories: short-lived mostly-used VMs (noted SLMU, e.g. MapReduce tasks), long-lived mostly-used VMs (noted LLMU, e.g. popular Web services), and long-lived mostly-idle VMs (noted LLMI, e.g. seasonal Web services). In this paper we focus on LLMI VMs, as defined by Zhang et al. [15]. Their activity pattern is composed of isolated operations followed by long periods of idleness. Notice that LLMI VMs exist in a proportion significant enough in public clouds that Amazon and Microsoft recently provided special pricing for such cases, despite the implementation of Function-as-a-Service [16]. As for private clouds, they host a majority of LLMI VMs: for instance, at least half of Nutanix's workload is made of enterprise applications, which are typically LLMI [17]. Based on this observation, the basic idea of Drowsy-DC is to enforce as far as possible² the colocation of VMs exhibiting similar idleness patterns. In this way, the hosting servers can be suspended during idle periods. Such suspended servers are called drowsy servers in the paper. To the best of our knowledge, we are the first to investigate this approach in the context of cloud DCs. To implement it, Drowsy-DC is composed of three main components: a consolidation support module, a suspension subsystem and a resuming subsystem. In summary, we make the following contributions:
• A consolidation approach based on modeling VM idleness behavior, which enables VM colocation based on matching idleness periods.
• An algorithm to efficiently detect the right time for suspending a server.

¹The Advanced Configuration and Power Interface (ACPI) is a standard used by OSs to manage the power of hardware components.

With these two modules in place, the existing dynamic consolidation solution can be augmented with Drowsy-DC's idleness-based consolidation algorithm. The resulting system can make VM placement decisions based on classic criteria, such as resource requirements, as well as on the new criterion of VMs' idleness patterns. Drowsy-DC's idleness-based consolidation algorithm is described in the next section.

III. IDLENESS AWARE VM PLACEMENT

The central concept that rules placement decisions in Drowsy-DC is idle periods. We want to colocate VMs which are likely to be idle during the next time interval (e.g. the next hour) so that their host can remain suspended during that interval. To this end, Drowsy-DC continually builds each VM's idleness model (noted IM), which summarizes its past idleness. By so doing, each time a VM is a candidate for placement, Drowsy-DC derives from its IM an idleness probability (noted IP), which quantifies the likelihood of this VM being idle in the next time interval. We also define a server's IP as the average of its VMs' IPs: as we add a special consolidation step to keep the