The Time for Serverless Is Now!


Serverless Architecture Whitepaper
Up in the Cloud: Step by step towards serverless applications, platforms and a cloud-native ecosystem
@ServerlessCon  #SLA_con  www.serverless-architecture.io

Content

Serverless Development
  First things first – Your first step towards serverless application development
  Quarkus: Modernizing Java to keep pace in a cloud-native world – Scaling the modern app world

Serverless Architecture & Design
  Why platform as a service is such a great model – Looking into the future of PaaS
  The time for serverless is now – tips for getting started – If not now, when?
  Building a data platform on Google Cloud Platform – Laying the groundwork for big data
  Migrating big data workloads to Azure HDInsight – Smoothing the path to the cloud with a plan – Strategies for big data migration

Serverless Engineering & Operations
  Cloud-Native DevOps – The driving force behind the digital transformation of modern enterprises
  Serverless Security – Basic considerations on the subject of serverless architecture security


WHITEPAPER: Serverless Development

First things first
Your first step towards serverless application development

In this article, Kamesh Sampath shows us how to master the first steps on the journey towards a serverless application. He shows how to set up the right environment and takes us through its deployment.

by Kamesh Sampath

In the first part of this article, we will set up a development environment suitable for Knative version 0.6.0. The second part deals with the deployment of your first serverless microservice. The basic requirement for using Knative to create serverless applications is a solid knowledge of Kubernetes. If you are still inexperienced, you should complete the official basic Kubernetes tutorial [1].

Before we get down to the proverbial "can do", a few tools and utilities have to be installed:

  • Minikube [2]
  • kubectl [3]
  • kubens [4]

For Windows users, WSL [5] has proven to be quite useful, so I recommend installing it as well.

Setting up Minikube

Minikube is a single-node Kubernetes cluster that is ideal for everyday development with Kubernetes. After the setup, the following steps must be performed to make Minikube ready for deployment with Knative Serving; Listing 1 shows what this looks like. First, a Minikube profile must be created, which is what the first line achieves. The second command then sets up a Minikube instance with 8 GB RAM, 6 CPUs and 50 GB of disk space. The start command also contains a few additional configurations for the Kubernetes cluster that are necessary to get Knative up and running. It is also important that the Kubernetes version used is not older than 1.12.0, otherwise Knative will not work. If Minikube does not seem to start immediately, that is completely normal; it can take a few minutes until the initial startup is complete, so be a little patient when setting it up.

Listing 1

  minikube profile knative
  minikube start -p knative --memory=8192 --cpus=6 \
    --kubernetes-version=v1.12.0 \
    --disk-size=50g \
    --extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"

Setting up an Istio Ingress Gateway

Knative requires an Ingress Gateway to route requests to Knative Services. In addition to Istio [6], Gloo [7] is also supported as an Ingress Gateway. For our example, we will use Istio. The following steps show how to perform a lightweight installation of Istio that contains only the Ingress Gateway:

  curl -L https://raw.githubusercontent.com/knative/serving/release-0.6/third_party/istio-1.1.3/istio-lean.yaml \
    | sed 's/LoadBalancer/NodePort/' \
    | kubectl apply --filename -

Like the setup of Minikube, the deployment of the Istio pods takes a few minutes. With the command kubectl --namespace istio-system get pods --watch you can follow the status; the watch is terminated with Ctrl + C. Whether the deployment was successful can easily be determined with the command kubectl --namespace istio-system get pods. If everything went well, the output should look like Listing 2.

Listing 2

  NAME                                     READY   STATUS    RESTARTS   AGE
  cluster-local-gateway-7989595989-9ng8l   1/1     Running   0          2m14s
  istio-ingressgateway-6877d77579-fw97q    2/2     Running   0          2m14s
  istio-pilot-5499866859-vtkb8             1/1     Running   0          2m14s

Installing Knative Serving

The installation of Knative Serving [8] allows us to run serverless workloads on Kubernetes. It also provides automatic scaling and tracking of revisions. You can install Knative Serving with the following commands:

  kubectl apply --selector knative.dev/crd-install=true \
    --filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml

  kubectl apply \
    --filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml \
    --selector networking.knative.dev/certificate-provider!=cert-manager

Again, it will probably take a few minutes until the Knative pods are deployed; with the command kubectl --namespace knative-serving get pods --watch you can check the status. As before, the watch can be aborted with Ctrl + C. With the command kubectl --namespace knative-serving get pods you can check whether everything is running. If this is the case, an output like in Listing 3 should be displayed.

Listing 3

  NAME                               READY   STATUS    RESTARTS   AGE
  activator-54f7c49d5f-trr82         1/1     Running   0          27m
  autoscaler-5bcd65c848-2cpv8        1/1     Running   0          27m
  controller-c795f6fb-r7bmz          1/1     Running   0          27m
  networking-istio-888848b88-bkxqr   1/1     Running   0          27m
  webhook-796c5dd94f-phkxw           1/1     Running   0          27m

Session: From Monolith to Serverless: Rethinking your Architecture
Michael Dowden

It's easy to understand the benefits of serverless, but it's not always easy to understand how this will impact our software architecture. In this talk we will deconstruct a set of requirements and walk through the architecture of both a traditional service-oriented architecture and a modern serverless architecture. You'll leave with a better understanding of how to design event-driven systems and serverless APIs, along with some alternatives to the traditional RESTful API layer.

Deploy the demo application

The application we want to create for demonstration is a simple greeting machine that outputs "Hi". For this we use an existing Linux container image, which can be found on the Quay website [9]. The first step is to create a traditional Kubernetes deployment, which is then modified to use serverless functionality. This will make clear where the actual differences lie and how to make existing deployments serverless with Knative.

Create a Kubernetes resource file

First create a new file called app.yaml and copy the code in Listing 4 into it.

Listing 4

  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: greeter
  spec:
    selector:
      matchLabels:
        app: greeter
    template:
      metadata:
        labels:
          app: greeter
      spec:
        containers:
        - name: greeter
          image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus
          resources:
            limits:
              memory: "32Mi"
              cpu: "100m"
          ports:
          - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: greeter-svc
  spec:
    selector:
      app: greeter
    type: NodePort
    ports:
    - port: 8080
      targetPort: 8080

Create the deployment and service

By applying the previously created YAML file, we can create the deployment and service. This is done with the command kubectl apply --filename app.yaml. Here, too, the command kubectl get pods --watch can be used to follow the status of the application, while Ctrl + C terminates the watch. If all went well, we should now have a deployment called greeter and a service called greeter-svc (Listing 5).

Listing 5

  $ kubectl get deployments
  NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  greeter   1         1         1            1           16s

  $ kubectl get svc
  NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
  greeter-svc   NodePort   10.110.164.179                 8080:31633/TCP   50s

To activate a service, you can also use a Minikube shortcut like minikube service greeter-svc, which opens the service URL in your browser. If you prefer to use curl to open the same URL, use the command curl $(minikube service greeter-svc --url). Now you should see a text that looks something like this:

  Hi greeter => '9861675f8845' : 1

Migrating the traditional Kubernetes deployment to serverless with Knative

The migration starts by simply copying the app.yaml file, naming it serverless-app.yaml and updating it to the lines shown in Listing 6. If we compare the traditional Kubernetes application (app.yaml) with the serverless application (serverless-app.yaml), the differences become apparent.

Listing 6

  ---
  apiVersion: serving.knative.dev/v1alpha1
  kind: Service
  metadata:
    name: greeter
  spec:
    template:
      metadata:
        labels:
          app: greeter
      spec:
        containers:
        - image: quay.io/rhdevelopers/knative-tutorial-greeter:quarkus
          resources:
            limits:
              memory: "32Mi"
              cpu: "100m"
          ports:
          - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz

Listing 7

  $ kubectl get deployments
  NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  greeter                    1         1         1            1           30m
  greeter-bn8cm-deployment   1         1         1            1           59s

Listing 8

  $ kubectl get services
  NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                           PORT(S)    AGE
  greeter                 ExternalName                    istio-ingressgateway.istio-system.svc.cluster.local              114s
  greeter-bn8cm           ClusterIP      10.110.208.72                                                          80/TCP     2m21s
  greeter-bn8cm-metrics   ClusterIP      10.100.237.125                                                         9090/TCP   2m21s
  greeter-bn8cm-priv      ClusterIP      10.107.104.53                                                          80/TCP     2m21s

Listing 9

  $ kubectl get services.serving.knative.dev
  NAME      URL                                  LATESTCREATED   LATESTREADY     READY   REASON
  greeter   http://greeter.default.example.com   greeter-bn8cm   greeter-bn8cm   True

Attention
In a Minikube deployment we will have neither LoadBalancer nor DNS to resolve anything to *.example.com or a service URL like http://greeter.
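Because of that limitation, a Minikube request typically has to be sent to the Istio ingress gateway's NodePort directly, with the Knative service's host name supplied in the Host header. The sketch below illustrates the idea; the ksvc_host helper and the variable names are our own invention, the default domain example.com is taken from Listing 9, and the jsonpath query assumes the istio-ingressgateway service created by the istio-lean.yaml installation above.

```shell
# Hypothetical helper: build the Host header Knative expects for a Service,
# using the default domain (example.com), as seen in Listing 9.
ksvc_host() {
  svc="$1"; ns="$2"
  printf '%s.%s.example.com' "$svc" "$ns"
}

echo "$(ksvc_host greeter default)"   # greeter.default.example.com

# Against a running cluster (assumption: the istio-lean install from above,
# with its gateway exposed as a NodePort), the call could then look like:
#   IP=$(minikube ip -p knative)
#   PORT=$(kubectl --namespace istio-system get svc istio-ingressgateway \
#     -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
#   curl -H "Host: $(ksvc_host greeter default)" "http://$IP:$PORT"
```

This is the moral equivalent of the curl $(minikube service greeter-svc --url) call used for the plain Kubernetes service earlier; the Host header is what lets the shared Istio gateway route the request to the Knative-managed greeter service.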