
Living in a post-container world – Serverless Architecture Magazine 2019

All you need to know about serverless architecture, technology and implementation in 8 articles written by top-notch experts

April 8 – 10, 2019, The Hague, Netherlands

@ServerlessCon #ServerlessCon www.serverless-architecture.io

Contents

Serverless Platforms & Technology
• Serverless computing – What is the future for Dev and Ops teams? (Don't make yourself irrelevant by ignoring serverless) 3
• Living in a post-container world (Serverless vs. containers) 6
• Serverless adoption (Reap the benefits) 8

-native Architecture
• Monitoring serverless computing (We need to think differently about apps, containers, and infrastructure) 11
• Containers or serverless: Which is better? (A head to head battle) 13

Docker, & Co
• Companies should run serverless on Kubernetes (An unspoken truth of serverless) 16
• What exactly is Knative? (An introduction with Evan Anderson) 20

Cloud Services & Backend
• Highly optimized APIs (Integrating GraphQL in serverless architecture) 23

www.serverless-architecture.io @ServerlessCon # ServerlessCon 2 WHITEPAPER Serverless Platforms & Technology

Don't make yourself irrelevant by ignoring serverless
Serverless computing – What is the future for Dev and Ops teams?

Assuming you've been paying attention for the last 15 years or so, serverless is just the latest movement in the ongoing Ops switch from tactics to strategy. In this article, Dominic Wellington talks about the real danger to Ops from serverless, the potential downsides of serverless computing, and more.

by Dominic Wellington

Serverless computing, much like the other variations on the theme of cloud computing which it inherits from, boils down to "it's someone else's computer." In the case of serverless, it's a little bit more complicated than that. This is where the last vestiges of superficial familiarity with traditional models of IT finally fall away, forcing anyone who is still treating cloud computing as just more of the same old thing to confront the truth.

People can and indeed still do treat an AMI much like a VM, which in turn they managed in more or less the same way they had always managed their physical compute infrastructure. Of course, this was to miss the point of both virtualisation and cloud computing almost completely, but it did not fail immediately. The problems only became apparent over time, when the predicted improvements in capacity and utilisation rates from virtualising mysteriously failed to materialise. Even worse, a significant proportion of available capacity was consumed in running zombie VMs which nobody was able to explain or justify, but everyone was afraid to shut down in case they turned out to be important.

The reason this happened is that – surprise! – Ops is hard. The idea of serverless computing is to get rid of the


day-to-day transactional Ops tasks, letting Dev roll out code much faster, and leaving the infrastructure mostly to manage itself. Instead of trying to "do the DevOps" by having an army of Ops Morlocks toiling away behind the scenes to support the Dev Eloi, with serverless there is no wizard behind the curtain. It really is automated machinery back there, and this frees up developers to get on with building whatever they are building.

Dev teams have mostly taken to the change with enthusiasm. Anything that takes friction out of the deployment process is good, and if it does not require developers to pick up a metaphorical pager, so much the better. That's not to say that there are no problems with serverless, of course; no chance of that, in this fallen world we live in. After all, if cloud is somebody else's computer, serverless means that your code is now dependent on someone else's code, running on someone else's computer. This sort of thing is great when it works, but this assumption of unreliable remote services has not yet been fully internalised.

For an example of the sorts of dependencies which are being introduced, look no further than the left-pad debacle in 2016 [1]. In case you have managed to erase the relevant memories, left-pad was an 11-line module in NPM which implemented basic string padding. For whatever reason, tons of projects included this as a dependency, and when the developer pulled all of their modules, including left-pad, utter chaos ensued.

How is serverless relevant to Ops?
So much for the Dev side of serverless – but I'm an Ops guy at heart; I used to be a sysadmin, and even though I've drifted a long way from the light with my strategic architectural role at Moogsoft, I still mostly think that way, and I spend a lot of my time with Ops people. Here's the thing: many Ops people miss the point of serverless because the consumption model of the applications is the same, and they run on top of familiar infrastructure – so what's the point, exactly? Sure, the developers are all very excited, but how is it relevant to Ops?

Some Ops types even feel threatened: "My job is looking after the servers, and now you're talking of getting rid of them!" This is the same category error that comes from forklifting physical servers first into VMware and then into the cloud without changing anything in your thinking. If you define your job as putting your hands to the keyboard any time someone wants to get anything done involving IT – which these days means pretty much everything – then yes, that job is going away. Serverless may or may not be the final nail in the coffin, but the lid is already firmly on.

Assuming you've been paying attention for the last 15 years or so, serverless is just the latest movement in the ongoing Ops switch from tactics to strategy. Instead of getting actively involved in delivering each and every request, Ops defines the capabilities and parameters of available infrastructure and then hands over both delivery and day-to-day running to automation.

This may sound like NoOps, but rather than kick that particular ant hill, I'd rather go back to the old distinction between operators (basically, tape jockeys) and actual system administrators [2]. If you're getting involved in day-to-day stuff, you're not sysadminning right. A proper sysadmin is taking a nap, feet propped up on a decommissioned server, secure in the knowledge that everything is working just fine – because otherwise, something would have told them already.

Microsoft's own pitch for serverless [3] is this: "What if you could spend all your time building and deploying great apps, and none of your time managing servers?" This doesn't mean servers aren't being managed, just that they aren't managed by hand. No developers or users are aware of or concerned with details of infrastructure, which is as it should be. Utility computing means that compute infrastructure is about as interesting as the electrical grid to outsiders. Sure, it's vital and we'd all have a bad day if it broke, but unless maintaining it is your actual job, you just plug in and don't give it a second thought – and the people maintaining it certainly aren't spending their days driving around the countryside, wiring transformers by hand just because.

Session: Serverless vs. Organizations: How Serverless forces us to *un*learn
Soenke Ruempler
"Serverless" is fundamentally changing the way software gets developed, shipped, and operated. For many organizations these changes are going to become a major challenge. Entire disciplines and teams might become obsolete or change substantially within organizations. What will change with serverless? What are typical signs of resistance against the change? How can we prepare our org and people for unlearning old patterns and behaviors that don't work anymore in a serverless world? How can organizational *un*learning get institutionalized in companies? Let's have a look from a knowledge management perspective.
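To make the scale of that dependency concrete: left-pad's entire job can be sketched in a few lines. The following Python rendering is illustrative only, not the original npm module, but it shows just how small the thing was that broke builds across the ecosystem:

```python
def left_pad(value, length, fill=" "):
    """Pad `value` on the left with `fill` until it is `length` characters long."""
    text = str(value)
    if len(text) >= length:
        return text
    return fill * (length - len(text)) + text

# A handful of one-liners like this was enough to break builds
# across the npm ecosystem when the module was unpublished.
print(left_pad("5", 3, "0"))  # → 005
```

The lesson is not that the code was trivial, but that nobody had internalised the assumption of unreliable remote dependencies until it failed.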


So far, so good – but what are the potential downsides of serverless computing?
First of all, because it's so antithetical to how (some) Ops teams view themselves and their place in the world, there's a good chance that it's happening as part of that growing proportion of IT spending that is happening outside the IT budget. Yes, it's that infamous shadow IT [4] once again. I spend a lot of time with Ops teams, and if you ask them about serverless, often they scoff and imply that while somebody in a lab might be playing with that, nothing serious is happening. Then the very next week you see a press release profiling how the very same company is launching huge new business-critical services on AWS Lambda or whatever. This sort of thing does not happen just once, but repeatedly. The real danger to Ops from serverless is not that it makes them irrelevant, but that they make themselves irrelevant by ignoring it.

Ops can make a real contribution by helping strategise how these new approaches can be adopted safely. Just because they don't have to reinstall operating systems doesn't mean the Ops team is idle, though. Sysadmins can only take those restorative naps if they know for sure that nothing bad is happening to the systems under their care. This once meant that there were monitoring agents running on servers, and that failure conditions had been carefully defined up front: IF this_happens AND that_happens THEN wake_up_sysadmin.

These days, platforms are self-instrumented and largely self-propelled. The trick is figuring out what is important to know about and react to, and what can be safely ignored. The complexity of modern IT crossed the threshold where those sorts of deterministic approaches stopped being useful some time ago, but serverless really rubs it in. With services almost entirely divorced from specific bits of infrastructure, old ways of keeping track no longer work.

If Ops does not engage with the Dev teams – who, remember, are already doing serverless, somewhere and somehow, whatever Ops thinks – the result will simply be that Dev will interpret Ops as damage and route around it. Then when something inevitably breaks, it will still be the pagers of Ops teams that go off and interrupt their slumber.

After all, there are significant conceptual overheads to serverless architectures for both Dev and Ops. For a start, it's good practice to test your stuff before it goes live in production. (Pause for howls of hysterical laughter to die down.) (Longer pause.) (Okay, I think we're good now.) Even if you have good test coverage for your own code, how do you deal with something like left-pad? Or say Google deprecates something you were relying on, with their usual levels of inscrutability. How do you account for that in your testing?

How about pricing? Google talks about going "from prototype to production to planet-scale." This is great for Dev at the start of building a new thing, because you can get a project off the ground for pennies. But what happens if you get slashdotted? (Yes, showing my age there.) A design decision which could go either way on a whiteboard might have significant business impacts once it hits the real world. If capacity can go from zero to unbounded, so can your spending. A good Ops team that is no longer running around re-imaging servers should be thinking about that sort of thing.

Dominic Wellington is Director of Strategic Architecture for Europe at Moogsoft, helping companies adopt AIOps to streamline their IT Operations and become more agile and responsive to ever-changing demands. He has been involved in IT operations for a number of years, working in fields as diverse as SecOps, cloud computing, and automation.
@dwellington
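The deterministic alerting style described above (IF this_happens AND that_happens THEN wake_up_sysadmin) amounts to a hand-maintained rule list. A minimal sketch in Python makes the point; the metric names and thresholds here are invented for illustration, not from any real monitoring product:

```python
# A deterministic, rule-based monitor of the classic kind: every
# failure condition must be enumerated up front, by hand.
ALERT_RULES = [
    # (description, predicate over a metrics snapshot)
    ("disk nearly full AND swap in use",
     lambda m: m["disk_used_pct"] > 90 and m["swap_used_mb"] > 0),
    ("service down AND restart failed",
     lambda m: not m["service_up"] and m["restart_failed"]),
]

def wake_up_sysadmin(metrics):
    """Return the descriptions of every rule that fired for this snapshot."""
    return [desc for desc, rule in ALERT_RULES if rule(metrics)]

snapshot = {"disk_used_pct": 95, "swap_used_mb": 12,
            "service_up": True, "restart_failed": False}
print(wake_up_sysadmin(snapshot))  # the first rule fires
```

The weakness is exactly the one the article names: with services divorced from specific machines, nobody can enumerate all the conditions worth waking up for.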

Short Talk: Serverless for Startups
Michael Dowden
One of the critical components of any startup company or pilot project is the ability to "fail fast" – to rapidly develop and test ideas, making adjustments as necessary. Learn how one startup team leverages serverless architecture to get app ideas off the ground in hours instead of weeks, greatly reducing the cost of failure and experimentation. You'll learn how to launch an app quickly, add features orthogonally, integrate with 3rd party apps in minutes, and control operating costs.

References
[1] https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/
[2] https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/
[3] https://azure.microsoft.com/en-us/overview/serverless-computing/
[4] https://www.theregister.co.uk/2016/04/04/shadow_it_danger/


Serverless vs. containers
Living in a post-container world

Making the choice between containers and serverless is a difficult one. But do you really need to choose only one? Sascha Möllering of AWS explains why deciding between the two technologies isn't just an either/or question.

by Sascha Möllering

Serverless has gained a lot of attention recently, possibly even more than containers. If you want to build a modern and future-proof architecture based on microservices, then serverless works just as well as containers as a solid building block. However, the question then arises of which technology to use: containers or serverless?

Building serverless applications
There are many different definitions for serverless. The best and most accurate definition is the "Serverless Compute Manifesto", which consists of eight core tenets and was presented at AWS re:Invent 2016 [1]:

• Functions are the unit of deployment and scaling
• No machines, VMs, or containers are visible in the programming model
• Permanent storage lives elsewhere
• Scales per request: users cannot over- or under-provision capacity
• Users never pay for idle time: no cold servers/containers or their costs
• Implicitly fault-tolerant because functions can run anywhere
• BYOC – Bring Your Own Code
• Metrics and logging are a universal right

With serverless applications, you only pay for consumed resources, making it possible to focus on the business logic. If we want to compare serverless and containers, we should focus on the computing part of serverless applications: functions. Typically, functions let you run code without provisioning or managing servers. Your code is only executed when it is needed, and the service is scaled automatically.

AWS Lambda [2] is a serverless compute service, and figure 1 shows the essential components of a serverless compute application. All functions need an event source; this could be changes in data state, requests to endpoints, or even changes in resource state. Functions can be implemented in different languages to process the events. An event source is the service or custom application that publishes events. Typically, functions run on a high-availability computing infrastructure. The administration of the computing resources is entirely handled by the service, including server and operating system maintenance, capacity provisioning, and automatic scaling, monitoring, and logging. All you need to do is supply your code in one of the supported languages. However, this support comes at the cost of flexibility. You can't log in to compute instances or customize the operating system or language runtime. These constraints enable the underlying service of the function to perform operational and administrative activities on your behalf.

Figure 1: All the parts of a serverless application
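The programming model above is easiest to see in code. A minimal AWS Lambda handler in Python looks like the sketch below: the platform invokes a plain function with the event published by the event source, and provisioning, scaling, and logging are the service's job. The event fields used here ("name") are an assumption for the example, not a fixed schema:

```python
import json

def handler(event, context):
    """Entry point invoked by the Lambda runtime for each event.

    `event` carries whatever the event source published (here we assume
    a dict with a "name" key); `context` exposes runtime metadata such
    as the remaining execution time.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, the same function can be exercised with a plain dict, e.g. `handler({"name": "ServerlessCon"}, None)`, which is also how unit tests for such functions are usually written.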


AWS Lambda automatically takes care of things like provisioning capacity, monitoring fleet health, applying security patches, deploying your code, and monitoring and logging your functions.

Container workloads
Containers are basically a method of virtualization that allows you to run your applications in resource-isolated processes. In many cases, containers allow you to easily package an application's code, configurations, and dependencies in an easy-to-use building block. This means you can use the same Docker images in all environments and implement the "immutable infrastructure" pattern. Containers give you a great amount of flexibility and control. With containers, you can typically choose the underlying operating system, but you are responsible for common administration tasks like patching. Unfortunately, containers add an additional layer of complexity, especially if you start building complex applications with a lot of inter-service communication. Additionally, containers still need a fair amount of administrative work, like applying security fixes for the Docker images.

AWS currently offers two different services for container orchestration: Amazon Elastic Container Service (Amazon ECS) [3] and Amazon Elastic Container Service for Kubernetes (Amazon EKS) [4]. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines. Amazon ECS supports AWS Fargate, so you can deploy and manage containers without having to provision or manage servers.

Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure. You simply provision worker nodes and connect them to the provided Amazon EKS endpoint.

Combining containers and AWS Lambda
Let's take a view from 10,000 feet: containers and functions are basic compute building blocks to implement business logic. Why not combine both? Serverless and containers can be used side by side to solve business problems. One typical approach is to start with serverless Lambda and move to containers if necessary.

How would a combined container and serverless architecture look? For example, let's think of an application that collects data running in a Docker container. After validating the input data with information located in a cache, this application generates and emits messages with a streaming service. Functions consume those messages and write them into data storage. The application running in a Docker container has strict requirements in regards to response time. It uses an in-memory cache that is much easier to implement with a container-based approach.

Of course, it would also be possible to consume those messages using a Java application running on a server. The advantage of the combined container-serverless approach is having more control over the offset management. Major disadvantages include things like additional implementation complexity and management of the underlying compute infrastructure.

This small example demonstrates that choosing the right compute building block is not an either/or question. It is more important to fit your specific use case. With containers, you have the ability (and also the responsibility) to maintain your own infrastructure. However, this flexibility comes with a price. You need the right people in your organization who are capable of building and maintaining a highly available cluster for your container workloads.

Conclusion
So, which approach is better, containers or AWS Lambda? The answer depends on your specific use case and requirements! Probably the best approach is to start with a serverless approach and – if necessary – switch to containers.

Sascha Möllering works as Solutions Architect at Amazon Web Services EMEA SARL. He's interested in automation, infrastructure as code, distributed computing, containers, serverless and the JVM.

References
[1] https://www.youtube.com/watch?v=yCOgc3MRUrs
[2] https://aws.amazon.com/lambda/
[3] https://aws.amazon.com/ecs/
[4] https://aws.amazon.com/eks/

Session: Serverless Economics: It's not only Lambda
Christian Bannes, Vadym Kazulkin
When we talk about prices, we often talk only about costs. But we rarely use only Lambda in our applications. We usually have other building blocks like API Gateway, data sources like SNS, SQS or Kinesis, and log services (CloudWatch). Also, we store our data either in S3 or in serverless databases like DynamoDB or, recently, Aurora Serverless. All these services have their own price models which we have to pay attention to. Moreover, we have to consider application data transfer costs. In this talk, we will draw the complete picture of the costs in serverless applications, look at the Total Cost of Ownership, and make some recommendations about when it's worth using serverless and when the traditional approach (EC2, containers) is.
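The function side of the combined architecture described above (functions consuming stream messages and writing them to storage) might be sketched like this. The record layout follows the shape Lambda uses for Kinesis events, where the producer's payload arrives base64-encoded; `save_to_storage` is a hypothetical stand-in for a real data store client:

```python
import base64
import json

def save_to_storage(item):
    """Hypothetical sink; a real function would write to a database."""
    print("stored:", item)

def handler(event, context):
    """Consume a batch of stream records and persist each message."""
    for record in event["Records"]:
        # Kinesis delivers the producer's payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        save_to_storage(json.loads(payload))
    return {"processed": len(event["Records"])}

# A hand-built event in the Kinesis shape, for local experimentation:
demo_event = {"Records": [
    {"kinesis": {"data": base64.b64encode(b'{"id": 1}').decode()}},
]}
print(handler(demo_event, None))
```

The container-based producer stays in charge of validation and the latency-sensitive cache, while the function scales with the message volume.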


Reap the benefits Serverless adoption

This article aims to cut through the confusion that many users today face in considering serverless as a solution strategy in their organizations. It examines the benefits and common use cases for serverless computing, as well as situations where serverless is not well suited and considerations in choosing a cloud platform. Armed with this information, architects and developers can begin to ask the questions that will contribute to achieving successful serverless adoption.

by Amila Maharachchi

Serverless computing is becoming increasingly popular in cloud-native development as it affords many advantages to an organization in terms of cost, scalability and agility. But serverless is not a silver bullet that can be used in all use cases. You need to be aware of when to use it and when not to. Moreover, there are some trade-offs to make when deciding between a mega cloud and an open source alternative for your serverless platform.

Defining serverless
Many developers or teams are early adopters who already are using serverless functions deployed on a cloud vendor's platform. These can be written with the purpose of getting some internal routine tasks done or as an architectural component of a larger solution. For the wider group of users not yet taking advantage of serverless computing, it is good to begin by agreeing on some of the terms.

According to the Cloud Native Computing Foundation (CNCF), serverless computing refers to the concept of building and running applications that do not require server management. Here, applications are bundled as one or more functions and deployed to a platform that executes them based on demand. In other words, computing resources are allocated to the functions only if there is a need for them to be executed. This is a major difference when compared with the way we deploy applications now.

Some vendors try to define solutions such as email services, etc. as also being serverless because users do not have to manage any servers. But, in this article and in general, the word "serverless" refers to the serverless functions mentioned above.

Serverless benefits
Serverless was ignited by the Lambda functions first offered by Amazon Web Services (AWS) in late 2014. People started seeing the convenience and began trying it out to get very basic use cases done. Now a recognized software development approach, serverless offers three main advantages to an organization: increased agility, native scalability and availability, and infrastructure cost savings. Let's look at each of these a little closer.

Increased agility: In current software development and deployment approaches, developers cannot just code and survive. They need to consider where the code is deployed and whether anything else is required to support the deployment process, etc. For example, developers writing a microservice to be deployed in a Kubernetes environment need to write it by adhering to some microservice framework and then create the necessary Kubernetes artifacts to support the deployment. They also have to understand Kubernetes concepts, as well as figure out how to run the microservice and artifacts locally to test them. By contrast, with serverless functions, developers only need to worry about the business logic they have to write, which will follow a certain template, and then just upload it to the serverless platform. This speeds things up significantly and helps organizations to ship solutions quickly to production and rapidly make changes based on feedback, increasing the agility of teams and the organization.


Scalability and code/solution availability: With traditional approaches, developers and DevOps teams have to factor in the scalability and availability of the solutions they build. Considerations typically include the peak load expected, whether an auto-scaling solution is needed or a static deployment will suffice, and requirements for ensuring availability. Serverless platforms remove the need to worry about these factors, since they are capable of executing the code when needed, which is what matters in availability, and scaling the executions with the demand. So, scalability and availability are provided by default.

Infrastructure cost savings: When we deploy a solution in a production environment, we generally need to keep it running 24x7 since in most cases it is hard to predict the specific time periods a solution needs to be available. That means there are many idle times between requests even as we pay for the infrastructure full time. For an organization that has to deploy hundreds of services, the costs can escalate quickly. With serverless functions, it is guaranteed that your code will get computing resources allocated only when it is executed, which means you pay exactly for what is being used. This can cut costs significantly.

Common serverless use cases
The benefits from serverless are most typically seen in four types of use cases: event-driven executions, scheduled executions, idling services, and unpredictable loads or spikes.

Event-driven executions: There are many places in our solutions where we want to run a piece of code based on an event occurring somewhere. These events do not occur continuously. Instead, they are random and mostly based on some user actions. In such cases, rather than running a service forever with your code wrapped, you can easily use a serverless function. The only thing is, you need to plug the event source into the serverless platform to bridge the events and the function execution. One good example is executing code to do some image processing when a user uploads an image. Here, the event source is a file bucket, which should be plugged into the serverless platform such that the code is executed when the file is added.

Scheduled executions: Unlike the random nature of event-driven executions, with scheduled executions we know exactly when we want to execute the code. For example, we might need to do some data processing every hour or every 30 minutes. There are alarm services available with serverless platforms that can execute your code at scheduled intervals, so that it doesn't need to run continuously.

Idling services: You may have services that get requests at random intervals with idle times between these requests. Instead of keeping the service running, you can leverage a serverless function such that it executes only when a request arrives.

Unpredictable loads or spikes: If you cannot predict the load of the requests arriving at your service, traditionally there are two ways to be prepared. One is to deploy the service with resources that cater to the peak load, but this is a waste of infrastructure. The other is to deploy the service with resources to handle the average load and autoscale. But autoscaling is not straightforward. You need infrastructure that supports it, and even then you may face architectural limitations in your service or app, such as clustering and sharing the state. In these scenarios, you can leverage serverless functions; since they always scale from zero, autoscaling is a native quality.

Session: Make your existing solution tastier with serverless salt
Pierrick Voulet
When it comes to the cloud these days, it is often associated with DevOps and micro-services related technologies. Serverless started a few years back, but it is only recently that it really caught people's interest, maybe since mature enough tooling became available. Many teams identified serverless as a much better fit than micro-services to improve the efficiency of their existing solutions: it is quite incremental, flexible and does not necessarily require structural modifications. In our team, we decided to confirm these thoughts by trying to integrate serverless into our business application platform. It was the occasion to appreciate the pros and the cons, but also to get a better idea of how complex it is to support. In this talk, we share our really positive journey and demonstrate how serverless can be a great start for teams with the objective of ramping up one step at a time on the cloud in general.

When serverless isn't a fit
While serverless computing supports many scenarios, there are certain use cases where it is not well suited. This is particularly true if you have a service that handles millions of requests per hour (that means a continuous high load). To be sure, you could still use a serverless function to handle it. However, doing so will cost you a lot more than deploying it as a service.

Moreover, the response time can get affected because serverless functions have a problem with cold starts. Since there is no code running during the idle time, when a request arrives, your code has to be loaded into the runtime. So, the first request might get delayed a little bit. If you cannot tolerate high latencies, then you will have to use workarounds to keep the containers warm. But then you are moving toward something more like traditional deployment.

Finally, serverless functions are unable to maintain a state within themselves, since their existence is not permanent. They just come and go. So, if a state needs to be maintained, it has to be in a database, which is not very performance friendly.


maintained, it has to be in a database, which is not very performance friendly.

Serverless platform tradeoffs
As mentioned earlier, serverless computing was ignited by AWS Lambda functions, and as of now, AWS is the leading provider of serverless functions on a public cloud platform. Google Cloud Platform, Microsoft Azure, and IBM also offer serverless functions on their public clouds. All of these cloud platforms also offer several supporting services for architecting serverless solutions, such as file buckets, notification services, and databases. There is value in having a complete platform and the ability to more easily design your solution. However, this comes with a big risk of cloud lock-in, since architecting the solution by leveraging the supporting services of one cloud makes it difficult to migrate to another cloud or your own data center.

The other option is to work with one of the open source serverless platforms available, such as Apache OpenWhisk, Kubeless, Fission, and OpenFaaS. Although open source projects allow you to set up your own serverless platform, the downside is that a huge investment is needed to learn, set up, and maintain such a platform. Also, unlike the public clouds, the open source alternatives lack event sources, which is a problem for some use cases. The upside is that you have a lot more flexibility than with the public cloud offerings.

Other serverless adoption considerations
Serverless functions pair very closely with microservices, since you can take the business logic out of a microservice and easily deploy it as a function. Therefore, if you already have a microservice architecture, it is not difficult to adopt a serverless architecture. That said, developers will have to change their coding practices, because serverless functions have to be written in a specific way, following a template enforced by the serverless platform. But they have the advantage of not needing to learn different frameworks and can just write the logic.

There are still improvements needed when it comes to tooling support for serverless functions. Although there are a few good public serverless platforms, such as AWS, Google, and Azure, developers have to write their own solutions or use multiple third-party tools to easily deploy their functions to the platform. Debugging is another problem, because traditional debugging techniques are not valid given the complex runtime concepts. Integrated development environments (IDEs) are emerging to fill the tooling gap to a certain extent, but there is a long way to go.

Many organizations now turn to serverless functions as a cost-effective replacement for simple middleware solutions. For example, in the past they may have used an enterprise service bus (ESB) to read a file from a file location, transform its messages, and send it to some backend. Now they will simply use a file bucket and a serverless function to accomplish the same task. At the same time, serverless functions do not replace more advanced middleware capabilities, due to the heavy coding required to support them.

The serverless landscape continues to evolve rapidly. Most recently it changed drastically with the introduction of Knative from Google. Until mid-July 2018, there were quite a few open source projects helping us to set up our own serverless platforms, and they used different architectures to support serverless behaviour for the functions. Knative came out with all the great features provided by these projects and also answered some problems that they couldn't. Basically, with Knative, anything can be run in a serverless manner.

Significantly, other serverless projects have only been capable of supporting functions running in a serverless environment. Knative allows you to run your microservices, micro ESBs, micro API gateways, etc., in a serverless environment as well. So now you don't necessarily need to write functions to gain the infrastructure advantage of serverless computing; you can simply use Knative. At the same time, we need to note that Knative is still in the alpha stage, and although it holds tremendous promise, it is not yet ready for enterprise production environments.

Conclusion
Serverless computing has become increasingly popular in cloud-native development, bringing advantages such as increased agility, native scalability support, and infrastructure cost savings. Several use cases lend themselves readily to serverless computing, and developers now have several commercial public cloud offerings and open source serverless solutions to choose from. Finally, while developers are using serverless functions for simple tasks today, they are moving toward using serverless for more complex cases, too – potentially leading to a battle with microservices. And who knows? Maybe serverless and microservices will merge as Knative starts redefining what serverless is. In either case, a solid understanding of both development approaches will provide an important foundation for the future development of applications and services.

Amila Maharachchi is the Director of Engineering at WSO2, leading the WSO2 Cloud Team. He is responsible for architecting and running the public cloud offering of WSO2. Amila has been at WSO2 for 8 years and has contributed to many of its cloud projects. At WSO2, he has become an expert on WSO2 product deployments, their availability, and migrations. Amila holds a Bachelor of the Science of Engineering degree from the University of Moratuwa, Sri Lanka.
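The file-bucket-plus-function pattern described in the article can be sketched in a few lines. This is only an illustrative sketch: the event shape, the `transform` logic, and the backend URL are invented for the example and do not correspond to any specific provider's API.

```python
import json
import urllib.request

def transform(message: dict) -> dict:
    """Hypothetical per-message transformation the ESB used to perform."""
    return {"id": message["id"], "total": message["qty"] * message["price"]}

def handler(event, context=None):
    """Invoked by the platform when a new file lands in the bucket.

    Reads the file's records from the (assumed) event payload, transforms
    them, and posts the result to a backend - the job an ESB used to do.
    """
    records = [transform(m) for m in event["file_contents"]]
    request = urllib.request.Request(
        "https://backend.example.com/orders",   # hypothetical backend
        data=json.dumps(records).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(request)  # network call disabled in this sketch
    return records
```

Everything outside the two small functions - triggering, scaling, retries - would be handled by the platform, which is exactly the point of the pattern.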


We need to think differently about apps, containers, and infrastructure

Monitoring serverless computing

Following on from containers, serverless computing is the next wave in application deployment and software delivery. Colin Fernandes explains why investing in operational and security analytics solutions that understand serverless is a worthwhile step to becoming successful with serverless.

by Colin Fernandes

In our recent report on The State of Modern Applications in the Cloud [1], companies in the EMEA region almost doubled their use of the serverless platform AWS Lambda, going from 12 percent of companies using Lambda in 2016 to 23 percent in 2017.

Like most things cloud, the terminology developing around serverless can be esoteric – even the name can be seen as a contradiction, as "serverless" platforms are still backed by infrastructure in a data centre somewhere. At its heart, serverless computing aims to make it even easier for developers to implement and run their applications by letting them focus entirely on their custom code.

Rather than having to care about the IT operations side at all, developers expect the cloud platform that they are running on to scale and optimise resources automatically in real time. Serverless is a logical extension of the fundamental cloud value proposition: abstract everything away and let the customer focus entirely on their essential intellectual property – their code.

Serverless computing models can offer advantages with containers
Serverless computing is part of the same technological revolution as container technology. However, it doesn't replace the use of containers. Both technologies bring synergistic benefits, and both enable optimization and efficiency across the delivery chain.

The Kubernetes community is addressing the growth of serverless by integrating serverless container infrastructures with higher-level Kubernetes concepts. The development of the open source Virtual Kubelet project has taken a lead in advancing this discussion within both the Kubernetes node and scheduling special-interest groups (SIGs).

Containers enable developers to be more productive, but they also require internal infrastructure and security management. This means that the developer and operations teams need to manage the process of container creation, integration, testing and deployment. In contrast, serverless computing removes this management overhead and places it squarely on the service provider.

A good example is AWS Lambda. It allows developers to write their code and then upload it to Lambda as a function. The function is then created within a container that includes everything the function needs to operate, and Lambda orchestrates the container instance. Once each function has been created, the developer is free to concentrate on developing applications rather than provisioning and managing resources – or management systems. Each function can be scaled automatically, with further capacity created, managed, restored and destroyed automatically.

The unforeseen challenges of serverless computing
Using serverless, application developers and site reliability engineers can now create high-performance, high-availability application run-time workloads at scale, in a very short space of time, using a combination of functions that are created, executed, and managed by the cloud provider. While the underlying infrastructure remains complex, the management headaches are abstracted from the developer team.


While this is great to start with, it can lead to issues over time. For instance, without insight into how the serverless platform is running and consuming resources, it is difficult to control hidden costs. Alongside the lack of real-time insight on usage and consumption, monitoring overall performance by individual functions and across the entire application lifecycle stack can be challenging. With serverless computing, when a problem arises, developers can lack visibility into every layer of the system. This can pose a major issue for troubleshooting and can impact customer experience. Often the only way to effectively discover root causes in distributed cloud-based systems is to have access to the right data at the right time.

Continuous planning around serverless deployments is therefore important. Alongside composing functions, developers have to understand the business value models for their functions and how they are consumed and bought over time. This can have a big impact on decisions made around how to solve problems – for example, a high-performance function that is paid for based on the volume and traffic of API calls made will be very different in cost compared to something that is priced on storage used or other metrics.

Understanding the financial value should encourage you to put appropriate policies in place so automated scaling does not end up creating unexpected increases in service costs. This should also amplify the need for run-time observability. Log and time-series metric data from each function can provide a complete history of an event that happened within different domains of an application, while intelligent correlation and outlier detection can be combined to measure the health of the entire application throughout its lifecycle. Combining log and metric data with analytics provides much more clarity and enables predictive root cause analysis.

Manual review of large volumes of log data files is neither exciting nor scalable, so putting a unified log and metrics analytics service in place from the start is essential. As with container technology, traditional IT and software monitoring tools built for on-premise or hybrid models were not designed for these highly dynamic architectures. It is therefore worth looking at analytics tools that understand these new architectures natively in order to get visibility into application performance where and when it matters.

Building business level insight on top of serverless
While it might be a relief to offload the provisioning and management of infrastructure, you still need to understand what's happening across your distributed cloud deployments. While traditional IT operations approaches may be less relevant, the need to actively manage business SLAs and KPIs in order to maintain great customer experience is more relevant today than ever.

Developers aren't the only teams that can use this data. For example, it can be used to track customer experience and increase the effectiveness of other teams like customer support and customer success. Similarly, getting insight into the real-time billable durations of function invocations can help developers manage their budgets over time, and avoid awkward questions from lines-of-business teams and finance. And for any developers handling customer data, serverless does not remove the need for trust, privacy, and compliance around how data is secured and managed.

Serverless adoption continues to grow rapidly in Europe as developers want more flexibility and speed without the need to rely on operations teams for computation, infrastructure, and security. Your customers don't care where your applications reside, but they do care about poor service levels or missing applications. It's therefore essential to maintain a consistent customer experience by getting insight into your applications, preferably in real time. Investing in operational and security analytics solutions that understand serverless is a worthwhile step to becoming successful with serverless.

Colin Fernandes is product marketing director, EMEA, at Sumo Logic [2].

Session: Distributed Tracing: From Chaos to Clarity
Billie Thompson
Ever felt lost in your microservices architecture, unable to tell which requests go where? This talk will give you a practical guide on how to clarify where requests go, and how to visualise them. It will help you make the case for OpenTracing, giving you a short tour of the current tracing options. Moving on, it will give practical implementation advice, including common problems such as high load and event-based systems, as well as diving into the future of tracing with the increasing adoption of FaaS. This is a practical talk aimed at people near the code.

References
[1] https://www.sumologic.com/resource/report/state-modern-apps-report/
[2] https://www.sumologic.com/


A head to head battle

Containers or serverless: Which is better?

Computing solutions like serverless and containers have become all the rage. In this article, Kayla Matthews explains the differences between serverless and containers and how each one has differing implications for data science applications.

by Kayla Matthews

The rise of cloud computing solutions, coupled with a growing need for always-on, connected experiences, is driving cutting-edge IT adoption and a push toward innovative technology. As a result, a derivative of cloud computing called serverless computing has recently emerged. It stacks up against another form of similar technology that has been popular for years now: containers, or container-based programming.

As one might expect, it's not always the best idea to adopt something new because it's trendy. It may work and may well be more efficient, but that doesn't necessarily mean it's the ideal solution for a team or organization.

That said, one of the newer debates in the industry relates to these two technologies – one already prominent and another rising. Between containers and serverless solutions, which is better for the average data science application? More importantly, what's the difference, and how does that affect the results?

What is serverless computing and programming?
Right off the bat, it's important to understand that the term "serverless" does not mean there are no servers or remote computing portals involved. In fact, the reality is precisely the opposite, because the technology still relies on remote, cloud-based servers.

If that's the case, why is it called serverless? It's because a third-party service provider [1] handles all the IT operations and maintenance. In other words, the primary platform still lives within a cloud operations system, yet the code gets written and deployed separately.

This tech allows the programming team to handle, develop and distribute the application itself, while the hardware and systems infrastructure get handled remotely. It removes the burden of delivering, powering, and


Right now, it seems like it’s going to be a while before containers get phased out completely.

maintaining remote hardware from a development or programming team and allows them to focus on their specialty – product and software development.

That's precisely why many hail serverless computing as the ideal solution, particularly through Amazon's Lambda [2].

How is serverless computing different from containers?
Containers are essentially what the name describes: a comprehensive software package [3] that gets delivered and used as a standalone application environment. The most common form of this is how applications get distributed at runtime.

Everything and anything that's needed to run and interact with a piece of software gets included or packaged together. That often entails bundling the software code, runtime, system tools, foundational libraries, and default settings.

In the case of virtualization containers – through services like Docker – they help solve the more common problems that arise from cross-platform use. When moving from one computing environment to another, developers often run into obstacles. If the supporting software is not identical, it can cause a series of hiccups. And since it's not uncommon for developers to move from a personal or work computer to a test environment, or even from staging into production, this becomes a widespread issue. That's just concerning the software itself; other issues can arise from network topologies, security and privacy policies, as well as varying tools or technologies.

Containers solve this by wrapping everything up nicely into a runtime environment. Virtualization technology and systems are remarkably similar, except the package exchanged between systems is an entire virtual machine – essentially an operating system.

Even with container-based solutions, companies and teams still require internal servers or hardware solutions to manage the data. The number of servers necessary [4] for this depends – and always will – on the data load or requirements.

Session: Building Resilient Serverless Systems with "Non-Serverless" Components
Jeremy Daly
Serverless functions (like AWS Lambda, Google Cloud Functions, and Azure Functions) have the ability to scale almost infinitely to handle massive workload spikes. While this is a great solution for compute, it can be a MAJOR PROBLEM for other downstream resources like RDBMS, third-party APIs, legacy systems, and even most managed services hosted by your cloud provider. Whether you're maxing out database connections, exceeding API quotas, or simply flooding a system with too many requests at once, serverless functions can DDoS your components and potentially take down your application. In this talk, we'll discuss strategies and architectural patterns to create highly resilient serverless applications that can mitigate and alleviate pressure on "non-serverless" downstream systems during peak load times.

OK, which is better?
Unsurprisingly, the ideal solution depends on the project at hand [5]. Some projects are certainly more suitable for a serverless computing environment, whereas others would be better served within a container-based solution.

Containers, by nature, allow for bigger and more complex applications and deployments. Virtualization has made them even more viable, because it allows incredibly complicated and monolithic solutions to be structured and delivered within the appropriate environment. Because of this, it affords robust controls over the individual containers, as well as the entire system.

Comparatively, serverless computing calls for reliance on a service provider, not least of which involves their goodwill and security capabilities. The partner company must trust that their provider has infrastructure and security policies in order. For reference, it does help to know that AWS Lambda [6], Microsoft, and Google are big, trustworthy providers in this space.

Also, serverless costs are much more manageable – even inexpensive – because a company only pays a provider for resources, like the time and volume of traffic each system uses. Pricing is based on active computing resources only, so idle time has little to no cost.


That is in direct contrast to the complexity of containers, which involve completing functions on a greater scale, which often drives up costs. Serverless relies on small, simple tasks with almost no overhead. Also, with serverless computing, the emphasis remains on the core concepts of software development, such as writing code.

For instance, a programmer can write a main application – like an external service – entirely separately, because there's no integration with the container or runtime ecosystem. But there's a downside to this. Less control and weaker integration mean less power to debug [7], test, and monitor an application, especially across platforms – and fewer performance metrics to boot.

The full and rounded controls container-based solutions offer allow users to test and effectively understand what happens inside and outside containers, resulting in more detailed analytics at all levels of deployment. When working with containers, it's not unheard of to identify and optimize performance issues as the project progresses, so the entire system returns the desired result.

Which is better for data science?
Containers still have their place, but serverless computing is definitely the up-and-coming star [8] in the world of big data. For data science specifically, serverless platforms don't require infrastructure management or extra teams to handle things like Hadoop or Spark clusters. Not to mention, serverless solutions monitor resource usage constantly and will scale up – or down – depending on requirements. That's incredibly helpful for smaller teams that don't have the bandwidth to scale up on their own, but it's equally beneficial for larger organizations due to the cost savings.

Serverless is also remarkably hands-off, which is one of its advantages for serving TensorFlow-like models [9]. The code gets uploaded to the provider or remote system, which then handles the deployment appropriately. Before uploading the code, a developer can write it using whatever environments or methods they prefer.

But that doesn't mean containers are obsolete – not yet, anyway. In fact, they are still incredibly useful and reliable, and in some cases it's possible to use both forms of technology interchangeably. As serverless computing solutions become more advanced and capable, it's possible this will change, but right now, it sure seems like it's going to be a while before containers get phased out entirely.

Kayla Matthews is a technology writer, contributing to publications like VentureBeat, TechnoBuffalo, Fast Company, and The Week. To read more posts by Kayla, check out her personal tech blog at https://productivitybytes.com/
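The automatic scale-up-or-down behaviour described above can be modelled as a toy autoscaler that targets a fixed number of concurrent requests per instance. Real platforms use far more sophisticated policies; the target and limits here are invented for illustration.

```python
import math

def desired_replicas(in_flight_requests, target_concurrency=10, max_replicas=100):
    """Scale-to-zero autoscaling sketch: run just enough replicas to keep
    each one at or below its target concurrent-request load, none when idle."""
    if in_flight_requests <= 0:
        return 0  # idle: scale to zero, so idle time costs (almost) nothing
    return min(max_replicas, math.ceil(in_flight_requests / target_concurrency))

print(desired_replicas(0))    # 0  - no traffic, no running instances
print(desired_replicas(7))    # 1
print(desired_replicas(250))  # 25 - a traffic spike fans out automatically
```

The `max_replicas` cap is the kind of policy the monitoring article above argues for: a guard rail so automated scaling cannot create unbounded costs.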

Session: Serverless Apps in a Multi-Cloud World
Niko Köbler
Serverless is cool and has many advantages. So far, so good. But for many people, the Damocles sword of "vendor lock-in" still hangs over the serverless world. This does not necessarily have to be the case. I'll show you how you can run serverless apps happily and effectively in a multi-cloud environment. We will discuss vendor-specific APIs and whether and how they can be bypassed. In addition, I will show you how you can use an event gateway to distribute individual events in a public multi-cloud and even in your own infrastructure, transparently to the consumers of the API. Vendor lock-in was yesterday. #LessSlides #MoreLive

References
[1] https://medium.com/swlh/everything-you-need-to-know-about-serverless-architecture-5cdc97e48c09
[2] https://nils-braun.github.io/why-you-should-use-amazon-lambda/
[3] https://techbeacon.com/essential-guide-software-containers-application-development
[4] https://worldwidesupply.net/blog/how-many-servers-need-business/
[5] https://rancher.com/containers-vs-serverless-computing/
[6] https://techbeacon.com/aws-lambda-serverless-apps-5-things-you-need-know-about-serverless-computing
[7] https://caylent.com/containers-vs-serverless/
[8] https://www.xenonstack.com/blog/data-science/functions-big-data-serverless-architecture/
[9] https://towardsdatascience.com/serving-tensorflow-models-serverless-6a39614094ff?gi=b771bf2c2983


An unspoken truth of serverless

Companies should run serverless on Kubernetes

What are the costs involved in switching to serverless? Priyanka Sharma explains why enterprises should move their operations onto Kubernetes and how this can reduce operational costs of all kinds.

by Priyanka Sharma

Serverless came of age in 2014 with the release of AWS Lambda [1]. Since then, there's been increased interest in FaaS and serverless – however, it's important not to confuse these technologies as interchangeable terms [2].

While there are many reasons for adopting serverless, a lesser spoken truth is why enterprises should run serverless on Kubernetes. Running serverless workloads outside of Kubernetes creates vendor lock-in, an inability to leverage existing data, and poor DevOps productivity. Martin Fowler's research on serverless architectures [3] illustrates the benefits of serverless on Kubernetes.

This particular column will cover the decrease in operational costs associated with going serverless in general, and then transition into why it's advantageous for enterprises to run serverless on Kubernetes.

Operational cost reduction of all kinds
Serverless technology helps reduce costs along multiple axes, the most important of which are the cost of computing and development savings. Reducing the cost of computing is the most obvious, celebrated, and meaningful benefit of serverless.

Horizontal scaling is also cost effective because you offload server-side management to one of multiple cloud vendors. They then utilize economies of scale, making you responsible for only the computing costs that you use. This can be a blessing for newer services that have little load on them or those with erratic traffic patterns; an owned or managed server would sit idle for long stretches and still end up on your balance sheet.

When you're free from server-side packaging and management, deploying applications becomes much simpler. You no longer have to worry about Puppet/Chef or determining your container story. That said, as serverless workloads become part of larger and more complex architectures, you should no longer approach serverless the same way. Instead, leverage Kubernetes to run your serverless workloads and keep the computing benefits.

To better understand the need for enterprises to run serverless functions on Kubernetes, it's important that we understand the advantages of doing so.

Avoiding vendor lock-in
While an isolated serverless application can run on a single cloud provider, large organizations often use hybrid and multi-cloud strategies. This is so they can leverage the best features from each cloud while also mixing in some owned and managed servers. Depending on the exact environment, running serverless workloads on Kubernetes makes it possible to use whatever combination you want.

Leveraging existing services and data
If you're a larger institution, the more you can leverage your existing system and corresponding data, the better equipped your greenfield products and applications will be. Running where the rest of your application is operating will only help with that. This is where technologies such as ACI and AWS Fargate are comparable, but not quite the same as a Kubernetes-run serverless application. A serverless container service like Azure Container Instances is raw infrastructure. While it's a great way to easily run several containers [4], Brendan Burns explains in "The New Stack" why building complicated systems requires an orchestrator to introduce higher-level concepts like services, deployments, secrets, and more.

DevOps productivity
At this point, there's no debating that Kubernetes has won the orchestration war. Developers are using it daily to build and scale large software systems. Marrying serverless and Kubernetes helps users quickly understand how serverless works in the context of their own reality, utilize existing logging and monitoring setups, and improve their troubleshooting skills.

An important caveat is that operationalizing serverless is non-trivial from a DevOps perspective, especially with regards to logging and monitoring – given that it's stateless. Kubernetes specialists can bring their chops and tooling and make a bad situation better. Using serverless with Kubernetes can ensure a large portion of your developers benefit rather than a small minority.

The value of running serverless on Kubernetes has become well understood among vendors and open-source providers. There are various solutions on the market that can help run serverless workloads on top of Kubernetes, and the jury is out on which solution will emerge victorious. Some projects that come to mind are OpenWhisk by Red Hat, Knative by Google (recently announced at Google Next), OpenFaaS, Virtual Kubelet, Kubeless, Fission, and the list goes on. It's important to note that these projects are not apples-to-apples comparisons, as they try to solve for different aspects of the central issue.

While I can't predict which technology will become the de facto standard, there is good news. GitLab is focused on helping enterprises get from idea to production as fast as possible using a single application. That effectively means we will leverage the best open-source technologies to incorporate serverless to run on Kubernetes with GitOps.

Priyanka Sharma is the Director of Cloud-Native Alliances at GitLab Inc. She also works on the OpenTracing project, an instrumentation standard for distributed tracing. A former entrepreneur with a passion for building developer products and growing them through open source communities, Priyanka advises startups at Heavybit Industries, an accelerator for developer products. Priyanka holds a BA in political science from Stanford University.

Session: Serverless Operations: From Development to Production
Erwin van Eyk
FaaS functions on Kubernetes are increasingly popular. We often talk about the developer productivity advantages, such as the time to create a useful application from scratch without learning a lot about Kubernetes. In this talk, we will focus on the operational aspects of serverless applications on Kubernetes. What does it take to use serverless functions in production, with safety, and at scale? This talk covers six specific approaches, patterns, and best practices that you can use with any FaaS/serverless framework. These practices are geared toward improving quality, reducing risk, optimizing costs, and generally moving you closer towards production-readiness with serverless systems. We discuss: declarative configuration, live-reload for fast feedback, record-replay for testing and debugging, canary deployments to reduce the risk and impact of application changes, monitoring with metrics and tracing, and cost optimization. We'll show how you can make different cost-performance tradeoffs; we'll discuss what the default choices imply, and how to tune them. We also share a live demo showing how you can easily follow these practices with the open source Fission.io FaaS framework, so you can use them on any infrastructure that runs Kubernetes (whether it's your datacenter or the public cloud).

References
[1] https://www.youtube.com/watch?v=9eHoyUVo-yg
[2] https://techbeacon.com/essential-guide-serverless-ecosystem
[3] https://martinfowler.com/articles/serverless.html
[4] https://thenewstack.io/the-future-of-kubernetes-is-serverless/

April 8 – 10, 2019 The Hague, Netherlands
Mastering Cloud Native Architectures, the Kubernetes Ecosystem and Functions

INNOVATE FASTER
From managed containerized environments to functions: Modern cloud native technologies are pushing productivity to the next level.

FOCUS ON WHAT REALLY MATTERS
To take full advantage of this new dawn of deploying, scaling and managing applications, software architects and developers need to re-focus the way they design software systems.

BUILD GREAT APPLICATIONS
Join Serverless Architecture Conference and gain key knowledge and skills for this new era of computing. Being a software engineer has never been better.

@ServerlessCon # ServerlessCon

ONLY TILL MARCH 7: Early Bird and group discounts – save up to € 215

TRACKS AND TOPICS

Serverless Platforms & Technology · Cloud Services, Backend as a Service · Docker, Kubernetes & Co. · Cloud Native Architecture

EARLY BIRD SPECIALS TILL MARCH 7

Early Bird Special: Save up to €215 till March 7.
Company Team Discount: Receive an additional 10 % discount when registering with 3+ colleagues.
Extra Specials: Freelancers and employees of scientific institutions benefit from individual offers – if you're interested, just send us an email to [email protected].

Nikhil Barthwal Michael Dowden Vadym Kazulkin Niko Köbler John McCabe

Soenke Ruempler Avishai Shafir Billie Thompson Erwin van Eyk Marcia Villalba

WHITEPAPER Docker, Kubernetes & Co

An introduction with Evan Anderson from Google
What exactly is Knative?

What are the benefits of serverless computing? What exactly is Knative and what features are still in development? In our interview with Evan Anderson, Senior Staff Software Engineer at Google, he gives an introduction to the new shiny serverless tooling based on Kubernetes. He also talks about the benefits and the downsides of serverless computing and why it is such a big topic at the moment.

Serverless Architecture editorial team: Hello Evan and thanks for taking the time to answer our questions. After Docker came Kubernetes, and now the new hot topic is Knative. What is Knative all about?

Evan Anderson: Kubernetes was an attempt to elevate the conversation about cloud computing using some of the lessons learned at Google about container orchestration. Knative is a follow-up on that success, building up the stack into the "serverless" space (scalable event-driven compute with a high degree of management automation) to enable developers to focus on business solutions rather than infrastructure.
We've broken down the project into three main pillars at the moment:

• Build focuses on taking source and producing Docker images in a repeatable server-side way.
• Serving provides scalable (start from zero, scale to many) request-driven serving of stateless Docker containers, including the tools needed to inspect and debug those services.
• Event orchestrates on- and off-cluster event sources and enables delivery to multiple compute endpoints (Knative Serving, but also Kubernetes Services or even raw VMs).

One of the key insights that our team had at the start of the project was that the Function-as-a-Service (FaaS) paradigm that's currently dominant in the serverless marketplace was narrower than necessary, and we could implement both FaaS and Platform-as-a-Service (PaaS) on top of stateless, request-driven Container as a Service. Based on previous experience building serverless platforms at Google, we had a good idea of how the serving and supporting components should look. But we also knew that great OSS software comes from the community, so we found a number of strong partners to make Knative a reality.
At its core, Knative has two goals:

1. Create building blocks to enable a high-quality OSS serverless stack.
2. Drive improvements to Kubernetes, which support both serverless and general-purpose computing.

As a set of OSS building blocks, Knative allows on-premises IT departments and cloud providers to provide a common serverless development experience for websites, stateless API services, and event processing.

Serverless Architecture editorial team: The Knative project is relatively new. What functions are still being developed? Approximately, when will it be production ready?

Evan Anderson: Knative implements a change-tracked server-side workflow where each deployment and configuration change results in a new server-side Revision. Revisions allow easy canary and rollback of production


changes, and were part of our initial design process based on experience with App Engine. Every Knative Revision is backed by a standard Kubernetes Deployment and an Istio VirtualService, so we deliver a lot of functionality out of the box. Further, we integrate with systems like Prometheus, Zipkin, and ELK to collect observability information, which can be a challenge in serverless environments.
Having worked in Google production on two SRE teams, I hesitate to say something is "production ready" until a customer has tried it in their own environment. Based on the work the community is doing, we are on track to reach the following targets in the next few releases:

• Reactive autoscaling: scale-from-zero (aka cold start) < 2 s with appropriate runtime and cluster configuration. Scale to 1000 instances (possibly much more than 1000 qps) in less than a minute.
• Automatic metrics, logs and telemetry collection, so you have visibility into what your code is doing.
• 7+ languages (Node, Java, Python, Ruby, C#, PHP, Go) with high-quality build templates where you don't need to write a Dockerfile to get started.
• Automatic TLS configuration, data plane security and rate-limiting controls.
• A few dozen event sources (GitHub, etc.) which can deliver to functions or applications.

Serverless Architecture editorial team: Which features are still being worked on and which ones are planned for future releases?

Evan Anderson: I think we'll always be working on autoscaling and networking features. Right now, we're focused on enabling core user scenarios like "run my application with autoscaling", "let me plug in my own build", and "I need to debug my application". One of the biggest requests we received from users with our initial release was to make each component more independent and pluggable, which is a big challenge while also trying to make the whole platform coherent.
There are also some features that don't land on our roadmap until we get more experience running the system, such as live upgrades from one release to the next, component packaging, and automatic generation of release notes.
There's a lot of interesting open questions in both the Build and Eventing prongs of the project – Build naturally extends into CI/CD and multi-stage validation and rollout, while the intersection of Eventing and serverless architecture is still evolving rapidly once you get past "send an event from A to B".

Serverless Architecture editorial team: Knative is built on top of Kubernetes; how is Istio involved in the Knative ecosystem?

Evan Anderson: Kubernetes provides a robust container scheduling layer, but Kubernetes networking tools (Service) were too primitive for the features we knew we wanted to build in Knative. Istio enables core Knative features such as percentage-based canary of new code and configuration and common security controls in the request path. Istio's service mesh model matched our experience at Google and gave us a lot of the network routing capabilities for free (meaning that we only needed to write the configuration to request it).
One of the interesting post-launch feedback items we heard from several members of the community was that they couldn't use Istio in their environment, but still wanted to use Knative. So something we've been looking at for future releases is creating internal custom resources (CRDs) to represent the desired Knative state for routing or load-balancing, and then using those CRDs as extension points so you could replace Istio with Contour or Linkerd if desired. In some cases, you might end up with a slightly longer network path or fewer authentication features, but it widens the set of use cases that Knative can address.

Session: Going FaaSter: Cost-Performance Optimizations of Serverless on Kubernetes
Erwin van Eyk
Serverless promises on-demand, optimal performance for a fixed cost. Yet, we see that the current serverless platforms do not always hold up this promise in practice; serverless applications can suffer from platform overhead, unreliable performance, "cold starts", and more. In this talk, we review optimizations used in popular FaaS platforms, and recent research findings that aim to optimize the tradeoff between cost and performance. We will review function reuse, resource pooling, function locality, and predictive scheduling. To illustrate, we will use the open-source, Kubernetes-based Fission FaaS platform to demonstrate how you can achieve specific goals around latency, throughput, resource utilization, and cost. Finally, we take a look at the horizon; what are the current performance challenges and opportunities to make FaaS even faster?

Serverless Architecture editorial team: How is Knative different from AWS Lambda and Google Cloud Functions?

Evan Anderson: First of all, Knative is a set of software building blocks, not a hosted and packaged solution like Lambda and Cloud Functions. Being a set of building blocks is different from hosted and packaged services in a few ways:

1. You can run it locally using minikube, or install it onto a Kubernetes cluster anywhere (including on


AWS or Google). Because you can run Knative yourself, we divide up our customers into three groups:
• Developers: these are our end-users who are solving business problems by writing code. Typically, they want their code to just run and scale with no extra work on their part.
• Operators: these are the IT professionals running Knative to provide a serverless experience to developers. This could be a cloud provider or an in-house private cloud – in either case, they are managing servers and upgrades.
• Contributors: these are the community members who are working on Knative itself. Contributors need ways to build and test Knative itself.
2. Existing hosted FaaS offerings target source deployments (i.e. upload your Node or Python code as a zip file). Knative can take a source upload and build a container, but it also works with any existing workflows that produce a container, including languages that Knative doesn't have any native support for yet. (For example, we even have Dart, Kotlin, Rust, and Haskell samples!)
3. Portability: Knative isn't tied to a single cloud vendor – it's hybrid and cross-cloud. There will be plugins to enable vendor-specific features (like Google Stackdriver for logs), but the core experience and runtime behavior will translate from one cloud to another.
4. It's OSS! You can download the source code, contribute your bug fixes, and participate in the working groups to make it better.

Serverless Architecture editorial team: Serverless is on the rise, although the name is misleading – servers are still needed at some level. But why is serverless such a big topic at the moment?

Evan Anderson: I'm not quite sure why serverless didn't take off sooner. The PaaS phenomenon in 2008–2012 (Heroku, App Engine, and others) had most of the same key ingredients: stateless process scale-out, server-side build, integrated monitoring. Probably one of the key areas that's gotten a lot better is the actual server-side infrastructure. Back when Google App Engine launched in 2008, there were a lot of limitations that were a product of the times: limited language choice, single vendor, no escape hatch to other styles of compute. Today, something like Cloud Functions or Lambda is still single-vendor, but the language choices are wider and the overall cloud ecosystem has gotten a lot deeper.
Regardless of "why now", there are a lot of benefits to adopting a stateless, request-driven compute model (combined with pay-per-use, I think this is the core of serverless). Much like the event-driven model in desktop UIs, handling a single event or request at a time often leads to simpler, more stable code. The architecture also lends itself fairly naturally towards developing in the 12-factor pattern, and a lot of the undifferentiated heavy lifting around monitoring, request delivery, and identity is taken care of for you, which is a huge productivity win. This also makes serverless systems stable and self-repairing, which is great for both hobby and enterprise projects where maintenance work is expensive.
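The "single request at a time, no state between invocations" style described here can be reduced to a pure function of its input event. A minimal sketch — the event shape and handler name are hypothetical, not a real Lambda or Knative signature:

```java
import java.util.Map;
import java.util.function.Function;

public class StatelessHandlerDemo {
    // A serverless-style handler: a pure function of the incoming event,
    // carrying no mutable state between invocations.
    static final Function<Map<String, String>, String> handler =
        event -> "Hello, " + event.getOrDefault("name", "world") + "!";

    public static void main(String[] args) {
        // The platform invokes the handler once per request/event:
        System.out.println(handler.apply(Map.of("name", "Evan")));  // Hello, Evan!
        System.out.println(handler.apply(Map.of()));                // Hello, world!
    }
}
```

Because nothing survives between invocations, any instance can serve any request — which is exactly what makes the scale-out and self-repair properties mentioned above possible.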

Session: Knative: Essential components for building serverless applications on Kubernetes
Nikhil Barthwal
Come learn about Knative, an open source collaboration to define the future of serverless on Kubernetes. In this session, I will introduce the Knative project and outline how it unifies function, application, and container developer workflows under a single API to make you more productive and your system easier to manage. I will also use demonstrations to highlight the key benefits developers and operators can expect by adopting platforms based on Knative.

Evan Anderson is a Senior Staff Software Engineer at Google, where for the last 12 years he's been working on private and public cloud initiatives. Evan's previous projects include networking and live migration, as well as control plane APIs. Outside of work, Evan enjoys being a dad and slowly running long-distance races.

WHITEPAPER Cloud Services & Backend as a Service

Integrating GraphQL in serverless architecture
Highly optimized APIs

With three years on the market, GraphQL is a sophisticated and established alternative to REST. It should be taken into consideration when creating or further developing an API. Various applications such as Instagram and XING already successfully use this REST alternative. This is reason enough to give an insight into how GraphQL can be integrated into modern serverless architectures with little effort. In the interaction between GraphQL and AWS Lambda, a highly scalable implementation is presented that can be adapted to different architectures and frameworks.

How often do front-end developers get annoyed that a REST call did not deliver all the required data? Even back-end developers are asked by colleagues to add another property to a response if it was missing. Fortunately, thanks to GraphQL, these problems are a thing of the past. While REST defines predetermined structures for the return of a call, GraphQL only returns the data that is desired in the front-end. The so-called over- and underfetching is avoided, because a call to the interface not only names the desired method to execute, but also the desired return structures.
Modern developments in microservices and serverless architectures make it possible to create highly scalable systems. Combining this advantage with GraphQL for network-load-optimized APIs results in highly optimized, data-driven systems. The article gives a first insight into GraphQL, with a special focus on the interaction with AWS Lambda as a representative of serverless architectures.

A first look
What does the call of a GraphQL server look like exactly? The client creates a JSON request with the elements query and variables. The content of the query object is a string that contains the name-giving graph query language as a value. Normal JSON objects of any complexity are passed as variables. This request is sent to the server via a classic POST request. The endpoint is, for example, /graphql. Listing 1 shows a server request including parameters.

Listing 1

{
  "query": "
    query testQuery($id: Int!) {
      getCustomer(id: $id) {
        id
        name
        orders {
          date
        }
      }
    }
  ",
  "variables": {
    "id": 0
  }
}

The example could now be expanded at will, for example by the birthday of the customer, his order IDs or the last log-in. Anything is possible as long as the properties within GraphQL are defined as returns. This is done by the schema (Listing 2). It contains all operations and object structures with which GraphQL shall work (Box: "Important GraphQL schema terms").

Listing 2

type Query {
  getCustomer(id: Int!) : Customer
}

type Customer {
  id: Int!
  name: String!
  age: Int
  birthdate: String
  orders: [Order]
}

type Order {
  amount: Int!
  date: String
}

After a query has been processed on the server, the response is returned. The response is also in JSON format, and can be read and processed by existing client implementations (Listing 3).

Implementation as Java back-end
After getting to know the basic usage of a GraphQL server, we now move on to the concrete implementation. An AWS serverless (Lambda) function with a connection to a NoSQL database will be created in the following. Choosing the programming language must come first, since AWS supports languages like Node.js, Python and Java. For the following example, Java 8 is used in conjunction with AWS' own NoSQL database DynamoDB. In that regard it's sufficient to create a Maven project with the following AWS and GraphQL dependencies (Listing 4).
The endpoint for calls is a method with two parameters – the input parameter and the context parameter:

public String handleRequest(InputType input, Context context)

The input parameter has already been optimized for GraphQL. AWS Lambda automatically deserializes the received JSON request into the named objects. The following two properties are sufficient for GraphQL:

class InputType {
  String query;
  Map<String, Object> variables;
}

Listing 3

{
  "data": {
    "getCustomer": {
      "id": 0,
      "name": "Micha",
      "orders": [
        {"date": "2017-12-21"},
        {"date": "2018-02-17"},
        {"date": "2018-02-21"}
      ]
    }
  }
}

Listing 4

<dependency>
  <groupId>com.graphql-java</groupId>
  <artifactId>graphql-java</artifactId>
  <version>7.0</version>
</dependency>
<dependency>
  <groupId>com.graphql-java</groupId>
  <artifactId>graphql-java-tools</artifactId>
  <version>4.3.0</version>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-lambda-java-core</artifactId>
  <version>1.2.0</version>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-dynamodb</artifactId>
  <version>1.11.280</version>
</dependency>
<dependency>
  <groupId>com.google.code.gson</groupId>
  <artifactId>gson</artifactId>
  <version>2.8.2</version>
</dependency>

Important GraphQL schema terms
GraphQL defines a set of terms that are used in its schema definition. Some of them are discussed in the article. For others, please refer to the GraphQL documentation:
■ Query – read access to data.
■ Mutation – write access to data. The structure of a mutation within the schema corresponds to that of a query, but begins with the word "mutation".
■ Inline Fragments – object trees can be clearly structured and, for example, reused in other queries. Duplicate code is thus avoided.
■ Type and InputType – objects and their properties are firmly defined in the schema. This info is known to the client and server, which means that validation can be applied directly when the server is started and a request is executed.
■ Scalar – objects such as date values (DateTime) can be added to the GraphQL native elements such as String, Int, and Boolean and used directly as data types.
■ Argument/Variable – when passing server requests, arguments can be written directly into the request or passed as separate variables.
■ Mandatory fields – described within the schema by means of a trailing "!".
■ Directive – the desired return structures can be filtered by the conditional operators if and skip.

Start up of the serverless function
When the Lambda function is started, the GraphQL schema is initially analyzed and the corresponding Java


handlers are wired. For that to happen, the schema must be filled with the corresponding information during creation. Three aspects are of importance here.
The first aspect is the parsing and validation of the schema; syntax errors are detected during the start up process:

SchemaParserBuilder parser = SchemaParser.newParser().file("schema.graphqls");

The second one is setting up the Java resolvers; these classes contain the subsequent business logic:

parser.resolvers(new QueryResolver());
GraphQLSchema schema = parser.build().makeExecutableSchema();

The third aspect is the transfer of the data to the GraphQL service – the transferred parameters are parsed by GraphQL and the corresponding business logic is called:

ExecutionInput exec = ExecutionInput.newExecutionInput()
  .query(input.getQuery())
  .variables(input.getVariables())
  .build();

The results can later be used directly as a response:

return GraphQL.newGraphQL(schema).build()
  .execute(exec)
  .toString();

Workflow of the requests
After the query has been passed to GraphQL, the method to be called is parsed and determined. Furthermore, the transferred parameters are automatically validated and converted into the corresponding Java objects. Thus, the GraphQL service has done its duty at this point. Now all the desired Java functionality can be executed in a well-known shape. Listing 5 connects to a DynamoDB table and reads a customer object. The idiosyncrasy here is this: if the Lambda function and the created DynamoDB table are operated in an AWS account, it's sufficient to specify the AWS region and the table name as connection parameters.

Listing 5

public class QueryResolver implements GraphQLQueryResolver {

  public Customer getCustomer(int id) {
    return getDB().load(Customer.class, id);
  }

  private DynamoDBMapper getDB() {
    AmazonDynamoDBClientBuilder builder = AmazonDynamoDBClientBuilder.standard();
    builder.withRegion(Regions.EU_CENTRAL_1);
    return new DynamoDBMapper(builder.build());
  }
}

When working with DynamoDB objects, you can of course proceed in the usual POJO manner (Listing 6). For this, AWS offers JPA-based annotations which convert return values into Java objects when communicating with the database.

Listing 6

@DynamoDBTable(tableName = "customer")
public class Customer {
  @DynamoDBHashKey(attributeName = "id")
  public Integer getId() { return id; }

  @DynamoDBAttribute(attributeName = "name")
  public String getName() { return name; }

  @DynamoDBAttribute(attributeName = "orders")
  public List<Order> getOrders() { return orders; }

  [...]
}

@DynamoDBDocument
public class Order {
  @DynamoDBHashKey(attributeName = "id")
  public Integer getId() { return id; }

  @DynamoDBHashKey(attributeName = "date")
  public String getDate() { return date; }

  [...]
}

As soon as the processing of the request is completed, the results are passed directly to the executing GraphQL service. This is where the added value of GraphQL comes into its own. Remember that the request only asked for the customer's ID, name, and orders. The customer object on the other hand contains additional properties. If this object is passed, the GraphQL service removes the unwanted properties and ignores structures so that only the requested elements are delivered to the client.
I want to briefly point out write requests (mutations). These run according to the same scheme as queries and are identified in the query only by the keyword "mutation". Function enhancements only require three updates in GraphQL. The first update is the expansion of the schema:
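The trimming behavior described here — GraphQL delivering only the fields the query asked for — can be illustrated in miniature with plain Java. This is not graphql-java internals, just the principle of filtering an object down to the requested field set:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class FieldSelectionDemo {
    // Keep only the fields the query requested; drop everything else.
    static Map<String, Object> select(Map<String, Object> object, Set<String> requestedFields) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (String field : requestedFields) {
            if (object.containsKey(field)) {
                result.put(field, object.get(field));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> customer = new LinkedHashMap<>();
        customer.put("id", 0);
        customer.put("name", "Micha");
        customer.put("age", 30);                 // not requested -> dropped
        customer.put("birthdate", "1988-01-01"); // not requested -> dropped

        Map<String, Object> trimmed =
            select(customer, new LinkedHashSet<>(Arrays.asList("id", "name")));
        System.out.println(trimmed); // {id=0, name=Micha}
    }
}
```

The resolver can therefore always return the full POJO; the selection in the query, not the Java code, decides what crosses the wire.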


type Mutation {
  addOrder(newOrder: OrderInput!) : Order
}

input OrderInput {
  customerId: Int!
  amount: Int!
}

The second one is the registration of the handler:

parser.resolvers(new MutationResolver());

And the third one, the implementation of the business logic:

public Order addOrder(OrderInput newOrder) {
  Customer c = getDB().load(Customer.class, newOrder.getCustomerId());
  Order o = new Order();

  o.setAmount(newOrder.getAmount());
  o.setDate(DateTime.now().toDateTimeISO().toString());

  c.getOrders().add(o);
  getDB().save(c);

  return o;
}

Table 1 shows the example request and the return data. The table clearly shows that the structure of a mutation is very similar to that of a query.

Table 1

Deployment and set up of AWS Lambda
One small detail is still missing in order to run the Lambda function within AWS. The Maven project must be compiled as a so-called fat JAR. Maven offers the possibility to work with the Shade plug-in. This bundles all required dependencies into a single, AWS-deployable artifact. With the execution of mvn clean package, the artifact is now created and can be uploaded to AWS. Lambda is now operational. In order to work with a client such as Angular, it only needs to be connected via an API gateway and assigned the appropriate roles and rights. Here, I would like to point to the very detailed AWS documentation for creating an API gateway with proxy integration to Lambda.

Summary and conclusion
The example shown is a good starting point for using GraphQL in Java. Furthermore, the implementation within a serverless application guarantees highly scalable use. Multiple methods can be bundled in a single request in GraphQL. Additionally, there are already existing libraries that allow for a seamless integration of a GraphQL interface with Spring Boot, as well as various implementations for Node.js and Python, among others. This guarantees seamless use in server and client applications.
Due to its design, GraphQL can be easily integrated into existing architectures. Since it's only a wrapper between the requested data and the business logic, it's easy to connect known databases like MySQL and Oracle via JPA. Thanks to this flexibility, it can be integrated very well alongside existing REST APIs to guarantee highly optimized requests to backend services.
I would recommend two more links to the inclined reader: the first one is the demonstration code that accompanied the article on GitHub. The second recommended link is the GraphQL homepage, where many more ideas regarding the use of GraphQL are pointed out.

Michael Dähnert is a Senior Solutions Architect at Sopra Steria Consulting. He has been working on various IT projects for over ten years. His special tasks include evaluating, optimizing, and redesigning IT architectures.

Session: Serverless Applications with GraphQL
Marcia Villalba
How do you build a serverless application with the least amount of code needed? In this talk, I will show you how to architect serverless applications with GraphQL, using AppSync. I will introduce you to the AppSync service and all its different components. AppSync is a managed service from AWS and is a GraphQL server. It has a lot of out-of-the-box functionalities that are really helpful when building applications, like authorization or subscriptions, and it connects directly to services like DynamoDB so you don't need to code that interface yourself.
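The addOrder mutation in the article follows a load-modify-save pattern against DynamoDB. Stripped of the AWS parts, the same flow can be checked in memory — a sketch with simplified stand-in classes, not the article's DynamoDB-backed code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MutationFlowDemo {
    static class Order {
        int amount;
        Order(int amount) { this.amount = amount; }
    }

    static class Customer {
        final List<Order> orders = new ArrayList<>();
    }

    // Stand-in for the DynamoDB table: customer id -> customer record.
    static final Map<Integer, Customer> db = new HashMap<>();

    // Same shape as the article's mutation: load the customer,
    // append the new order, save the customer, return the order.
    static Order addOrder(int customerId, int amount) {
        Customer c = db.get(customerId);   // corresponds to getDB().load(...)
        Order o = new Order(amount);
        c.orders.add(o);
        db.put(customerId, c);             // corresponds to getDB().save(c)
        return o;
    }

    public static void main(String[] args) {
        db.put(0, new Customer());
        Order o = addOrder(0, 42);
        System.out.println(db.get(0).orders.size()); // 1
        System.out.println(o.amount);                // 42
    }
}
```

Because the mutation returns the created Order, the client can immediately query back exactly the fields of the new object it cares about — the same selection mechanism as for queries.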
