revolgy definitive guide to serverless

Index

So what exactly do we mean by “serverless” in this context?

Advantages of Serverless technologies
• Servers are not what they used to be anymore
• You pay only for what you actually use
• Deploy in seconds and improve your CI/CD
• Save your resources and keep your Ops team happy

Disadvantages of Serverless and how to avoid them

Change your mindset, unlock new opportunities

Grouping of Serverless technologies
• Function-as-a-service (FaaS)
• Platform-as-a-service
• Managed Containers

How to build serverless applications
• Think as a Cloud developer, not a VM developer
• Stateless applications for faster scaling
• Smaller apps for faster initialisation times
• Security and Robustness

Microservices and Serverless
• Is it really more expensive?
• Why should you try serverless

Importance of a good architecture in Serverless
• If you think good architecture is expensive, try out bad architecture first
• Reliable, robust and easy to change is better than fast, temporary solutions

Summary and key takeaways

When talking about the Cloud, the word serverless sounds a bit odd, doesn't it? Technically speaking, the Cloud is a remote server in itself.

So what exactly do we mean by serverless in this context?

In this case, serverless means you can deploy any function, application or container without needing to worry about the server on which they run.

And it doesn't end with just the server. You get the entire platform - from a deployment tool through logging and error handling to load balancing and endpoint management.

As a developer, I have the most experience working with Google Cloud, which is why I will often be referring to Google products here. However, the basic principles of serverless are the same for all Cloud providers.

Advantages of Serverless technologies

Servers are not what they used to be anymore

The greatest advantage of serverless is that even when you call your cloud "a server", it is not a server anymore. It's only a certain service. You do not have to be bothered with whether your application is deployed on a real server, a virtual machine, Linux, Unix or Windows. You only have to worry about how to implement your application and how to deploy it on the desired cloud platform.

You pay only for what you actually use

Another great difference is the serverless payment model. In most serverless types you only pay for the actual utilisation of the services. That means that if you do not use the service, the cost scales down to 0. In contrast, in the non-serverless Cloud you have to pay for the time your VM is on, or for your container's consumption of memory and CPU, even if it's doing nothing at the moment.
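The difference can be sketched with some back-of-the-envelope arithmetic. All prices below are made-up placeholders, not real cloud list prices:

```python
# Toy comparison of an always-on VM vs. a scale-to-zero serverless service.
# All prices are hypothetical placeholders, not real cloud list prices.

VM_PRICE_PER_HOUR = 0.05               # assumed flat rate, billed even when idle
SERVERLESS_PRICE_PER_SECOND = 0.00002  # assumed rate, billed only while serving

def vm_monthly_cost(hours_in_month=730):
    # A VM is billed for every hour it is powered on, busy or not.
    return VM_PRICE_PER_HOUR * hours_in_month

def serverless_monthly_cost(requests, avg_seconds_per_request):
    # A serverless service is billed only for the time spent handling requests.
    return SERVERLESS_PRICE_PER_SECOND * requests * avg_seconds_per_request

mostly_idle = serverless_monthly_cost(requests=100_000, avg_seconds_per_request=0.2)
print(f"VM: ${vm_monthly_cost():.2f}/month, serverless: ${mostly_idle:.2f}/month")
```

For a mostly idle service, the serverless bill stays near zero while the VM bill is constant; the crossover point depends entirely on your real traffic and your provider's real prices.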

Deploy in seconds and improve your CI/CD

Another significant advantage of serverless is the speed of deployment and creation of new instances of your app. We are talking about deployment in seconds as opposed to minutes. It helps to improve your CI/CD pipelines, makes a tremendous difference when dealing with traffic peaks, and shortens the outage time to a minimum when releasing new versions. All this results in an overall better quality of service for your customers.

Save your resources and keep your Ops team happy

You don't need a deployment team

Last but not least, the advantage I would like to mention here is the amount of overall resources needed to run your serverless services. Most public Cloud platforms provide you with tools for easy deployment. This means that you don't need a deployment team at all. Any developer will be able to handle it with some basic build tool. The operations team won't have any trouble with the infrastructure either, because its maintenance is completely taken care of. This way you will be able to handle even big applications (50 services and more) with a small team of 2-5 developers. The developers become 80% programmers and 20% DevOps engineers. In the long run this will make your application development and operation much more cost-effective. Thanks to serverless technologies, in the future we will see more micro companies of 2 to 10 people worth billions.

As a developer and team leader, I see a great benefit in the fact that most of the serverless technologies and tools available to developers run on open source. This means that the vast majority of us are already familiar with them, as we have worked with them before.

Another thing is that more and more programming languages are supported by serverless runtimes. For you this means it will be easier to find the right developer for your service, and it shortens the time to market of your application.

Take some time to choose the right environment. Selecting an environment that is too restrictive and sandboxed means that you will have to obey its limitations on libraries and resources - for example, using threads or direct access to sockets on App Engine. Fortunately, these limitations are slowly becoming a thing of the past with the development of new serverless technologies.

Disadvantages of Serverless and how to avoid them

Based on the serverless technologies you choose, you are more or less bound to your cloud provider. Cloud functions are the most bound, with their runtime/SDK and the way you deploy them. Google Cloud Run or AWS Fargate, on the other hand, give you the most flexibility to choose your runtime, framework and third-party libraries.

Change your mindset, unlock new opportunities

Optimize the expenses

If you want to go serverless, you have to change your mindset. You have to start actually developing applications suitable for serverless, not just copy your existing apps to a serverless environment. This means that you have to learn how to optimise your application for better performance and lower resource consumption. You will also have to figure out how to optimise the total cost of the development of your application. Remember that you pay not only for computing power and memory, but also for read and write operations to the database, the in and out traffic of your service, etc.

DDoS in a serverless environment typically takes a different form than usual, as it's very hard to beat the cloud provider in terms of computing resources. So even during a DDoS attack your application runs smoothly most of the time, but consumes a huge amount of computational resources until you drain your budget and the cloud provider cuts it off. You can even DDoS yourself in the same way, through a programming error or incorrect settings that skyrocket the usage of your resources in a short period of time and lead to an unpleasant surprise when you see your cloud invoice. That's why it is so important to know how to properly set the budget and start monitoring it daily as soon as it's visible. Spending limits (quotas) and error reporting are a MUST here to prevent disasters.
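A sketch of the kind of daily budget check this implies. The budget figure and the billing lookup are stand-ins; the real numbers would come from your provider's billing API:

```python
# Minimal sketch of a daily budget alarm. The spend figure is stubbed out;
# in a real setup it would come from your cloud provider's billing API.

DAILY_BUDGET = 40.0   # hypothetical limit in dollars

def fetch_spend_today():
    # Stand-in for a real billing-API call.
    return 55.0

def check_budget(spend, budget=DAILY_BUDGET):
    """Return an alert message when spend crosses the budget, else None."""
    if spend >= budget:
        return f"ALERT: spent ${spend:.2f} of ${budget:.2f} daily budget"
    return None

alert = check_budget(fetch_spend_today())
```

Running a check like this daily (or on every billing export) is the cheap insurance the paragraph above argues for; hard quotas in the provider's console remain the actual safety net.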

Grouping of Serverless technologies

Cloud Functions – Function-as-a-service (FaaS)

They are the most commonly used serverless technology on the different Cloud platforms. Cloud functions are widely supported by the major cloud providers such as Google Cloud, AWS (Lambda), Microsoft Azure, IBM and Oracle.

With the SDK of your cloud provider you are able to write simple functions that are triggered by an HTTP request, a change of a DB entity, or a file change.
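A minimal sketch of what such an HTTP-triggered function can look like. Real FaaS runtimes pass a framework-specific request object; a plain dict stands in here so the sketch runs anywhere, and the sensor data is invented:

```python
# Shape of an HTTP-triggered function, in the spirit of a FaaS handler.
# A plain dict stands in for the runtime's request object, and the readings
# dict stands in for a database lookup - both are illustrative.

def get_temperature(request):
    """Return the latest reading for the sensor named in the request."""
    readings = {"kitchen": 21.5, "garage": 12.0}   # stand-in for a DB read
    sensor = request.get("sensor", "kitchen")
    if sensor not in readings:
        return {"status": 404, "body": f"unknown sensor {sensor!r}"}
    return {"status": 200, "body": readings[sensor]}
```

The whole deployable unit is one function; the platform wires it to an HTTP endpoint, scales it, and tears it down when idle.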

Best use cases

Cloud functions are most commonly used to obtain and process specific data for client applications simply and fast - for example, minor and straightforward functions for specific account data, temperature measuring, or backing up user data (as a cron task).

The framework around these functions is built in a way that supports scripting languages common among front-end developers (e.g. Node.js, Python, PHP...). This allows front-end developers to write server-side code easily. Cloud functions are the easiest and fastest (in terms of learning time) gateway to the server-side world.

Advantages
• Easy to learn
• Support for many front-end languages
• Easy to deploy
• Easy to maintain (as separate functions)
• Cloud functions can save your day if you need a simple functionality and don't have a server-side developer at hand. Anyone with basic programming skills can write such a function and help you deploy your product sooner.

Disadvantages

In the case of microservices, you are even able to write the entire backend by triggering nothing but these functions (personally, I don't recommend it; however, in specific scenarios it can work as a proof of concept).

As mentioned earlier, depending on the SDK you use, you risk vendor lock-in. It may become difficult to move your application from one provider to another. Lots of functions written in different languages in one project could also lead to a "maintenance hell" in the future.

As cloud functions are meant to be only a supportive technology, there are some technical limitations to them, such as quotas for CPU and memory usage, a maximum time one function can run, and several others that can limit the functionality of the service as it grows (e.g. in Google Cloud, a maximum of 100 concurrent runs per minute per region is allowed).

Another disadvantage is that functions are stateless and thus not suitable for all kinds of computations.

A potential problem might also occur with one of the advantages mentioned before. However simple the programming itself is, an inexperienced programmer can do more harm than good. Incorrect loading from the DB without caching, inefficient sorting of big lists, unindexed queries and other mistakes may lead to a significant increase in expenses, especially when your application uses such a function several thousand times per day.

With Google App Engine or AWS Elastic Beanstalk, we are getting into a higher league of the serverless world.

Platform-as-a-service – Google App Engine / AWS Elastic Beanstalk

App Engine was created with the microservice architecture in mind. That means it allows you to deploy an entire application written in a specific language (Java, Python, Ruby, Go...) together with a provided SDK. This SDK contains APIs for services on the Google Cloud that your app can use to function as a complete service.

Google App Engine comes not only with its SDK but also with specific tools and technologies which help you build applications even faster: DataStore (a NoSQL database), Memcache or the Pub/Sub service, which are automatically managed and scaled for you, as well as automated logging, error reporting and tracing built into the SDK.

There are two types of App Engine: Standard, provided with the Google SDK, and Flexible, where you deploy your specific application in a customisable container. Let's discuss both scenarios.

Imagine Standard App Engine as a sort of Cloud Functions on steroids. You can scale it down to zero (when you do not use it, you don't pay for it) and deploy it quickly (in a matter of seconds), but as opposed to Cloud Functions, the whole service can be implemented as one application (one or more modules). This way it's easier to maintain. And again, just like a Cloud Function, every instance of the service can run in a different language. Instances are autoscaled and you can specify the type of the instance (e.g. CPU type, amount of memory, geographic location/region). The advantage of Standard App Engine is that everything you need is set up for you (logging grouped by request, tracing, memcache and so on).

Flexible App Engine sacrifices a certain amount of ease of configuration for more developer flexibility. It does not allow you to use the App Engine SDK, and you must encapsulate your app into a container. The Standard App Engine API is substituted with the client APIs of services provided by Google Cloud. However, with Flexible App Engine you can use your own libraries and APIs that are not allowed in Standard App Engine or would be expensive to run there. Flexible App Engine also allows you to set up your instances with more granularity and use specific settings for specific services.

Best use cases for Standard App Engine

As mentioned before, Standard App Engine was developed to help you build better and more scalable microservices. It is suited to hosting Web services with REST (or RPC) APIs that perform specific actions over DataStore (NoSQL database), Memcache or a SQL database.

Standard App Engine does not allow for writing to files in a file system or performing requests that are longer than 60 seconds. In specific cases where Standard is not enough we opt for Flexible.

Best use cases for Flexible App Engine

Flexible App Engine is designed for tiny microservices (mostly with one specific functionality) which Standard App Engine does not support, such as lightweight applications built for one specific purpose.

revolgy guide to serverless 08 You can implement it using any third party frameworks and runtimes (Spring….). Flexible App Engine offers more options for optimising your instances. When you opt for more compute power for example, you can add more CPU to your instances. A nice example could be, when you need your own custom library that is not available in common Standard runtime is a service for creating PDF invoices that are signed with private key and stored into folder in .

Advantages • Easy and fast creation of microservices • Fast deployment • Seamless auto scaling • Almost no configuration • Product maturity (the product has already been in use for the past 8 years and is only getting better) • Pay-as-you-go pricing

In a matter of days, you can implement a microservice that is effective, quickly scalable, and can handle millions of requests. With native canary deployment support and integrated tools for logging, tracing and error reporting, you can achieve a higher level of service with minimum resources.

Disadvantages

I do not think there are any significant disadvantages to App Engine. There are some limitations based on its design and main focus. Personally, the only problem I see is that it is a perfectly designed product (for its intended use cases) that is often misused by developers for functionality for which it wasn't designed (such as a big service with a lot of computation logic that performs several long-term tasks in one request).

• Built for microservices with request durations shorter than 1 minute
• A Standard App Engine application is harder to move to another provider (you have to remove the Google Cloud SDK and App Engine SDK). If you want to avoid vendor lock-in, you have to design it as portable from the beginning (encapsulate all SDK functionality into your own interface so you can later swap it for a new one).
• Compared to Cloud Functions, it requires more development skill and experience to create a more complex service
• You can experience the full backfire of the Cloud -> there is no computation/memory/storage limit, only big expenses (a DDoS attack does not end with unavailability of the service; it also affects your billing)

Managed Containers – Google Cloud Run / AWS Fargate

Google Cloud Run and AWS Fargate bring the best from both the VM/Kubernetes and serverless worlds. They combine the flexibility of containers with the simplicity of serverless. You have full control over the runtime and framework packages you use in your container, together with the flexibility of App Engine.

If App Engine is the best tool for developing and running microservices, then Cloud Run goes one step further. Again with scaling down to zero when you do not use it, it extends the advantages of Flexible App Engine (your service is encapsulated in a container). You can go from microservices to big services in any language, with any desirable framework and any technology you need. All this can be completely tailored to your needs. You can actually reuse your older services that were running on a VM or in a Kubernetes cluster and deploy them on the fully managed Cloud Run.

There are actually two types of Cloud Run

Standard Cloud Run, where you deploy only your container and the runtime takes care of all the important things in the background, and Cloud Run on GKE.

Cloud Run on GKE (Google Kubernetes Engine). In this case you run your container on a managed Kubernetes cluster, but with more control options to your advantage. If you have already run your container on a local (on-premises) Kubernetes cluster and you know with what settings your application runs best, you can copy those settings here. There are some differences in pricing as well: Cloud Run is priced based on usage, while Cloud Run on GKE is free and you only pay for the underlying compute resources (note that you have to have Istio and Knative deployed on your GKE cluster, and that in itself requires some compute resources).

Best use cases for standard Cloud Run

Standard Cloud Run works great for any service that you are able to encapsulate into a container. You can even run your legacy applications (those you used to run on a VM or in a Kubernetes cluster) with minor changes.

This means that you can easily port to the Cloud Run environment any type of code/program that is not suitable for App Engine, or you can use it when rewriting your older application (with the Google Cloud SDK).

Best use cases for Cloud Run on GKE

As mentioned before, if you are an experienced Kubernetes operator, you can optimise your application to perform at its best on your own (in case you are not completely satisfied with the automatic performance settings).

You can also use this combination for computing tasks that are beyond the standard settings (or inefficient with them). For Machine Learning tasks, as an example, you can set GPU or TPU usage, or run special data cleaning operations (such as MapReduce and so on...).

Another specific use case could be adjusting your own optional networking settings with a VPC (Virtual Private Cloud). When using Cloud Run it may seem as if you are running your application on Kubernetes, but you actually only take care of its settings. You don't need to worry about how to run it, as this has already been taken care of.

Advantages
• Pay-as-you-go pricing
• Managed Kubernetes (you don't have to know how to set up Kubernetes clusters)
• Legacy applications can be migrated to Cloud Run
• More cost-efficient than App Engine (pricing can still change, as the product is only in Beta at the moment)
• Full control over the runtime and frameworks used
• Portability -> the easiest way to move applications from one Cloud provider to another

Disadvantages
• As this whole concept was designed to help you create your own containers without getting locked into one Cloud provider's SDK, you will need to implement supporting tools as well.

For example in the Cloud Run container, you need your own HTTP server.

• The load balancer is not able to manage any state, so again, your service has to be stateless.
• Compared to App Engine, there's much more configuration work involved to set up the environment the same way.
• Google Cloud SDK libraries are not automatically updated as they are on App Engine, so you have to re-deploy from time to time.
• The service is still in beta.
• There are added expenses for computing power (but those expenses are still lower than what you'd have to pay developers).

How to build serverless applications

Think as a Cloud developer, not a VM developer

In the previous chapters we discussed some of the tools that help you scale your business faster and more easily. However, these are only the environments that you can run your application on.

The application itself is the core of your business, and you have to know how to build it so that it performs perfectly and stays cost-efficient.

With Serverless, once you deploy your application the resources are endless. The serverless engine will create as many instances as needed. However, if your application doesn’t use these resources wisely it can cost you a fortune.

Before serverless, your VM had a predefined set of resources, and you had to scale at the level of the VM. This might have affected the scalability of your service, but VMs were created with more resources (such as CPU and memory). In serverless, on the other hand, you get instances which are less powerful but highly scalable. And you pay for every operation and resource you use (you are charged separately for CPU, memory usage, read and write operations to the DB, and data transferred out over the network).

If your service is not optimised properly and becomes overloaded, or experiences high traffic peaks, it may cost you a lot.

Therefore, you have to learn how to work with memcache, how to reduce the data transferred in requests, and how to index and use a NoSQL DB. To simplify: you have to learn how to optimise instance startup times and cache as much as possible.

Stateless applications for faster scaling

When talking about load balancing and scaling your services: in the serverless world, all microservices are considered stateless. The load balancer scales your instances without duplicating sessions or transferring any state data. You should keep this in mind and implement your service accordingly.
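A sketch of what "stateless" means in practice: session data lives in an external store (a plain dict stands in for memcache or a database here), so any instance can serve any request. All names are illustrative:

```python
# A stateless handler: nothing survives between calls inside the instance.
# Session data lives in an external store, simulated here by a dict that
# stands in for memcache or a database shared by all instances.

session_store = {}   # stand-in for an external, shared store

def handle_request(session_id, item):
    """Add an item to the caller's cart; any instance can serve any request."""
    cart = session_store.get(session_id, [])
    cart.append(item)
    session_store[session_id] = cart   # state written back outside the instance
    return len(cart)

handle_request("abc", "book")
handle_request("abc", "pen")   # could be served by a different instance
```

Because the handler keeps nothing in instance memory between calls, the load balancer is free to route each request anywhere and to kill idle instances at will.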

The state is saved into some persistent entity outside of the instance (a DB, a file on Cloud Storage, persistent memcache), and the requests and responses are not part of any global context. If they were, you'd have to handle it with different tools or a different approach (transactions, asynchronous tasks and so on...).

Smaller apps for faster initialisation times

Initialisation times, especially during traffic peaks, are crucial to the overall charge at the end of your billing period. You pay not only for the resources you use, but also for the initialisation and creation of server instances (according to Google pricing, you pay for a minimum of 15 minutes of the life of each instance). If the initialisation time of one instance is 25 seconds, then in a sudden spike period, based on your scaling settings, the engine can keep creating new instances to satisfy all client requests for at least 25 seconds, until there are enough running instances to handle the load. That means it can easily spawn 50 new instances that in fact handle just a few requests each and are killed afterwards. However, you still have to pay for 15 minutes of lifetime for each of them. Consider that you only needed 5 instances, but because none were available during this 25-second window, you ended up paying for 50 (each for a short period of 15 minutes). If your application experiences several such spikes a day (e.g. an accounting app where invoices are uploaded every hour in buckets...), you will end up paying for a lot of unnecessary instances.
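The arithmetic in the spike example works out like this; the per-minute price is a made-up placeholder, while the 15-minute minimum follows the text above:

```python
# The spike scenario as arithmetic: instances spawned during a 25-second
# cold-start window, each billed for a 15-minute minimum lifetime.
# The per-minute price is a hypothetical placeholder.

PRICE_PER_INSTANCE_MINUTE = 0.002   # assumed rate
MIN_BILLED_MINUTES = 15             # minimum billed lifetime per instance

def spike_cost(instances):
    return instances * MIN_BILLED_MINUTES * PRICE_PER_INSTANCE_MINUTE

overprovisioned = spike_cost(50)    # what the spike actually bills
needed = spike_cost(5)              # what 5 warm instances would have cost
print(f"paid ${overprovisioned:.2f} instead of ${needed:.2f} per spike")
```

A 10x overshoot per spike, several spikes a day, every day of the billing period: that is why shaving seconds off the startup time pays for itself.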

This phenomenon was not so obvious in VM environments, where loading a new VM OS and its services can take several minutes. In that case, 25 seconds to start your app is irrelevant, and you cannot scale your VMs as fast as a serverless engine can scale your instances.

So to make your application perform better (be cost-effective and work fast), you have to learn how to create services with minimum functionality and only the needed dependencies.

E=mc^2 (Error = power(more code))

The same principle applies to creating containers. The more basic (e.g. Alpine) a configuration you are able to create (without build tools and other libraries), the fewer opportunities you give someone to harm your container. And with correct API and endpoint exposure we are coming to another topic: security.

Security and Robustness

The more modules an application contains, the more vulnerable it is to external threats. That's why an application should expose endpoints only for the tasks it is supposed to do. If you need other (administration) endpoints, create them on a different path or port, or as a hidden service. Use HTTPS and API keys for request calls that come from outside your internal network.

There cannot be a way to break down your instance by calling your endpoints. This means errors should be handled at the application level and propagated to the request caller (client). No request should end with a memory leak or an infinite loop.
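The principle can be sketched like this; the endpoint and the error mapping are illustrative:

```python
# Errors handled at the application level: the handler never lets an
# exception take the instance down, it maps failures to an error response
# and propagates them to the caller instead.

def divide_endpoint(params):
    try:
        a, b = float(params["a"]), float(params["b"])
        return {"status": 200, "body": a / b}
    except (KeyError, ValueError, ZeroDivisionError) as exc:
        # Report the problem to the caller; keep the instance alive.
        return {"status": 400, "body": f"bad request: {exc!r}"}
```

Whatever garbage the client sends, the instance answers with a well-formed error and stays healthy for the next request.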

Write your code as if it were open source. People will always somehow find a way to download your code (through your mistake or a security breach). By this I mean: let people know how it works, but do not allow them to break it (from the outside).

A serverless environment increases the security of your app, because the system takes care of OS patches and security certificates, and proposes actions to take in case your app does not follow specific standards.

Microservices and Serverless

These two concepts grow with each other. Serverless helps microservices to perform better and to be more cost-effective and serverless technology is widely used for Cloud Functions and Microservices.

Wrapping some part of your application into a microservice, or building a new one for a task you need, and deploying it to the Cloud is the easiest way to test serverless for yourself and make a proof-of-concept service for your application. In a matter of hours you can deploy a service that can be used by millions of users, without any special knowledge of infrastructure/Linux/Kubernetes, and move your business a bit further.


Is it really more expensive?

Now we are finally getting to the main questions: microservices can be created with other technologies, such as VMs and Kubernetes, which are "cheaper" to run. Why then create a whole new concept with serverless that is more expensive? Why should you try serverless?

Let's have a look at it from this perspective: to create a microservice on a VM, you need to create a new VM, install an OS on it, and install the programs to build and run your applications (server, DB...).

To run the VM, you have to maintain it, create health checks for your service, find a way to get logs and traces from your applications, and so on.

Then, when you want to scale it, you have to set up a load balancer to start identical copies of the VM, and test and optimise it. And so far you have only one environment (let's say Development); you may need one or two more. For this you have to know how to set up the network and create automatic deployment scripts (or a new deployment application).

Almost the same scenario applies to a Kubernetes cluster. Maybe it will be easier to scale and duplicate containers, but you have to know how to set up the Kubernetes cluster and how to manage its internal network, and you will end up with a similar nightmare.

If you are lucky, you are able to find a developer who can handle all these tasks successfully. But the question is, how many services are they able to manage? After you deploy 7 or more services into the production environment, they won't be able to program any more, because DevOps work will take most of their productive time. After another 7, they won't even have time to sleep. And after a few months, you are without a developer. Most of the tasks are not automated, and the knowledge of how to maintain at least the status quo is gone too. Your business doesn't perform, something breaks... You can imagine how this ends.

In order to maintain more applications, you suddenly need more people who do not actually participate in building your core business: more developers and more DevOps engineers. Depending on how complex your application becomes, you need exponentially more people to run it. (Of course, there are exceptions to the rule, but these are rarities.)

Now let us have a look at the scenario with serverless microservices. You need one developer to create and deploy the service. Once the service is there, no other settings are needed (no load balancing, no network settings...). Development and Production environments are set during the deployment phase, or by changing your project in the Cloud Console. One developer is able to maintain more than 15 services across all platforms, staying 80% developer and 20% DevOps (as long as there is no major flaw in the architecture).

So you pay more for the technology, but less for human resources. Now the question is, what is more cost-effective in the long run? (A tip: machines cost less than people :-))

Why should you try serverless

Because serverless is the future of cloud computing, and the sooner you put your hands on it, the better you will be able to use it to your advantage!

I do recommend trying all the serverless types to find out what works best for your business, or how you can employ these technologies in your future or current projects.

Importance of a good architecture in Serverless

Even when you know which technology is best for your service, apply the best practices, and create services that are fast, scalable and use minimum resources, you will still drain your human resources on operational problems if you fail to create an effective architecture for your system.

If you think good architecture is expensive, try out bad architecture first

If you create dependencies between APIs, or data/state dependencies between microservices, you will significantly slow down the development of new features. In serverless we mostly talk about the Microservice Architecture pattern. I am not going to go into detail on concrete patterns, but I would like to point out some important principles of microservices and how to use them.

Microservice is a standalone application (a small one)

A microservice should be a standalone application. It should work inside its sandbox. That means no other application should change the data or state of the service outside of the service API.

Once another application is able to touch the data of the service without the service knowing it, it is not a microservice; it is just another layer of your complex system.

You should be able to deploy a microservice in one atomic step. Once there are several independent layers of the service, something is wrong. Basically, you should be able to encapsulate your entire service into one container.

Microservice owns the API, not the clients

A microservice should have an API that provides the base functionality of the service, is secure, and is as generic as possible. That means the service should be built to serve several clients, not one particular client. It also means the API should not be dictated (custom-built) by one client only.

There are ways to give a client a specific API, like versioning of the APIs (v1, v2, v3...). This means you can run several versions at the same time. You can also use separate security credentials for specific APIs, such as an Admin API, an Editor API and so on.
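API versioning can be sketched like this; the routes and handlers are illustrative, not a real framework's API:

```python
# Versioned APIs side by side: clients pin a version, the service can evolve
# without breaking them. Routes and response shapes are illustrative.

def list_orders_v1(user):
    return {"orders": [], "user": user}             # original response shape

def list_orders_v2(user):
    return {"orders": [], "user": user, "page": 1}  # extended response shape

ROUTES = {
    "/v1/orders": list_orders_v1,
    "/v2/orders": list_orders_v2,   # both versions are served at the same time
}

def dispatch(path, user):
    return ROUTES[path](user)
```

Old clients keep calling /v1 untouched while new clients adopt /v2, and the v1 route is retired only once nobody depends on it.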

Reliable, robust and easy to change is better than fast, temporary solutions

Even when microservices are tempting to use for troubleshooting and fast fixes of bad solutions, you should not use them for this purpose.

An example could be sending invoices to a specific API (let's call it the customer 1 API) in your old monolithic application. With a new big customer, you add another API (let's call it the customer 2 API) that you build as a new microservice (because you have heard about microservices and you want to try them). But you have now split the same functionality (sending invoices to the customer) into two separate applications, and you use two different APIs. Even worse would be if the microservice API used data binding from the monolithic application and therefore wasn't independent.

Best practice in this case is to create a new (your own) API on the new microservice for sending invoices, move both APIs (for customer 1 and customer 2) to the microservice, and use it from whichever application you need (the monolith or another service).
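The refactoring described above can be sketched like this; all names are illustrative:

```python
# One invoice-sending microservice with its own generic API, and thin
# per-customer adapters that both call it. All names are illustrative.

def send_invoice(customer_id, invoice):
    """The microservice's own API - the single place invoices go out."""
    return {"sent_to": customer_id, "total": invoice["total"]}

# Both the old monolith path and the new customer use the same service:
def monolith_send_for_customer1(invoice):
    return send_invoice("customer-1", invoice)

def send_for_customer2(invoice):
    return send_invoice("customer-2", invoice)
```

The invoice logic now lives in exactly one place; adding customer 3 is a one-line adapter, not a third copy of the functionality.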

You should aim to create a robust architecture from small sets of microservices to achieve a great system. It will pay off in the future, when you avoid a lot of "temporary" easy fixes becoming permanent, "wrong" solutions.

Summary and key takeaways

Again, Serverless is the future of cloud computing. Every type of Serverless has its advantages and dis- advantages. In my personal opinion it brings more positive energy into cloud computing and will cause unmanaged VMs and Kubernetes to become extinct.

The main advantage of serverless technology is the simplicity of the design, deployment and maintenance of the services you create.

That means that even a small team (of 2 to 5 people) is able to maintain a product that serves millions of customers worldwide. You will be able to handle thousands of requests per second with ease, continue with the development of new features (new services), and still have time to care about your core business.

You will spend minimum time on DevOps and maintenance. Serverless technology comes with great tools for monitoring and error reporting (Cloud Console, Stackdriver...).

All these tools help you create great million-dollar products with a small team of people. The Cloud (infrastructure) becomes only one of the services you consume, and not another department (a team of magicians) in your company structure. Cut the costs of your existing solutions and create new ones faster, with fewer resources needed. Be smart, be serverless!

Today this goes as far as products such as Google Firebase, where you can find the best tools and technologies needed for a mobile business (automatic authentication, mobile push notifications, analytics and more).

The best thing about it is that it has limitless possibilities! You can create any type of server application, analytics pipeline, test interface or scheduled task (cron) you like, even when your company is a one-man show developing apps in your garage.
