7 Key Considerations for Microservices-Based Application Delivery: Ensuring the Success of Your Cloud-Native Journey
White Paper
By Lee Calcote and Pankaj Gupta

The role of application delivery in your cloud-native journey

As digital transformation changes how your organization conducts business, it is also changing how your products and services are delivered. The infrastructure and practices by which your software is continuously deployed and operated—your application delivery—is the fulcrum of your organization's digital transformation. Likely you are progressing on your cloud-native journey—that is, transitioning from monolithic to container-based microservices architectures with the goal of achieving agility, portability, and on-demand scalability. Kubernetes is the platform of choice for many companies, providing the automation and control necessary to manage microservices-based applications at scale and with high velocity.

The network is part and parcel of each and every service request in your microservices-based application. Therefore, it may come as no surprise that at the core of application delivery is your application delivery controller (ADC), an intelligent proxy that accelerates and manages application delivery. With no standard definition of what an application delivery controller does, the capabilities of intelligent proxies vary broadly. In this white paper, we'll explore application delivery controllers as they relate to your architecture choices, your use of container platforms, and open source tools.

7 key considerations for microservices-based application delivery

Before embarking on your cloud-native journey, it's essential to critically assess your organization's readiness so you can choose the solutions that best fit your business objectives. There are seven key considerations to address when planning your microservices-based application delivery design:

1. Architecting your foundation the right way
2. Openly integrating with the cloud-native ecosystem
3. Choosing the perfect proxy
4. Securing your applications and APIs
5. Enabling CI/CD and canary deployment with advanced traffic steering
6. Achieving holistic observability
7. Managing monoliths and microservices

A thorough evaluation of these seven considerations is best done with specific tasks and goals in mind. Depending on the size and diversity of your organization, you may need to account for a variety of stakeholders' needs—that is, tasks and goals that differ based on role and responsibility.

In the context of application delivery, we'll survey the most common roles with a generalized view of their responsibilities and needs as stakeholders. To facilitate a general understanding, we've grouped some roles where responsibilities overlap across multiple teams:

• Platform: Platform teams are responsible for deploying and managing their Kubernetes infrastructure. They are responsible for platform governance, operational efficiency, and developer agility. The platform team is the connective tissue among teams like DevOps, SREs, developers, and network operations, and therefore must address and balance the unique needs of a diverse group of stakeholders, or influencers, when choosing cloud-native solutions.
• DevOps: DevOps teams are responsible for continuously deploying applications. They care about faster development and release cycles, CI/CD and automation, and canary and progressive rollout.
• SREs: Site reliability engineers must ensure application availability. They care about observability, incident response, and postmortems. SREs often act as architects for the DevOps team and are often extensions of, or directly belong to, DevOps teams.
• Developers: Development teams are responsible for application performance and are focused on ensuring a seamless end-user experience, including troubleshooting and microservices discovery and routing. Application performance and troubleshooting are a shared responsibility among multiple teams.
• NetOps: Network operations teams are responsible for ensuring stable, high-performing network connectivity, resiliency, and security (web application firewalls and TLS, for example), and are commonly focused on north-south traffic. They care about establishing networking policies and enforcing compliance; achieving management, control, and monitoring of the network; and gaining visibility for resource and capacity planning.
• DevSecOps: DevSecOps teams care about ensuring a strong security posture and rely on automated tools to orchestrate security for infrastructure, applications, containers, and API gateways. DevSecOps works very closely with NetOps to ensure a holistic security posture.

[Figure: Diverse stakeholders have unique needs. Platform team: platform governance, operational efficiency, developer agility. DevOps: faster release and deployment cycles, CI/CD and automation, canary and progressive rollout. Developers: user experience, troubleshooting, microservice discovery and routing. SRE: application availability, observability, incident response, postmortems. NetOps: network policy and compliance; manage, control, and monitor the network; resource and capacity planning. DevSecOps: application and infrastructure security, container security and API gateways, automation.]

Each role has nuanced responsibilities. Whether you have a single person or entire teams assigned to these roles, each role's function needs to be accounted for.

It's important to note that these stakeholders are undergoing a transformation in their responsibilities—or at least in the way they perform their responsibilities. Depending upon your organization's size and structure, your stakeholders may or may not have clearly defined lines of accountability among roles. As you adopt a cloud-native approach to application deployment and delivery, you may find that the once-defined lines have blurred or are being redrawn. Be aware that the individuals who fill these roles typically go through a period of adjustment that can be unsettling until they adapt.

Your cloud-native infrastructure should be as accommodating as possible to you, your team, and your collective responsibilities and processes, so we encourage you to seek solutions that address the needs of all your stakeholders. Significantly, this includes evaluating different architectural models to find those best suited to the purpose. While every organization doesn't travel the same road to cloud native, every journey starts with initial architectural decisions—decisions that have substantial bearing on your path to cloud native.

1. Architecting your foundation the right way

Cloud-native novices and experts alike find that designing their application delivery architecture is the most challenging part of building microservices. Your architectural choices will have a significant impact on your cloud-native journey: some architectures will provide greater benefits, while others will prove more or less difficult to implement.

Whether you are a cloud-native pro or a novice, your selection of the right application delivery architecture will balance the tradeoff between the greatest benefits and the simplicity needed to match your team's skill set. Figure 1 highlights four common application delivery architecture deployment models.

[Figure 1: Citrix architectures for microservices-based application delivery]

Tip: Traffic directions. North-south (N-S) traffic refers to traffic between clients outside the Kubernetes cluster and services inside the cluster, while east-west (E-W) traffic refers to traffic between services inside the Kubernetes cluster.

Each of the deployment models in Figure 1 comes with its own pros and cons and is typically the focus of different teams. So how do you choose the right architecture for your deployment? Given the needs of your stakeholders and the many specifics involved in managing both north-south (N-S) and east-west (E-W) traffic, it is critical to assess the four architectures with respect to the following areas:

• Application security
• Observability
• Continuous deployment
• Scalability and performance
• Open source tools integration
• Service mesh and Istio integration
• IT skill set required

Let's examine each of the four deployment models.

Two-tier ingress

Two-tier ingress is the simplest architectural model to deploy and gets teams up and running quickly. In this deployment model, there are two layers of ADCs for N-S traffic ingress. The external ADC (Tier 1), shown in green in Figure 2, provides L4 traffic management. Frequently, additional services are assigned to this ADC, including web application firewall (WAF), secure sockets layer/transport layer security (SSL/TLS) offload, and authentication. A two-tier ingress deployment model is often managed by the existing network team (which is familiar with internet-facing traffic), and the Tier-1 ADC can also serve other existing applications simultaneously.

The second ADC (Tier 2), shown in orange in Figure 2, handles L7 load balancing for N-S traffic. It is managed by the platform team and is used within the Kubernetes cluster.
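To make the Tier-1/Tier-2 split concrete: Tier 1 forwards whole connections using only L4 information (ports), while Tier 2 is HTTP-aware and dispatches by Host header and path prefix to in-cluster services. The following is a minimal illustrative sketch, not a representation of any ADC's actual configuration; the hostnames and service names (cart-svc, catalog-svc, and so on) are hypothetical.

```python
# Sketch of the L4 vs. L7 split in a two-tier ingress model.
# Tier 1 forwards whole TCP connections; Tier 2 inspects HTTP host/path.
# All hostnames and service names are hypothetical, for illustration only.

L7_ROUTES = {
    ("shop.example.com", "/cart"):    "cart-svc",
    ("shop.example.com", "/catalog"): "catalog-svc",
    ("api.example.com",  "/"):        "api-gateway-svc",
}

def tier1_l4_forward(packet: dict) -> str:
    """Tier 1: decide on L4 fields only (no HTTP awareness)."""
    # All web traffic is simply handed to the Tier-2 ADC layer.
    return "tier2-adc" if packet["dst_port"] in (80, 443) else "drop"

def tier2_l7_route(host: str, path: str) -> str:
    """Tier 2: HTTP-aware routing on Host header and longest path prefix."""
    best = None
    for (h, prefix), svc in L7_ROUTES.items():
        if h == host and path.startswith(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, svc)
    return best[1] if best else "default-backend"
```

In a real deployment the Tier-2 routing table would come from Kubernetes Ingress (or equivalent) resources rather than a hard-coded dictionary; the point is only that the HTTP-level decisions live entirely at Tier 2.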
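Consideration 5 above (CI/CD and canary deployment with advanced traffic steering) ultimately depends on the proxy's ability to split N-S traffic by weight between a stable release and a canary, raising the canary's share as confidence grows. A minimal sketch of weighted selection, with the version labels assumed for illustration:

```python
import random

def pick_version(canary_weight: float, rng=random.random) -> str:
    """Route a request: send canary_weight fraction of traffic to the canary.

    canary_weight is in [0.0, 1.0]; rng is injectable for deterministic tests.
    Version labels "v2-canary" / "v1-stable" are illustrative placeholders.
    """
    return "v2-canary" if rng() < canary_weight else "v1-stable"
```

A progressive rollout then amounts to ramping canary_weight (for example 0.01, 0.05, 0.25, 1.0) between evaluation windows, rolling back to 0.0 if observability signals regress. Production proxies typically add session affinity on top of this so a given client sticks to one version.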