μTune: Auto-Tuned Threading for OLDI Microservices
μTune: Auto-Tuned Threading for OLDI Microservices

Akshitha Sriraman and Thomas F. Wenisch
University of Michigan
[email protected], [email protected]

ABSTRACT

Modern On-Line Data Intensive (OLDI) applications have evolved from monolithic systems to instead comprise numerous, distributed microservices interacting via Remote Procedure Calls (RPCs). Microservices face sub-millisecond (sub-ms) RPC latency goals, much tighter than their monolithic counterparts that must meet ≥ 100 ms latency targets. Sub-ms–scale threading and concurrency design effects that were once insignificant for such monolithic services can now come to dominate in the sub-ms–scale microservice regime. We investigate how threading design critically impacts microservice tail latency by developing a taxonomy of threading models—a structured understanding of the implications of how microservices manage concurrency and interact with RPC interfaces under wide-ranging loads.

We develop μTune, a system that has two features: (1) a novel framework that abstracts threading model implementation from application code, and (2) an automatic load adaptation system that curtails microservice tail latency by exploiting inherent latency trade-offs revealed in our taxonomy to transition among threading models. We study μTune in the context of four OLDI applications to demonstrate up to 1.9× tail latency improvement over static threading choices and state-of-the-art adaptation techniques.

1 Introduction

On-Line Data Intensive (OLDI) applications, such as web search, advertising, and online retail, form a major fraction of data center applications [113]. Meeting soft real-time deadlines in the form of Service Level Objectives (SLOs) determines end-user experience [21, 46, 55, 95] and is of paramount importance. Whereas OLDI applications once had largely monolithic software architectures [50], modern OLDI applications comprise numerous, distributed microservices [66, 90, 116] like HTTP connection termination, key-value serving [72], query rewriting [48], click tracking, access-control management, protocol routing [25], etc. Several companies, such as Amazon [6], Netflix [1], Gilt [37], LinkedIn [17], and SoundCloud [9], have adopted microservice architectures to improve OLDI development and scalability [144]. These microservices are composed via standardized Remote Procedure Call (RPC) interfaces, such as Google's Stubby and gRPC [18] or Facebook/Apache's Thrift [14].

Whereas monolithic applications face ≥ 100 ms tail (99th+%) latency SLOs (e.g., ∼300 ms for web search [126, 133, 142, 150]), microservices must often achieve sub-ms (e.g., ∼100 μs for protocol routing [151]) tail latencies, as many microservices must be invoked serially to serve a user's query. For example, a Facebook news feed service [79] query may flow through a serial pipeline of many microservices, such as (1) Sigma [15]: a spam filter, (2) McRouter [118]: a protocol router, (3) Tao [56]: a distributed social graph data store, (4) MyRocks [29]: a user database, etc., thereby placing tight sub-ms latency SLOs on individual microservices. We expect continued growth in OLDI data sets and applications to require composition of ever more microservices with increasingly complex interactions. Hence, the pressure for better microservice latency SLOs continually mounts.

Threading and concurrency design have been shown to critically affect OLDI response latency [76, 148]. But prior works [71] focus on monolithic services, which typically have ≥ 100 ms tail SLOs [111]. Hence, sub-ms–scale OS and network overheads (e.g., a context switch cost of 5-20 μs [101, 141]) are often insignificant for monolithic services. However, sub-ms–scale microservices differ intrinsically: spurious context switches, network/RPC protocol delays, inept thread wakeups, or lock contention can dominate microservice latency distributions [39]. For example, even a single 20 μs spurious context switch implies a 20% latency penalty for a request to a 100 μs–SLO protocol routing microservice [151]. Hence, prior conclusions must be revisited for the microservice regime [49].

In this paper, we study how threading design affects microservice tail latency, and we leverage these design effects to dynamically improve tails. We develop a system called μTune, which features a framework that builds upon open-source RPC platforms [18] to enable microservices to abstract threading model design from service code. We analyze a taxonomy of threading models enabled by μTune. We examine synchronous or asynchronous RPCs, in-line or dispatched RPC handlers, and interrupt- or poll-based network reception. We also vary thread pool sizes dedicated to various purposes (network polling, RPC handling, response execution). These design axes yield a rich space of microservice architectures that interact with the underlying OS and hardware in starkly varied ways. These threading models often have surprising OS and hardware performance effects, including cache locality and pollution, scheduling overheads, and lock contention.

We study μTune in the context of four full OLDI services adopted from μSuite [134]. Each service comprises sub-ms microservices that operate on large data sets. We focus our study on mid-tier microservers: widely-used [50] microservices that accept service-specific RPC queries, fan them out to leaf microservers that perform relevant computations on their respective data shards, and then return results to be integrated by the mid-tier microserver, as illustrated in Fig. 1.

Figure 1: A typical OLDI application fan-out. (A front-end microserver forwards a query to a mid-tier microserver. Mid-tier request path: process the query, then launch clients to leaf microservers 1..N. Mid-tier response path: merge the leaves' intermediate responses to form the final response. Front-end response path: response presentation.)

The mid-tier microserver is a particularly interesting object of study since (1) it acts as both an RPC client and an RPC server, (2) it must manage fan-out of a single incoming query to many leaf microservers, and (3) its computation typically takes tens of microseconds, about as long as OS, networking, and RPC overheads.

We investigate threading models for mid-tier microservices. Our results show that the best threading model depends critically on the offered load. For example, at low loads, models that poll for network traffic perform best, as they avoid expensive OS thread wakeups. Conversely, at high loads, models that separate network polling from RPC execution enable higher service capacity, and blocking outperforms polling for incoming network traffic, as it avoids wasting precious CPU on fruitless poll loops.

We find that the relationship between optimal threading model and service load is complex—one could not expect a developer to pick the best threading model a priori. So, we build an intelligent system that uses offline profiling to automatically adapt to time-varying service load. μTune's second feature is an adaptation system that determines load via event-based load monitoring and tunes both the threading model (polling vs. blocking network reception; inline vs. dispatched RPC execution) and thread pool sizes in response to load changes. μTune improves tail latency by up to 1.9× over static peak load-sustaining threading models and state-of-the-art adaptation techniques, with < 5% mean latency and instruction overhead. Hence, μTune can be used to dynamically curtail the sub-ms–scale OS/network overheads that dominate in modern microservices.

In summary, we contribute:

• A taxonomy of threading models: A structured understanding of microservice threading models and their implications on performance.

• μTune's framework¹ for developing microservices, which supports a wide variety of threading models.

• μTune's load adaptation system for tuning threading models and thread pools under varying loads.

• A detailed performance study of OLDI services' key tier built with μTune: the mid-tier microserver.

2 Motivation

We motivate the need for a threading taxonomy and adaptation systems that respond rapidly to wide-ranging loads. Many prior works have studied leaf servers [63, 107, 108, 123, 142, 143], as they are typically most numerous, making them cost-critical. Mid-tier servers [68, 98], which manage both incoming and outgoing RPCs to many clients and leaves, perhaps face greater tail latency optimization challenges, but have not been similarly scrutinized. Their network fan-out multiplies underlying software stack interactions. Hence, performance and scalability depend critically on mid-tier threading model design.

Expert developers extensively tune critical OLDI services via trial-and-error or experience-based intuition [84]. Few services can afford such effort; for the rest, we must appeal to software frameworks and automatic adaptation to improve performance. μTune aims to empower small teams to develop performant mid-tier microservices that meet latency goals without enormous tuning efforts.

The need for a threading model taxonomy. We develop a structured understanding of rational design options for architecting microservices' OS/network interactions in the form of a taxonomy of threading models.
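To make the two load-sensitive axes above concrete, the following minimal Python sketch (illustrative only; it is not μTune's gRPC-based implementation, and all names are hypothetical) contrasts poll- vs. block-based network reception and inline vs. dispatched RPC execution, with a queue standing in for a socket:

```python
import queue
import threading

def handle_rpc(req):
    # Stand-in for the mid-tier's real work (fan-out to leaves, merge results).
    return req * 2

def receive(sock, polling):
    # Poll- vs. block-based network reception; 'sock' is a queue standing in
    # for a non-blocking socket.
    if polling:
        while True:  # spin: no OS thread wakeup, but burns CPU when idle
            try:
                return sock.get_nowait()
            except queue.Empty:
                pass
    return sock.get()  # block: the OS wakes this thread when data arrives

def serve(sock, results, n_reqs, polling, dispatch, n_workers=2):
    # Inline: the receiving thread runs each handler itself (no hand-off,
    # but handlers serialize). Dispatch: requests are handed to a worker
    # pool, gaining concurrency at the cost of a queue hand-off.
    if dispatch:
        work = queue.Queue()
        def worker():
            while True:
                req = work.get()
                if req is None:
                    return
                results.append(handle_rpc(req))
        pool = [threading.Thread(target=worker) for _ in range(n_workers)]
        for t in pool:
            t.start()
    for _ in range(n_reqs):
        req = receive(sock, polling)
        if dispatch:
            work.put(req)
        else:
            results.append(handle_rpc(req))
    if dispatch:
        for _ in pool:
            work.put(None)  # poison pills to stop the workers
        for t in pool:
            t.join()
```

The four combinations of `polling` and `dispatch` correspond to the trade-off discussed above: polling plus inline execution minimizes wakeup and hand-off latency at low load, while blocking plus dispatch conserves CPU and scales capacity at high load.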
¹Available at https://github.com/wenischlab/MicroTune

[Figure: 99th percentile tail latency (ms) vs. load (10–10,000 queries per second) for the block-based and poll-based threading models, marking an inflection point, a 1.35× latency gap, and saturation.]

μTune's adaptation system adapts to load by choosing optimal threading models and thread pool sizes dynamically to reduce tail latency. μTune aims to allow a microservice to be built once and be scalable across wide-ranging loads. Many OLDI services experience drastic diurnal load variations [79]. Others may face "flash crowds" that cause sudden load spikes (e.g., intense traffic after a major news event). New OLDI services may encounter explosive customer growth that surpasses capacity planning (e.g., the meteoric launch of Pokemon Go [31]). Supporting load scalability over many orders of magnitude in a single framework facilitates rapid scale-up of a popular new service.
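Such an adaptation loop can be sketched as follows, assuming a hypothetical offline-profiled policy table: an event-based monitor estimates queries per second from request arrivals over a short window and selects the profiled threading configuration for that load. The thresholds, configuration names, and window length here are illustrative assumptions, not μTune's tuned values:

```python
import time

# Hypothetical offline-profiled policy table: ordered
# (max_load_qps, (reception, execution, pool_size)) entries.
# Below the inflection point polling wins; past it, blocking plus a
# dispatched worker pool sustains higher capacity.
PROFILE = [
    (500, ("poll", "inline", 1)),
    (5000, ("poll", "dispatch", 4)),
    (float("inf"), ("block", "dispatch", 16)),
]

def pick_model(qps):
    # Choose the first profiled configuration whose load cap covers qps.
    return next(model for cap, model in PROFILE if qps <= cap)

class LoadAdapter:
    """Event-based load monitor: count request arrivals, estimate offered
    load over a short window, and switch threading configurations."""

    def __init__(self, window_s=0.1):
        self.window_s = window_s
        self.count = 0
        self.window_start = time.monotonic()
        self.model = pick_model(0)  # start in the low-load configuration

    def on_request(self):
        # Called on every request arrival (the "event"); cheap by design.
        self.count += 1
        now = time.monotonic()
        elapsed = now - self.window_start
        if elapsed >= self.window_s:
            self.model = pick_model(self.count / elapsed)
            self.count = 0
            self.window_start = now
        return self.model
```

A real system would additionally hide the cost of switching models (e.g., by pre-creating thread pools), but the table-lookup structure conveys how offline profiling drives the online decision.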