Tail Latency in Node.js: Energy Efficient Turbo Boosting for Long Latency Requests in Event-Driven Web Services

Wenzhi Cui∗ (The University of Texas at Austin; Google, USA), Daniel Richins (The University of Texas at Austin, USA), Yuhao Zhu† (The University of Texas at Austin; University of Rochester, USA), Vijay Janapa Reddi† (The University of Texas at Austin; Harvard University, USA)

∗Work was done entirely at The University of Texas at Austin.
†Work was done in part at The University of Texas at Austin.

Abstract

Cloud-based Web services are shifting to the event-driven, scripting language-based programming model to achieve productivity, flexibility, and scalability. Implementations of this model, however, generally suffer from long tail latencies, which we measure using Node.js as a case study. Unlike in traditional thread-based systems, reducing long tails is difficult in event-driven systems due to their inherent asynchronous programming model. We propose a framework to identify and optimize tail latency sources in scripted event-driven Web services. We introduce profiling that allows us to gain deep insights into not only how asynchronous event-driven execution impacts application tail latency but also how the managed runtime system overhead exacerbates the tail latency issue further. Using the profiling framework, we propose an event-driven execution runtime design that orchestrates the hardware's boosting capabilities to reduce tail latency. We achieve higher tail latency reductions with lower energy overhead than prior techniques that are unaware of the underlying event-driven program execution model. The lessons we derive from Node.js apply to other event-driven services based on scripting language frameworks.

CCS Concepts: • Software and its engineering → Runtime environments; Application specific development environments; • Information systems → Web applications; • Hardware → Power estimation and optimization.

Keywords: Node.js, tail latency, event-driven, JavaScript

ACM Reference Format: Wenzhi Cui, Daniel Richins, Yuhao Zhu, and Vijay Janapa Reddi. 2019. Tail Latency in Node.js: Energy Efficient Turbo Boosting for Long Latency Requests in Event-Driven Web Services. In Proceedings of the 15th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE '19), April 14, 2019, Providence, RI, USA. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3313808.3313823

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. VEE '19, April 14, 2019, Providence, RI, USA. © 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6020-3/19/04...$15.00. https://doi.org/10.1145/3313808.3313823

1 Introduction

Cloud-based Web services are undergoing a transformation toward server-side scripting in response to its promises of increased productivity, flexibility, and scalability. Companies such as PayPal, eBay, GoDaddy, and LinkedIn have publicly announced their use of asynchronous event-driven scripting frameworks, such as Node.js, as part of the backends for their Web applications [3, 4, 7]. The use of the asynchronous event-driven execution model coupled with the managed scripting language leads to better application scalability and higher throughput, developer productivity, and code portability, which are essential for Web-scale development [41].

A key challenge facing Web-scale deployment is long tail latency, and asynchronous event-driven Web services are no exception. Long tail latency is often the single most important reason for poor user experience in large-scale deployments [15]. We use the popular Node.js framework for our study of tail latency in managed event-driven systems. Node.js is a JavaScript-based, event-driven framework for developing responsive, scalable Web applications. We find that tail latency for requests at the 99.9th percentile is about 10× longer than for those at the 50th percentile, indicating that serious tail latency issues exist in event-driven servers.

No prior work addresses the tail latency problem from the event-driven programming paradigm. Prior work addresses server-side tail latency by improving request-level parallelism in request-per-thread Web services [16, 17, 22, 43] or by relying on the implicit thread-based programming model [19, 25, 26]. Other work tends to treat the application and its underlying runtime as a black box, focusing on system-level implications or network implications [10, 15, 27, 30, 44].

Asynchronous event-driven Web services pose a unique challenge for identifying tail latency. Their event-driven nature makes it difficult to reconstruct a request's latency and identify the performance bottleneck because a request's lifetime is split into three components: I/O, event callback execution, and event queuing. The three components of all requests are interleaved in a single thread, posing a challenge in piecing together these independently operating components into a single request's latency. In contrast, in a conventional thread-per-request server, each request is assigned a unique thread, which processes each request sequentially and independently of all others. As such, queuing, event execution, and I/O time can be measured more directly [29].

Figure 1. Event-driven execution overlaps compute and I/O execution. While waiting for I/O to complete for a request, the event loop is free to process events from unrelated requests. In traditional thread-per-request servers, each request is assigned a unique thread wherein a request's queue, execution, and I/O time can be directly measured. However, queue time is implicit in event-driven servers.

Moreover, in recent years, asynchronous event-driven Web services have been increasingly developed using managed runtime frameworks. Managed runtimes suffer from several well-known, sometimes non-deterministic, overheads, such as inline caching, garbage collection, and code cache management [38]. While there are studies focused on the performance implications of managed-language runtimes in general, how tail latency is affected by the binding of event-driven execution and managed runtimes remains unknown.

In this paper, we present the first solution to mitigate tail latency in asynchronous event-driven scripting frameworks, using the production-grade Node.js event-driven scripting framework. We focus not only on reducing tail latency but on doing so at minimal energy cost, which is closely aligned with the total cost of ownership in data centers [11]. We demonstrate that only by understanding the underlying event-driven model, and leveraging its execution behavior, is it possible to achieve significant tail reductions in managed event-driven systems with little energy cost.

The insight behind our approach to mitigate tail latency in asynchronous event-driven scripting is to view the processing of a request in event-driven servers as a graph where each node corresponds to a dynamic instance of an event and each edge represents the dependency between events. We refer to this graph as the Event Dependency Graph (EDG). The critical path of an EDG corresponds to the latency of a request. By tracking vital information about each event, such as I/O time, event queue delay, and execution latency, the EDG lets us construct the critical path for request processing, thus letting us attribute tail latency to fine-grained components.

We use the EDG profiling framework to show that Node.js tail requests are bounded by CPU performance, which is largely dominated by garbage collection and dynamically generated native code. To address these tail latency sources, we first propose GC-Boost, which accelerates GC execution via frequency and voltage boosting. GC-Boost reduces tail latency by as much as 19.1% with negligible energy overheads. We also show an orthogonal technique called Queue Boost that operates by boosting event callback execution in the event queue. In combination, GC-Boost and Queue Boost outperform state-of-the-art solutions [19, 25] by achieving higher tail reduction with lower energy overhead, providing Web service operators a much wider trade-off space between tail reduction and energy overhead. GC-Boost, Queue Boost, and their combination also push the Pareto optimal frontier of tail latency towards a new level unmatched by prior work.

In summary, we make the following major contributions:

• We present the insight that understanding the underlying programming model is important to effectively pinpoint and optimize the sources of tail latency.
• We propose the Event Dependency Graph (EDG) as a general representation of request processing in event-driven, managed language-based applications, which we use to design a tail latency optimization framework.
• Using the EDG, we show that the major bottleneck of tail latency in Node.js is CPU processing, especially processing that involves the JavaScript garbage collector and the runtime handling of the event queue.
• We propose, design, and evaluate two event-based optimizations, GC-Boost and Queue Boost, that reduce tail latency efficiently with little energy overhead, both offering significant improvements over prior approaches that are agnostic to the event-driven model.

The paper is organized as follows. Section 2 provides brief background on the event-driven programming model and its advantages over the thread-per-request model. Section 3 gives an overview of the baseline Node.js setup and the tail optimization system we propose and introduces the high-level

Table 1.
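As a concrete illustration of the overlap Figure 1 depicts, the following minimal Node.js sketch (the request IDs and delays are hypothetical) shows callbacks from three simulated requests interleaving on the single event loop: every request starts before any I/O completes, and completions arrive shortest-first.

```javascript
// Minimal sketch (hypothetical request IDs and delays) of how callbacks
// from independent requests interleave on Node.js's single event loop.
const log = [];

function handleRequest(id, ioMs) {
  log.push(`${id}:start`);        // initial event callback (A, C, F in Figure 1)
  setTimeout(() => {              // simulated asynchronous I/O completion
    log.push(`${id}:io-done`);    // follow-up callback (B, D, E in Figure 1)
  }, ioMs);
}

handleRequest('req1', 30); // long I/O
handleRequest('req2', 10); // shorter I/O
handleRequest('req3', 0);  // completes almost immediately

// Once all timers fire, the log shows the interleaving.
setTimeout(() => {
  console.log(log.join(' '));
  // → req1:start req2:start req3:start req3:io-done req2:io-done req1:io-done
}, 100);
```

Note that because all three follow-up callbacks run on the same thread, attributing each request's queue time requires reconstructing which callback belongs to which request, which is exactly the difficulty the paper describes.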

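The Event Dependency Graph described above can be sketched as a small data structure. The field layout, event names, and timings below are illustrative assumptions, not the paper's implementation, but they show how tracking queue delay, execution time, and I/O time per event lets the critical path, and hence the request's latency, be computed.

```javascript
// Sketch of an Event Dependency Graph (EDG): each node is one dynamic
// event instance annotated with its queue delay, execution time, and
// I/O wait (all in ms); `next` lists the events it spawns. Names and
// numbers are illustrative, not taken from the paper's measurements.
const edg = {
  A: { queue: 1, exec: 2, io: 5, next: ['B', 'C'] }, // parse request, issue I/O
  B: { queue: 3, exec: 4, io: 0, next: ['D'] },      // fast follow-up chain
  C: { queue: 2, exec: 1, io: 8, next: ['D'] },      // slower chain, more I/O
  D: { queue: 1, exec: 2, io: 0, next: [] },         // final event: send response
};

// The request's latency is the cost of the critical (longest) path
// through its EDG; optimizing events off this path cannot shrink the tail.
function criticalPath(graph, id) {
  const n = graph[id];
  const own = n.queue + n.exec + n.io;
  if (n.next.length === 0) return own;
  return own + Math.max(...n.next.map((child) => criticalPath(graph, child)));
}

console.log(criticalPath(edg, 'A')); // → 22, via A -> C -> D
```

With per-event timing attached to each node, the same traversal can report which component (queue, execution, or I/O) dominates the critical path, which is how the EDG attributes tail latency to fine-grained components.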