The Design and Implementation of Low-Latency Prediction Serving Systems
Daniel Crankshaw
Electrical Engineering and Computer Sciences
University of California at Berkeley

Technical Report No. UCB/EECS-2019-171
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-171.html
December 16, 2019

Copyright © 2019, by the author(s). All rights reserved.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission.

Acknowledgement

To my parents

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley.

Committee in charge:
Professor Joseph Gonzalez, Chair
Professor Ion Stoica
Professor Fernando Perez

Fall 2019

Copyright 2019 by Daniel Crankshaw

Abstract

Machine learning is being deployed in a growing number of applications which demand real-time, accurate, and cost-efficient predictions under heavy query load. These applications employ a variety of machine learning frameworks and models, often composing several models within the same application. However, most machine learning frameworks and systems are optimized for model training and not deployment. In this thesis, I discuss three prediction serving systems designed to meet the needs of modern interactive machine learning applications. The key idea in this work is to utilize a decoupled, layered design that interposes systems on top of training frameworks to build low-latency, scalable serving systems. Velox introduced this decoupled architecture to enable fast online learning and model personalization in response to feedback. Clipper generalized this system architecture to be framework-agnostic and introduced a set of optimizations to reduce and bound prediction latency and improve prediction throughput, accuracy, and robustness without modifying the underlying machine learning frameworks. And InferLine provisions and manages the individual stages of prediction pipelines to minimize cost while meeting end-to-end tail latency constraints.

Contents

1 Introduction
  1.1 The Machine Learning Lifecycle
  1.2 Requirements For Prediction Serving Systems
  1.3 Trends in Modern Machine Learning
  1.4 Decoupling the Serving System
  1.5 Summary and Contributions

2 The Missing Piece in Complex Analytics: Low Latency, Scalable Model Management and Serving with Velox
  2.1 Introduction
  2.2 Background and Motivation
  2.3 System Architecture
  2.4 Model Management
  2.5 Online Prediction Service
  2.6 Adding New Models
  2.7 Related Work
  2.8 Conclusions and Future Work

3 Clipper: A Low-Latency Online Prediction Serving System
  3.1 Introduction
  3.2 Applications and Challenges
  3.3 System Architecture
  3.4 Model Abstraction Layer
  3.5 Model Selection Layer
  3.6 System Comparison
  3.7 Limitations
  3.8 Related Work
  3.9 Conclusion

4 InferLine: ML Prediction Pipeline Provisioning and Management for Tight Latency Objectives
  4.1 Introduction
  4.2 Background and Motivation
  4.3 System Design and Architecture
  4.4 Low-Frequency Planning
  4.5 High-Frequency Tuning
  4.6 Experimental Setup
  4.7 Experimental Evaluation
  4.8 Related Work
  4.9 Limitations and Generality
  4.10 Conclusion

5 Discussion and Future Directions
  5.1 Future Directions
  5.2 Challenges and Lessons Learned

Bibliography

List of Figures

1.1 A simplified view of the machine learning model lifecycle [60]. This illustrates the three stages of a machine learning model. The first two stages encompass model development, where a data scientist first develops the model and then trains it for production deployment on a large dataset. Once the model is trained, it can be deployed for inference and integrated into a target application. This latter stage typically involves deploying the model to an ML serving platform for scalable and efficient inference, and an application developer integrating with this ML platform to use the model to drive an application.

2.1 Velox Machine Learning Lifecycle. Velox manages the ML model lifecycle, from model training on raw data (the focus of many existing "Big Data" analytics training systems) to the actual actions and predictions that the models inform. Actions produce additional observations that, in turn, lead to further model training. Velox facilitates this cycle by providing functionality to support the missing components in existing analytics stacks.

2.2 Velox System Architecture. Two core components, the Model Predictor and Model Manager, orchestrate low-latency and large-scale serving of data products computed (originally, in batch) by Spark and stored in a lightweight storage layer (e.g., Tachyon).

2.3 Update latency vs. model complexity. Average time to perform an online update to a user model as a function of the number of factors in the model. The results are averaged over 5000 updates of randomly selected users and items from the MovieLens 10M rating dataset. Error bars represent 95% confidence intervals.

2.4 Prediction latency vs. model complexity. Single-node top-K prediction latency for both cached and non-cached predictions on the MovieLens 10M rating dataset, varying the size of the input set and the dimension (d, or factor). Results are averaged over 10,000 trials.

3.1 The Clipper Architecture.

3.2 Machine Learning Lifecycle.

3.3 Model Container Latency Profiles. We measured the batching latency profile of several models trained on the MNIST benchmark dataset. The models were trained using Scikit-Learn (SKLearn) or Spark and were chosen to represent several of the most widely used types of models. The No-Op Container measures the system overhead of the model containers and RPC system.

3.4 Comparison of Dynamic Batching Strategies.

3.5 Throughput Increase from Delayed Batching.

3.6 Scaling the Model Abstraction Layer Across a GPU Cluster. The solid lines refer to aggregate throughput of all the model replicas and the dashed lines refer to the mean per-replica throughput.
3.7 Ensemble Prediction Accuracy. The linear ensembles are composed of five computer vision models (Table 3.2) applied to the CIFAR and ImageNet benchmarks. The 4-agree and 5-agree groups correspond to ensemble predictions in which the queries have been separated by the ensemble prediction confidence (four or five models agree), and the width of each bar defines the proportion of examples in that category.

3.8 Behavior of Exp3 and Exp4 Under Model Failure. After 5K queries the performance of the lowest-error model is severely degraded, and after 10K queries performance recovers. Exp3 and Exp4 quickly compensate for the failure and achieve lower error than any static model selection.

3.9 Increase in stragglers from bigger ensembles. The (a) latency, (b) percentage of missing predictions, and (c) prediction accuracy when using the ensemble model selection policy on SK-Learn Random Forest models applied to MNIST. As the size of an ensemble grows, the prediction accuracy increases, but the latency cost of blocking until all predictions are available grows substantially. Instead, Clipper enforces bounded-latency predictions and transforms the latency cost of waiting for stragglers into a reduction in accuracy from using a smaller ensemble.

3.10 Personalized Model Selection. Accuracy of the ensemble selection policy on the speech recognition benchmark.

3.11 TensorFlow Serving Comparison. Comparison of peak throughput and latency (p99 latencies shown in error bars) on three TensorFlow models of varying inference cost. TF-C++ uses TensorFlow's C++ API and TF-Python the Python API.

4.1 InferLine System Overview.

4.2 Example Pipelines. We evaluate InferLine on four prediction pipelines that span a wide range of models, control flow, and input characteristics.

4.3 Example Model Profiles on K80 GPU. The preprocess model has no internal parallelism and cannot utilize a GPU; thus, it sees no benefit from batching. Res152 (image classification) and TF-NMT (text translation model) benefit from batching on a GPU, but at the cost of increased latency.

4.4 Arrival and Service Curves. The arrival curve captures the maximum number of queries to be expected in any interval of time x seconds wide. The service curve plots the expected number of queries processed in an interval of time x seconds wide.
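To make the arrival-curve notion in the Figure 4.4 caption concrete, the following is a minimal, illustrative Python sketch, not code from the thesis; the function name and example trace are hypothetical. It computes one point of an empirical arrival curve: the maximum number of queries observed in any window of a given width within a trace of query arrival timestamps.

    from bisect import bisect_left

    def empirical_arrival_curve(timestamps, window_s):
        """Maximum number of queries arriving in any half-open window
        [t, t + window_s) over a trace of arrival times (in seconds)."""
        ts = sorted(timestamps)
        worst = 0
        for i, start in enumerate(ts):
            # Index of the first arrival at or after start + window_s.
            end = bisect_left(ts, start + window_s)
            worst = max(worst, end - i)
        return worst

    # Hypothetical bursty trace: ~2 queries/sec on average, with a burst near t = 5s.
    trace = [0.1, 0.6, 1.1, 1.7, 2.4, 5.00, 5.05, 5.10, 5.15, 5.20, 6.8]
    for w in (0.5, 1.0, 2.0):
        print(f"max queries in any {w}s window: {empirical_arrival_curve(trace, w)}")

Evaluating this maximum over a range of window widths traces out a curve like the arrival curve in Figure 4.4; comparing it against a service curve for a provisioned pipeline indicates whether the pipeline can absorb the trace's bursts.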