Lowering the Latency of Data Processing Pipelines Through FPGA Based Hardware Acceleration

Muhsen Owaida, Gustavo Alonso
Systems Group, Dept. of Computer Science, ETH Zurich
Zurich, Switzerland

Laura Fogliarini, Anthony Hock-Koon, Pierre-Etienne Melet
Amadeus
Sophia-Antipolis, France

ABSTRACT

Web search engines often involve a complex pipeline of processing stages including computing, scoring, and ranking potential answers plus returning the sorted results. The latency of such pipelines can be improved by minimizing data movement, making stages faster, and merging stages. The throughput is determined by the stage with the smallest capacity, and it can be improved by allocating enough parallel resources to each stage. In this paper we explore the possibility of employing hardware acceleration (an FPGA) as a way to improve the overall performance when computing answers to search queries. With a real use case as a baseline and motivation, we focus on accelerating the scoring function implemented as a decision tree ensemble, a common approach to scoring and classification in search systems. Our solution uses a novel decision tree ensemble implementation on an FPGA to: 1) increase the number of entries that can be scored per unit of time, and 2) provide a compact implementation that can be combined with previous stages. The resulting system, tested on Amazon F1 instances, significantly improves the quality of the search results and improves performance by two orders of magnitude over the existing CPU-based solution.

PVLDB Reference Format:
Muhsen Owaida, Gustavo Alonso, Laura Fogliarini, Anthony Hock-Koon, and Pierre-Etienne Melet. Lowering the Latency of Data Processing Pipelines Through FPGA based Hardware Acceleration. PVLDB, 13(1): xxxx-yyyy, 2019.
DOI: https://doi.org/10.14778/3357377.3357383

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 13, No. 1. ISSN 2150-8097.
1. INTRODUCTION

Response time is a critical metric in interactive systems such as search engines, payment platforms, or recommender systems (e.g., a flight or hotel reservation system), where the latency in responding to a search query determines the quality of the user experience [8, 10, 28, 1]. Each of these systems has to complete a different task: search engines compute potential matches to the search query, recommender systems identify the user's profile and activity to make suggestions as the user is browsing, while online payment systems detect fraudulent transactions before accepting payments. Nevertheless, the architecture used, whether on-premise installations or in the cloud, is similar in all cases: it involves a variety of databases and a pipeline of data processing stages where each stage runs on a cluster of machines [18].

The most common performance metric for interactive systems is a latency upper bound given a throughput lower bound (e.g., processing at least 1,000 requests per second with a 95th-percentile response time of 1 second). To meet throughput constraints (as well as elasticity and fault tolerance), query processing is done in parallel across several machines. To meet latency requirements, the response time of the data processing stages must be predictable, which often implies limiting their scope or the amount of data processed to make sure the resulting response time does not exceed the available budget for the stage [58]. In practice, there is a latency vs. throughput trade-off: separating stages makes it possible to devote parallel resources to each one of them (improving throughput), whereas merging stages into a single step tends to improve latency by cutting down on network transfers and data movement, but reduces the capacity of each stage due to the higher processing cost per query.

In the past, these pipelines involved mostly data-intensive operations (e.g., queries over databases). Nowadays, there is a widespread use of machine-learning classifiers and predictors, which are computationally heavier, as they have the ability to improve interactive systems in many ways. Thus, existing recommender systems [6, 55] and search engines [12] such as those used by Amazon, Netflix, LinkedIn, or YouTube, to mention a few, all employ machine learning within their data processing pipelines. Similarly, the payment systems associated with the same platforms also rely on machine learning for, e.g., detecting bots or fraudulent transactions [62, 4].

Using a real use case as motivation and baseline, in this paper we explore the interplay of data processing and machine learning in search-engine pipelines. We give examples of how the latency constraints restrict the amount of data that can be processed, thereby reducing the quality of the results. We also show the problems associated with improving only query throughput by scaling out the search engine. We then provide a novel solution to flight route scoring through an FPGA-based implementation of gradient-boosted decision tree ensembles [35, 13] that allows processing a larger number of potential results and enables the merging of stages, thereby reducing the overall latency without compromising the system's throughput.

The contributions of the paper are as follows:

(1) We discuss with a concrete example the latency vs. throughput trade-off common to many data processing pipelines. This is a complex system issue since, as the paper illustrates, improving query throughput is not enough to improve the overall system performance due to the very tight latency requirements and the query volume faced by modern domain search engines.

(2) We identify FPGA-based accelerators as the most suitable processing devices for tackling the trade-off and develop a novel implementation of inference over decision tree ensembles using an FPGA. The design generalizes the state of the art to more complex predicate comparisons on the decision nodes and to wider data. The resulting system is two orders of magnitude faster than existing CPU-based systems.

(3) We evaluate the resulting system on a variety of platforms (PCIe-connected FPGAs [33, 46], cache-coherent FPGAs [37], and Amazon F1 instances [7]) to explore deployments over the typical configurations made available by cloud providers. We also discuss in detail the effects of memory, computing, and I/O bandwidth on the resulting system. This analysis should help to project the results obtained into future systems, as the hardware involved is rapidly changing and gaining in both capabilities and capacity.

(4) We break down the cost of the proposed design for the entire use case, including the overheads of starting the computation, data transfers, and memory capacity. Based on this information, we analyze in depth the advantages as well as the potential limitations of the solution we propose.

2. BACKGROUND

2.1 Decision Trees (DT)

Decision trees have been used in the past for, e.g., data mining [21, 22], and are nowadays widely used for a range of machine-learning tasks such as classification (results are distinct labels) and regression (results are values). Implementations range from libraries that are part of ML packages (H2O [16], SciKit [44], or XGBoost [13]) and large-scale distributed implementations for cloud platforms [43], to database operators such as those available in Oracle Data Mining [38]. While here we focus on data pipelines for search engines, our solution can be applied as-is in the context of databases or data warehouses that use FPGA acceleration [54, 3, 32, 26, 59].

[Figure 1: An Example of a Decision Tree. Nodes 0-2 are decision nodes and nodes 3-6 are leaf nodes.]

Decision trees have a simple, flowchart-like structure that helps with interpretability, and they do not require parameter setting as many other ML methods do (thus reducing the effort in data preparation). They can handle categorical and numerical data as well as multiple outputs. Their two most important limitations are variance (the tree changes substantially with small changes in the training set) and locking on local optima as a result of the greedy search.
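To make the traversal concrete, the following is a minimal C++ sketch of inference over a single binary decision tree like the one in Figure 1 (C++ being the language of the production re-implementation discussed in Section 2.2). The flat-array Node layout, the field names, and the plain less-than comparison are illustrative assumptions, not the paper's actual data structures; in particular, the paper's FPGA design also supports more complex predicates on the decision nodes (e.g., over categorical features).

    #include <vector>

    // One node of a binary decision tree, stored in a flat array
    // (node 0 is the root, as in Figure 1). Layout is a sketch, not
    // the paper's actual representation.
    struct Node {
        int   feature;   // index of the feature tested here; -1 marks a leaf
        float threshold; // split value for the comparison
        int   left;      // child taken when x[feature] < threshold
        int   right;     // child taken otherwise
        float value;     // prediction stored at a leaf (unused in decision nodes)
    };

    // Walk from the root to a leaf and return the leaf's value.
    float predict_tree(const std::vector<Node>& tree, const std::vector<float>& x) {
        int i = 0;
        while (tree[i].feature >= 0) {          // decision node: keep descending
            const Node& n = tree[i];
            i = (x[n.feature] < n.threshold) ? n.left : n.right;
        }
        return tree[i].value;                   // leaf node: emit its output
    }

Note that each prediction touches only one root-to-leaf path, so the cost per input is bounded by the tree depth rather than the tree size.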
2.2 Gradient Boosted Decision Trees (GBDT)

The most common form of decision tree model used nowadays is Gradient-Boosted Decision Trees (GBDT), a supervised decision tree ensemble method [35]. The use of ensembles (many trees) addresses the problem of local optima by generating multiple trees with random samples of the input. The use of gradient boosting addresses the problem of variance. Stated simply, GBDT builds a strong learner out of an ensemble of weak learners. A model in GBDT contains K decision trees. To run inference, it first runs inference over all K trees independently, and then combines the inference results. GBDT tends to favor shallow trees without many levels but requires many trees (hundreds to thousands).

In this paper, we use a GBDT model generated with H2O, an open-source platform for big data analytics [16], as it is the system used in production for our use case. We use H2O for training and generating the GBDT model. However, when deploying the model in production, H2O GBDT inference was re-implemented in C++ as the original version of H2O uses Java (the nature of the changes made is detailed later in the paper). Figure 1 shows an example of a binary decision tree as represented in H2O Gradient Boosted Machines (GBM).
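Building on the hypothetical single-tree sketch above, ensemble inference then amounts to evaluating the K trees independently and combining their outputs. The additive combination with an initial bias shown below is a common choice for regression-style GBM models and is an assumption of this sketch rather than a description of H2O's exact scoring code; the key point is that the per-tree evaluations are independent, which is what an FPGA implementation can exploit.

    #include <vector>

    // Score one input with a GBDT ensemble: evaluate all K trees independently
    // and sum their outputs on top of an initial bias (e.g., the training mean).
    // Reuses the Node struct and predict_tree() from the previous sketch;
    // the additive combination is an assumption of this sketch.
    float predict_gbdt(const std::vector<std::vector<Node>>& model,
                       const std::vector<float>& x,
                       float bias) {
        float score = bias;
        for (const std::vector<Node>& tree : model)
            score += predict_tree(tree, x);  // per-tree work is independent,
                                             // so it parallelizes naturally
        return score;
    }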
