Out-of-Core GPU Gradient Boosting

Rong Ou
NVIDIA
Santa Clara, CA, USA
[email protected]

arXiv:2005.09148v1 [cs.LG] 19 May 2020

ABSTRACT

GPU-based algorithms have greatly accelerated many machine learning methods; however, GPU memory is typically smaller than main memory, limiting the size of training data. In this paper, we describe an out-of-core GPU gradient boosting algorithm implemented in the XGBoost library. We show that much larger datasets can fit on a given GPU, without degrading model accuracy or training time. To the best of our knowledge, this is the first out-of-core GPU implementation of gradient boosting. Similar approaches can be applied to other machine learning algorithms.

CCS CONCEPTS

• Computing methodologies → Machine learning; Graphics processors; • Information systems → Hierarchical storage management.

KEYWORDS

GPU, out-of-core algorithms, gradient boosting, machine learning

1 INTRODUCTION

Gradient boosting [7] is a popular machine learning method for supervised learning tasks, such as classification, regression, and ranking. A prediction model is built sequentially out of an ensemble of weak prediction models, typically decision trees. With bigger datasets and deeper trees, training time can become substantial.

Graphics Processing Units (GPUs), originally designed to speed up the rendering of display images, have proven to be powerful accelerators for many parallel computing tasks, including machine learning. GPU-based implementations [4, 6, 15] exist for several open-source gradient boosting libraries [3, 10, 14] and significantly lower the training time.

Because GPU memory has higher bandwidth and lower latency, it tends to cost more and is typically of smaller size than main memory. For example, on Amazon Web Services (AWS), a p3.2xlarge instance has 1 NVIDIA Tesla V100 GPU with 16 GiB of memory, and 61 GiB of main memory. On Google Cloud Platform (GCP), a similar instance can have as much as 78 GiB of main memory. Training with large datasets can therefore cause GPU out-of-memory errors even when there is plenty of main memory available.

XGBoost, a widely used gradient boosting library, has experimental support for external memory [5], which allows training on datasets that do not fit in main memory.¹ Building on top of this feature, we designed and implemented out-of-core GPU algorithms that extend XGBoost external memory support to GPUs. This is challenging since GPUs are typically connected to the rest of the computer system through the PCI Express (PCIe) bus, which has lower bandwidth and higher latency than the main memory bus. A naive approach that constantly swaps data in and out of GPU memory would cause too much slowdown, negating the performance gain from GPUs.

By carefully structuring the data access patterns, and leveraging gradient-based sampling to reduce the working memory size, we were able to significantly increase the size of training data accommodated by a given GPU, with minimal impact on model accuracy and training time.

¹ In this paper, "out-of-core" and "external memory" are used interchangeably.

2 BACKGROUND

In this section we review the gradient boosting algorithm as implemented by XGBoost, its GPU variant, and the previous CPU-only external memory support. We also describe the sampling approaches used to reduce memory footprint.

2.1 Gradient Boosting

Given a dataset with n samples \{(x_i, y_i)\}_{i=1}^{n}, where x_i \in \mathbb{R}^m is a vector of m input features and y_i \in \mathbb{R} is the label, a decision tree model predicts the label:

    \hat{y}_i = F(x_i) = \sum_{k=1}^{K} f_k(x_i),    (1)

where f_k \in \mathcal{F}, the space of regression trees, and K is the number of trees. To learn a model, we minimize the following regularized objective:

    L(F) = \sum_i l(\hat{y}_i, y_i) + \sum_k \Omega(f_k),    (2)

    where \Omega(f) = \gamma T + \frac{1}{2} \lambda \|w\|^2.    (3)

Here l is a differentiable loss function, and \Omega is the regularization term that penalizes the number of leaves in the tree, T, and the leaf weights, w, controlled by two hyperparameters \gamma and \lambda.

The model is trained sequentially. Let \hat{y}_i^{(t)} be the prediction at the t-th iteration; we need to find the tree f_t that minimizes:

    L^{(t)} = \sum_{i=1}^{n} l(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)) + \Omega(f_t).    (4)

The quadratic Taylor expansion is:

    L^{(t)} \simeq \sum_{i=1}^{n} \left[ l(y_i, \hat{y}_i^{(t-1)}) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t),    (5)

where g_i and h_i are the first and second order gradients of the loss function with respect to \hat{y}^{(t-1)}. For a given tree structure q(x), let I_j = \{i \mid q(x_i) = j\} be the set of samples that fall into leaf j. The optimal weight w_j^* of leaf j can be computed as:

    w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda},    (6)

and the corresponding optimal loss value is:

    \tilde{L}^{(t)}(q) = -\frac{1}{2} \sum_{j=1}^{T} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T.    (7)

When constructing an individual tree, we start from a single leaf and greedily add branches to the tree. Let I_L and I_R be the sets of samples that fall into the left and right nodes after a split; the loss reduction for a split is then:

    L_{split} = \frac{1}{2} \left[ \frac{\left( \sum_{i \in I_L} g_i \right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left( \sum_{i \in I_R} g_i \right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left( \sum_{i \in I} g_i \right)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma,    (8)

where I = I_L \cup I_R.
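To make equations (6)–(8) concrete, here is a minimal NumPy sketch that computes the optimal leaf weight and the gain of a candidate split from per-sample gradient statistics. The function and variable names are illustrative, not XGBoost's actual implementation.

```python
import numpy as np

def leaf_weight(g, h, lam=1.0):
    """Optimal leaf weight, eq. (6): w* = -sum(g) / (sum(h) + lambda)."""
    return -g.sum() / (h.sum() + lam)

def leaf_score(g, h, lam=1.0):
    """One leaf's term in eq. (7): (sum g)^2 / (sum h + lambda)."""
    return g.sum() ** 2 / (h.sum() + lam)

def split_gain(g, h, left_mask, lam=1.0, gamma=0.0):
    """Loss reduction of a candidate split, eq. (8)."""
    left = leaf_score(g[left_mask], h[left_mask], lam)
    right = leaf_score(g[~left_mask], h[~left_mask], lam)
    parent = leaf_score(g, h, lam)
    return 0.5 * (left + right - parent) - gamma

# Example with squared-error loss, where g_i = yhat_i - y_i and h_i = 1:
y = np.array([1.0, 0.0, 1.0, 1.0])
yhat = np.zeros_like(y)                 # predictions from the previous iteration
g, h = yhat - y, np.ones_like(y)
left = np.array([False, True, False, False])
print(leaf_weight(g[~left], h[~left]))  # weight of the right leaf: 0.75
print(split_gain(g, h, left))           # positive gain: 0.225
```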
2.2 GPU Tree Construction

The GPU tree construction algorithm in XGBoost [11, 12] relies on a two-step process. First, in a preprocessing step, each input feature is divided into quantiles and put into bins (max_bin defaults to 256). The bin numbers are then compressed into ELLPACK format, greatly reducing the size of the training data. This step is time consuming, so it should only be done once at the beginning of training.

In the second step, trees are built as shown in Algorithm 1. Note that this is a simplified version for a single GPU; in a distributed environment with multiple GPUs, the gradient histograms need to be summed across all GPUs using AllReduce.

Algorithm 1: GPU Tree Construction

    Input: X: training examples
    Input: g: gradient pairs for training examples
    Output: tree: set of output nodes

    tree ← {}
    queue ← InitRoot()
    while queue is not empty do
        entry ← queue.pop()
        tree.insert(entry)
        // Sort samples into leaf nodes
        RepartitionInstances(entry, X)
        // Build gradient histograms
        BuildHistograms(entry, X, g)
        // Find the optimal split for children
        left_entry ← EvaluateSplit(entry.left_histogram)
        right_entry ← EvaluateSplit(entry.right_histogram)
        queue.push(left_entry)
        queue.push(right_entry)

2.3 XGBoost Out-of-Core Computation

XGBoost has experimental support for out-of-core computation [3, 5]. When enabled, training is also done in a two-step process. First, in the preprocessing step, input data is read and parsed into an internal format, which can be Compressed Sparse Row (CSR) or Compressed Sparse Column (CSC), and written to disk in pages. Second, during tree construction, the data pages are streamed from disk via a multi-threaded pre-fetcher.

2.4 Sampling

In its default setting, gradient boosting is a batch algorithm: the whole dataset needs to be read and processed to construct each tree. Different sampling approaches have been proposed, mainly as an additional regularization factor to get better generalization performance, but they can also reduce the computation needed, leading to faster training time.

2.4.1 Stochastic Gradient Boosting (SGB). Shortly after introducing gradient boosting, Friedman [8] proposed an improvement: at each iteration a subsample of the training data is drawn at random without replacement from the full training dataset. This randomly selected subsample is then used in place of the full sample to construct the decision tree and compute the model update for the current iteration. It was shown that this sampling approach improves model accuracy. However, the sampling ratio f needs to stay relatively high, 0.5 ≤ f ≤ 0.8, for this improvement to occur.

2.4.2 Gradient-based One-Side Sampling (GOSS). Ke et al. proposed a sampling strategy weighted by the absolute value of the gradients [10]. At the beginning of each iteration, the top a × 100% of training instances with the largest gradients are selected; then, from the rest of the data, a random sample of b × 100% instances is drawn. The sampled instances are scaled by (1 − a)/b to keep the gradient statistics unbiased. Compared to SGB, GOSS can sample more aggressively, using only 10%–20% of the data to achieve similar model accuracy.

2.4.3 Minimal Variance Sampling (MVS). Ibragimov et al. proposed another gradient-based sampling approach that aims to minimize the variance of the model. At each iteration the whole dataset is sampled with probability proportional to the regularized absolute value of the gradients:

    \hat{g}_i = \sqrt{g_i^2 + \lambda h_i^2},    (9)

where g_i and h_i are the first and second order gradients, and \lambda can be either a hyperparameter or estimated from the squared mean of the initial leaf value. MVS was shown to perform better than both SGB and GOSS, with a sampling rate as low as 10%.
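The three schemes differ only in how each instance is weighted for selection. The sketch below contrasts them on a shared gradient array; the function names are hypothetical, and the MVS variant is simplified to clipped proportional probabilities, whereas the published algorithm solves for an exact threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
g = rng.normal(size=n)               # first-order gradients
h = np.ones_like(g)                  # second-order gradients

def sgb_indices(n, f=0.5):
    """SGB: uniform subsample without replacement at ratio f."""
    return rng.choice(n, size=int(f * n), replace=False)

def goss_indices(g, a=0.1, b=0.1):
    """GOSS: keep the top a*100% by |g|, sample b*100% of the rest, and
    up-weight the random part by (1 - a) / b to keep statistics unbiased."""
    order = np.argsort(-np.abs(g))
    top, rest = order[: int(a * len(g))], order[int(a * len(g)):]
    sampled = rng.choice(rest, size=int(b * len(g)), replace=False)
    idx = np.concatenate([top, sampled])
    weights = np.concatenate(
        [np.ones(len(top)), np.full(len(sampled), (1 - a) / b)])
    return idx, weights

def mvs_probabilities(g, h, lam=0.1, rate=0.1):
    """MVS (simplified): inclusion probability proportional to eq. (9),
    clipped so that instances with very large gradients are always kept."""
    ghat = np.sqrt(g ** 2 + lam * h ** 2)
    return np.minimum(ghat / ghat.sum() * rate * len(g), 1.0)

keep = rng.random(n) < mvs_probabilities(g, h)   # Poisson-style draw
print(keep.sum(), "of", n, "instances kept")
```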
3 METHOD

In this section we describe the design of out-of-core GPU-based gradient boosting. Since XGBoost is widely used in production, we try, as much as possible, to preserve the existing behavior when adding new features. In external memory mode, we assume the training data has already been parsed and written to disk in CSR pages.

3.1 Incremental Quantile Generation

As stated above, GPU tree construction in XGBoost is a two-step process. In the preprocessing step, input features are converted into a quantile representation.
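As a rough illustration of what this quantile representation looks like, the sketch below bins a single feature into at most max_bin quantile buckets with plain NumPy. It is a simplified stand-in with hypothetical names: XGBoost itself uses a weighted quantile sketch and handles missing values and sparsity separately.

```python
import numpy as np

def quantile_bins(feature, max_bin=256):
    """Split a feature's values into at most max_bin quantile buckets and
    return (cut_points, bin_indices). Simplified: no sample weights, no
    missing-value handling."""
    quantiles = np.linspace(0, 1, max_bin + 1)[1:-1]   # interior quantiles
    cuts = np.unique(np.quantile(feature, quantiles))  # sorted, deduplicated
    bins = np.searchsorted(cuts, feature, side='left') # value -> bin id
    return cuts, bins

feature = np.random.default_rng(0).normal(size=10_000)
cuts, bins = quantile_bins(feature, max_bin=256)
# Each value is now a small integer bin id; with max_bin <= 256 a bin id
# fits in a single byte once compressed, which keeps the ELLPACK pages small.
print(bins.min(), bins.max(), len(cuts))
```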

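Finally, for context, the features discussed in this excerpt surface in XGBoost's Python API roughly as follows. This is a sketch assuming an XGBoost release from around the time of the paper (1.1 or later); the external-memory URI syntax and parameter set have evolved in later versions, and the file path is a placeholder.

```python
import xgboost as xgb

# The '#dtrain.cache' suffix asks XGBoost to run in external-memory mode,
# writing page files under the cache prefix and streaming them during
# training. (Placeholder path; URI syntax varies across versions.)
dtrain = xgb.DMatrix('train.libsvm#dtrain.cache')

params = {
    'objective': 'binary:logistic',
    'tree_method': 'gpu_hist',            # GPU histogram tree construction
    'max_bin': 256,                       # quantile bins in the ELLPACK pages
    'sampling_method': 'gradient_based',  # MVS-style gradient-based sampling
    'subsample': 0.2,                     # low rates are viable with this sampling
}
booster = xgb.train(params, dtrain, num_boost_round=100)
```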