A System for Massively Parallel Hyperparameter Tuning

Liam Li (1), Kevin Jamieson (2), Afshin Rostamizadeh (3), Ekaterina Gonina (3), Jonathan Ben-Tzur (4), Moritz Hardt (5), Benjamin Recht (5), Ameet Talwalkar (1,4)

(1) Carnegie Mellon University, (2) University of Washington, (3) Google Research, (4) Determined AI, (5) University of California, Berkeley. Correspondence to: Liam Li <[email protected]>.

Proceedings of the 3rd MLSys Conference, Austin, TX, USA, 2020. Copyright 2020 by the author(s). arXiv:1810.05934v5 [cs.LG], 16 Mar 2020.

ABSTRACT

Modern learning models are characterized by large hyperparameter spaces and long training times. These properties, coupled with the rise of parallel computing and the growing demand to productionize machine learning workloads, motivate the need to develop mature hyperparameter optimization functionality in distributed computing settings. We address this challenge by first introducing a simple and robust hyperparameter optimization algorithm called ASHA, which exploits parallelism and aggressive early-stopping to tackle large-scale hyperparameter optimization problems. Our extensive empirical results show that ASHA outperforms existing state-of-the-art hyperparameter optimization methods; scales linearly with the number of workers in distributed settings; and is suitable for massive parallelism, as demonstrated on a task with 500 workers. We then describe several design decisions we encountered, along with our associated solutions, when integrating ASHA in Determined AI's end-to-end production-quality machine learning system that offers hyperparameter tuning as a service.

1 INTRODUCTION

Although machine learning (ML) models have recently achieved dramatic successes in a variety of practical applications, these models are highly sensitive to internal parameters, i.e., hyperparameters. In these modern regimes, four trends motivate the need for production-quality systems that support massively parallel hyperparameter tuning:

1. High-dimensional search spaces. Models are becoming increasingly complex, as evidenced by modern neural networks with dozens of hyperparameters. For such complex models with hyperparameters that interact in unknown ways, a practitioner is forced to evaluate potentially thousands of different hyperparameter settings.

2. Increasing training times. As datasets grow larger and models become more complex, training a model has become dramatically more expensive, often taking days or weeks on specialized high-performance hardware. This trend is particularly onerous in the context of hyperparameter optimization, as a new model must be trained to evaluate each candidate hyperparameter configuration.

3. Rise of parallel computing. The combination of a growing number of hyperparameters and longer training time per model precludes evaluating configurations sequentially; we simply cannot wait years to find a suitable hyperparameter setting. Leveraging distributed computational resources presents a solution to the increasingly challenging problem of hyperparameter optimization.

4. Productionization of ML. ML is increasingly driving innovations in industries ranging from autonomous vehicles to scientific discovery to quantitative finance. As ML moves from R&D to production, ML infrastructure must mature accordingly, with hyperparameter optimization as one of the core supported workloads.

In this work, we address the problem of developing production-quality hyperparameter tuning functionality in a distributed computing setting. Support for massive parallelism is a cornerstone design criterion of such a system and thus a main focus of our work.

To this end, and motivated by the shortfalls of existing methods, we first introduce the Asynchronous Successive Halving Algorithm (ASHA), a simple and practical hyperparameter optimization method suitable for massive parallelism that exploits aggressive early stopping. Our algorithm is inspired by the Successive Halving Algorithm (SHA) (Karnin et al., 2013; Jamieson & Talwalkar, 2015), a theoretically principled early-stopping method that allocates more resources to promising configurations. ASHA is designed for what we refer to as the 'large-scale regime,' where to find a good hyperparameter setting, we must evaluate orders of magnitude more hyperparameter configurations than available parallel workers in a small multiple of the wall-clock time needed to train a single model.
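For concreteness, the listing below is a minimal sketch of the synchronous Successive Halving routine that ASHA builds on. It assumes two user-supplied callables, sample_config() and train_and_eval(config, resource), which are placeholders rather than part of any particular library, and it retrains each configuration from scratch at every rung instead of resuming from checkpoints.

    def successive_halving(sample_config, train_and_eval, n_configs=81,
                           min_resource=1, max_resource=81, reduction_factor=3):
        """Toy synchronous SHA: start all configurations with a small budget and
        repeatedly keep the best 1/reduction_factor fraction, multiplying the
        budget at each rung (retraining from scratch for simplicity)."""
        configs = [sample_config() for _ in range(n_configs)]
        resource = min_resource
        while True:
            # Evaluate every surviving configuration at the current budget.
            results = sorted(((train_and_eval(c, resource), c) for c in configs),
                             key=lambda pair: pair[0])  # lower loss is better
            if len(configs) == 1 or resource >= max_resource:
                return results[0]  # (best loss, best configuration)
            # Promote only the top 1/reduction_factor to the next rung.
            keep = max(1, len(results) // reduction_factor)
            configs = [c for _, c in results[:keep]]
            resource = min(resource * reduction_factor, max_resource)

Each rung waits for every surviving configuration to finish before promoting the top fraction; this synchronization barrier is the main obstacle to parallelizing SHA that ASHA, presented in Section 3, is designed to remove.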
We next perform a thorough comparison of several hyperparameter tuning methods in both the sequential and parallel settings. We focus on 'mature' methods, i.e., well-established techniques that have been empirically and/or theoretically studied to an extent that they could be considered for adoption in a production-grade system. In the sequential setting, we compare SHA with Fabolas (Klein et al., 2017a), Population Based Training (PBT) (Jaderberg et al., 2017), and BOHB (Falkner et al., 2018), state-of-the-art methods that exploit partial training. Our results show that SHA outperforms these methods, which, when coupled with SHA's simplicity and theoretical grounding, motivates the use of a SHA-based method in production. We further verify that SHA and ASHA achieve similar results. In the parallel setting, our experiments demonstrate that ASHA addresses the intrinsic issues of parallelizing SHA, scales linearly with the number of workers, and exceeds the performance of PBT, BOHB, and Vizier (Golovin et al., 2017), Google's internal hyperparameter optimization service.

Finally, based on our experience developing ASHA within Determined AI's production-quality machine learning system that offers hyperparameter tuning as a service, we describe several systems design decisions and optimizations that we explored as part of the implementation. We focus on four key considerations: (1) streamlining the user interface to enhance usability; (2) autoscaling parallel training to systematically balance the tradeoff between lower latency in individual model training and higher throughput in total configuration evaluation; (3) efficiently scheduling ML jobs to optimize multi-tenant cluster utilization; and (4) tracking parallel hyperparameter tuning jobs for reproducibility.

2 RELATED WORK

We will first discuss related work that motivated our focus on parallelizing SHA for the large-scale regime. We then provide an overview of methods for parallel hyperparameter tuning, from which we identify a mature subset to compare to in our empirical studies (Section 4). Finally, we discuss related work on systems for hyperparameter optimization.

Sequential Methods. Existing hyperparameter tuning methods attempt to speed up the search for a good configuration by either adaptively selecting configurations or adaptively evaluating configurations. Adaptive configuration selection approaches attempt to identify promising regions of the hyperparameter search space from which to sample new configurations to evaluate (e.g., Hutter et al., 2011). Adaptive configuration evaluation approaches instead allocate more training "resources" to promising configurations. Previous methods like György & Kocsis (2011), Agarwal et al. (2011), and Sabharwal et al. (2016) provide theoretical guarantees under strong assumptions on the convergence behavior of intermediate losses, while Krueger et al. (2015) rely on a heuristic early-stopping rule based on sequential analysis to terminate poor configurations.

In contrast, SHA (Jamieson & Talwalkar, 2015) and Hyperband (Li et al., 2018) are adaptive configuration evaluation approaches which do not have the aforementioned drawbacks and have achieved state-of-the-art performance on several empirical tasks. SHA serves as the inner loop for Hyperband, with Hyperband automating the choice of the early-stopping rate by running different variants of SHA. While the appropriate choice of early-stopping rate is problem dependent, Li et al. (2018)'s empirical results show that aggressive early stopping works well for a wide variety of tasks. Hence, we focus on adapting SHA to the parallel setting in Section 3, though we also evaluate the corresponding asynchronous Hyperband method.
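To illustrate how Hyperband wraps SHA, the sketch below runs several brackets of the hypothetical successive_halving routine from the earlier sketch, each with a different starting budget, i.e., a different early-stopping rate. The bracket sizing here is deliberately simplified and does not reproduce the exact budget accounting of Li et al. (2018).

    import math

    def hyperband(sample_config, train_and_eval, max_resource=81, eta=3):
        """Toy Hyperband outer loop: each bracket is one SHA run with a
        different early-stopping rate, from very aggressive (many configs,
        tiny initial budget) to none (few configs, full budget)."""
        s_max = round(math.log(max_resource, eta))
        best = None
        for s in reversed(range(s_max + 1)):
            n_configs = (s_max + 1) * eta ** s // (s + 1)
            bracket_best = successive_halving(
                sample_config, train_and_eval,
                n_configs=n_configs,
                min_resource=max(1, max_resource // eta ** s),
                max_resource=max_resource,
                reduction_factor=eta,
            )
            if best is None or bracket_best[0] < best[0]:
                best = bracket_best
        return best[1]  # configuration with the lowest observed loss

Hyperband hedges across these brackets because the right early-stopping rate is problem dependent; as noted above, aggressive rates tend to work well in practice, which is why this work focuses on adapting SHA itself to the parallel setting.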
Hybrid approaches combine adaptive configuration selection and evaluation (Swersky et al., 2013; 2014; Domhan et al., 2015; Klein et al., 2017a). Li et al. (2018) showed that SHA/Hyperband outperforms SMAC with the learning-curve-based early-stopping method introduced by Domhan et al. (2015). In contrast, Klein et al. (2017a) reported state-of-the-art performance for Fabolas on several tasks in comparison to Hyperband and other leading methods. However, our results in Section 4.1 demonstrate that under an appropriate experimental setup, SHA and Hyperband in fact outperform Fabolas. Moreover, we note that Fabolas, along with most other Bayesian optimization approaches, can be parallelized using a constant liar (CL) type heuristic (Ginsbourger et al., 2010; González et al., 2016). However, the parallel version will underperform the sequential version, since the latter uses a more accurate posterior to propose new points. Hence, our comparisons to these methods are restricted to the sequential setting.

Other hybrid approaches combine Hyperband with adaptive sampling. For example, Klein et al. (2017b) combined Bayesian neural networks with Hyperband by first training a Bayesian neural network to predict learning curves and then using the model to select promising configurations to use as inputs to Hyperband. More recently, Falkner et al. (2018) introduced BOHB, a hybrid method combining Bayesian optimization with Hyperband.
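As a footnote to the constant liar (CL) discussion above, the snippet below sketches the generic CL batching pattern under stated assumptions: fit_surrogate and maximize_acquisition are hypothetical placeholders for whatever posterior model and acquisition function a given Bayesian optimizer uses, not real library calls.

    def propose_batch(history, pending, batch_size, fit_surrogate,
                      maximize_acquisition, lie):
        """Constant liar heuristic: label pending and just-proposed points with a
        constant "lie" (e.g., the worst loss observed so far), refit the surrogate
        on this fantasized data, and take the acquisition maximizer; repeat until
        the batch is full. Illustration only."""
        fantasy = list(history) + [(x, lie) for x in pending]
        batch = []
        for _ in range(batch_size):
            surrogate = fit_surrogate(fantasy)
            x_next = maximize_acquisition(surrogate)
            batch.append(x_next)
            # Pretend the lie was observed at x_next so the next proposal is
            # steered away from re-picking the same point.
            fantasy.append((x_next, lie))
        return batch

Every proposal after the first is made against a fantasized posterior rather than true observations, which is exactly why the parallel variant is expected to underperform its sequential counterpart.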
