Balancing Efficiency and Fairness in Heterogeneous GPU Clusters for Deep Learning
Shubham Chaudhary, Ramachandran Ramjee, Muthian Sivathanu, Nipun Kwatra, Srinidhi Viswanatha
Microsoft Research India

Abstract

We present Gandivafair, a distributed, fair-share scheduler that balances the conflicting goals of efficiency and fairness in GPU clusters for deep learning training (DLT). Gandivafair provides performance isolation between users, enabling multiple users to share a single cluster and thus maximizing cluster efficiency. Gandivafair is the first scheduler that allocates cluster-wide GPU time fairly among active users.

Gandivafair achieves efficiency and fairness despite cluster heterogeneity. Data centers host a mix of GPU generations because of the rapid pace at which newer and faster GPUs are released. As the newer generations face higher demand from users, older GPU generations suffer poor utilization, thus reducing cluster efficiency. Gandivafair profiles the variable marginal utility that different jobs derive from newer GPUs, and transparently incentivizes users to use older GPUs through a novel resource-trading mechanism that maximizes cluster efficiency without affecting the fairness guarantees of any user. With a prototype implementation and evaluation on a heterogeneous 200-GPU cluster, we show that Gandivafair achieves both fairness and efficiency under realistic multi-user workloads.

ACM Reference Format:
Shubham Chaudhary, Ramachandran Ramjee, Muthian Sivathanu, Nipun Kwatra, and Srinidhi Viswanatha. 2020. Balancing Efficiency and Fairness in Heterogeneous GPU Clusters for Deep Learning. In Fifteenth European Conference on Computer Systems (EuroSys '20), April 27–30, 2020, Heraklion, Greece. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3342195.3387555

1 Introduction

Love only grows by sharing. You can only have more for yourself by giving it away to others.
- Brian Tracy

Several products that are an integral part of modern living, such as web search and voice assistants, are powered by deep learning. Companies building such products spend significant resources on deep learning training (DLT), managing large GPU clusters with tens of thousands of GPUs. Multiple teams compete for these GPUs to run DLT jobs. Partitioning GPUs statically across multiple users provides predictability and performance isolation, but results in poor cluster utilization.

A single shared cluster across all users is attractive for overall efficiency, but to be practical, such a cluster must guarantee that each user will get at least the same performance as they would have with a statically partitioned cluster. In other words, if a user A is entitled to a 20% global share of GPUs, then regardless of other jobs/users running on the shared cluster, the effective performance of user A in the shared cluster must be at least the same as if A ran on a dedicated cluster with 20% of the GPUs. If user A is unable to utilize their quota, the unused capacity must be shared across other active users, thus maximizing cluster efficiency.

An additional dimension that complicates sharing is hardware heterogeneity, a particularly stark problem in GPU clusters. As newer GPU generations get released at a rapid pace, large clusters, over time, typically contain a mix of heterogeneous GPUs. Users prefer newer generations of GPUs for their higher performance, leaving the older generations under-utilized. With a single shared cluster, the scheduler needs to intelligently allocate GPUs of different generations across users, to maximize efficiency while ensuring fairness.

Unfortunately, while several DLT job schedulers have been proposed in the literature [19, 33, 41], none of them supports user fairness even in homogeneous clusters. In fact, none of them has a notion of users; they operate at the job level, focusing either on improving cluster efficiency [41] or on job-completion time [19, 33]. Even schedulers used in companies today [25] simply divide a cluster statically into virtual clusters to isolate one group from another, resulting in poor efficiency.

In this paper, we present Gandivafair, a scheduler that guarantees cluster-wide fair share of GPU minutes across users in a shared cluster of heterogeneous GPUs, while ensuring cluster efficiency by distributing unused quota fairly among other active users. Like traditional fair-share schedulers [39], Gandivafair uses the notion of tickets to provide a fair share of cluster resources to a particular user. Gandivafair assumes that the GPU is the only dominant resource for DLT jobs [33, 41]. However, if network/CPU can also become bottlenecks, one could extend Gandivafair to incorporate a fairness metric like dominant resource fairness [14] for apportioning multiple resources fairly.

While big-data schedulers like Yarn [38] also support user fairness, there are two key differences between big-data jobs and DLT jobs that make big-data schedulers unsuitable for the deep learning setting. First, unlike Yarn, a DLT job scheduler needs to be gang-aware, i.e., it needs to schedule all the GPUs required by a DLT job in an all-or-nothing manner. Second, big-data schedulers resort to job preemption during over-subscription. However, DLT jobs are typically long-running, and their preemption can result in the loss of hours or days of job state. Instead, Gandivafair relies on job migration as a key primitive for enforcing fairness without forfeiting job state.

Gandivafair achieves fairness and efficiency using three key techniques. First, it uses a novel split, gang-aware stride scheduler to enforce fairness at short time-scales (a time quantum of a few minutes). A central scheduler is responsible for gang-aware scheduling of large jobs, i.e., jobs that require a large number of GPUs and span multiple servers, while a local, per-server gang-aware scheduler schedules small jobs, i.e., jobs whose GPU requirement fits within that server. Such a split design allows the central scheduler to coordinate multiple servers, which is necessary for scheduling a large job in a gang-aware manner, while the distributed local schedulers managing small jobs provide scalability.
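To give a rough feel for the stride mechanism underlying both schedulers, the following is a minimal sketch of gang-aware stride scheduling on a single server. The Job class, the schedule_quantum function, and the ticket values are hypothetical and chosen for illustration; Gandivafair's actual scheduler, described later in the paper, differs in detail.

```python
# Minimal sketch of gang-aware stride scheduling on one server (illustrative only;
# Job and schedule_quantum are hypothetical names, not Gandivafair's actual API).

LARGE = 10_000  # stride constant: stride = LARGE / tickets

class Job:
    def __init__(self, name, tickets, ngpus):
        self.name = name
        self.tickets = tickets        # share of the owning user's tickets
        self.ngpus = ngpus            # gang size: all GPUs must be granted together
        self.stride = LARGE / tickets
        self.pass_ = 0.0              # jobs with the lowest pass value run next

def schedule_quantum(jobs, server_gpus):
    """Pick the set of gangs to run for one time quantum (a few minutes)."""
    free = server_gpus
    chosen = []
    # Consider jobs in increasing pass order (most "owed" GPU time first).
    for job in sorted(jobs, key=lambda j: j.pass_):
        if job.ngpus <= free:         # all-or-nothing: the whole gang must fit
            chosen.append(job)
            free -= job.ngpus
            job.pass_ += job.stride   # charge the job for the quantum it receives
    return chosen

# Example: user A holds 100 tickets split across two 2-GPU jobs, user B holds
# 100 tickets in a single 2-GPU job; the server has 4 GPUs.
jobs = [Job("a1", 50, 2), Job("a2", 50, 2), Job("b1", 100, 2)]
for t in range(6):
    running = schedule_quantum(jobs, server_gpus=4)
    print(t, [j.name for j in running])
```

In this toy run, user B's single job ends up with roughly the same aggregate GPU time as user A's two jobs combined, which is the user-level (rather than job-level) notion of fairness the tickets are meant to capture.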
However, the split scheduler by itself cannot guarantee fairness as jobs arrive and depart. Thus, a second component of Gandivafair is a load balancer that uses job migration as a key mechanism to distribute jobs, weighted by their tickets, evenly throughout the cluster.

Third, Gandivafair addresses hardware heterogeneity by employing a novel technique of automatic resource trading across users. From a fairness perspective, Gandivafair maps user tickets to a proportional share of each generation of GPUs in the cluster. For example, if the cluster had 5000 V100 and 10000 P100 GPUs, a user with a 20% share of the cluster would be entitled to 1000 V100s and 2000 P100s, and Gandivafair guarantees that the user's performance would be at least the same as on a dedicated cluster with 1000 V100s and 2000 P100s. While preserving this guarantee, Gandivafair uses automatic trading to maximize cluster efficiency. The key insight behind Gandivafair's resource trading is to leverage the varying marginal utility of faster GPUs for different DLT jobs. As an example, if jobs of user A achieve a speedup of 1.25x on a V100 compared to a K80 GPU, while jobs of user B achieve a 5x speedup on a V100, then user B has a 4x higher marginal utility and would be better off trading four of their K80 GPUs in exchange for one V100; both users benefit from this trade, and cluster efficiency also improves. As we show in Section 2, this speedup factor varies over a wide range across jobs, enabling such win-win trades that maximize cluster efficiency. Furthermore, the mechanism prevents users from gaming these incentives, since it leverages the incentive-compatible properties of second-price auctions [32].
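To make the trade arithmetic concrete, the short sketch below reproduces the example above in code. The pricing rule shown (one V100 priced at the ratio of the two speedups) is only an illustration of why the trade is win-win; the exact auction-based pricing used by Gandivafair is described later in the paper.

```python
# Worked version of the trade example from the text (illustrative sketch only).

speedup = {"A": 1.25, "B": 5.0}   # profiled V100 speedup over a K80 for each user's jobs

# User B's marginal utility for a V100 is 5.0 / 1.25 = 4x higher than user A's,
# so the example prices one V100 at 4 K80s.
rate = speedup["B"] / speedup["A"]            # 4.0 K80s per V100

# Throughput is measured in "K80-equivalents" for each user's own jobs.
gain_A = rate - speedup["A"]                  # A gives 1 V100 (worth 1.25), gets 4 K80s -> +2.75
gain_B = speedup["B"] - rate                  # B gives 4 K80s, gets 1 V100 (worth 5.0) -> +1.00

print(f"exchange rate: {rate:g} K80s per V100")
print(f"user A net gain: {gain_A:+.2f} K80-equivalents")
print(f"user B net gain: {gain_B:+.2f} K80-equivalents")

# Any rate strictly between 1.25 and 5 K80s-per-V100 leaves both users better off,
# which is what makes such trades win-win and improves overall cluster efficiency.
```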
We have implemented Gandivafair as a custom scheduler in Kubernetes [10] and evaluate it on a cluster of 200 GPUs, running a wide diversity of jobs across multiple users. We show that Gandivafair achieves inter-user fairness at the cluster-wide level, while also maximizing cluster efficiency. We also demonstrate the efficacy of automatic resource trading in Gandivafair, which results in a faster progress rate for user jobs compared to a fairness-only scheduler without trading.

We make the following contributions in this paper:

• We present the first cluster scheduler for deep learning training jobs that guarantees user-level fair share of cluster-wide GPU minutes, while maximizing cluster efficiency.
• We demonstrate that migration can be used as a first-class primitive to achieve cluster-wide fairness without resorting to preemption.
• We present an automated trading strategy to handle GPU heterogeneity, by leveraging the variable marginal utility of faster GPUs for various jobs, thus improving efficiency while ensuring fairness.
• Using a prototype implementation on a 200-GPU heterogeneous cluster, we show that Gandivafair is able to achieve its goals of fairness and efficiency under large-scale multi-user workloads.
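As a rough illustration of the second contribution (migration as a first-class primitive), the sketch below shows one simple way a load balancer could migrate jobs to even out ticket-weighted load across servers. The greedy heuristic, data layout, and function names here are assumptions for exposition, not Gandivafair's actual load-balancing policy.

```python
# Illustrative sketch of ticket-weighted load balancing via migration (hypothetical
# greedy heuristic; Gandivafair's actual load balancer is described later in the paper).

def server_load(server):
    """Ticket-weighted GPU demand currently placed on a server."""
    return sum(job["tickets"] * job["ngpus"] for job in server["jobs"])

def rebalance(servers, max_migrations=10):
    """Repeatedly migrate a job from the most- to the least-loaded server,
    as long as doing so actually narrows the load gap."""
    migrations = []
    for _ in range(max_migrations):
        src = max(servers, key=server_load)
        dst = min(servers, key=server_load)
        gap = server_load(src) - server_load(dst)
        # Move the smallest job that still reduces the imbalance; migration preserves
        # job state, unlike preemption, which can lose hours or days of training.
        candidates = [j for j in src["jobs"] if j["tickets"] * j["ngpus"] < gap]
        if not candidates:
            break
        job = min(candidates, key=lambda j: j["tickets"] * j["ngpus"])
        src["jobs"].remove(job)
        dst["jobs"].append(job)
        migrations.append((job["name"], src["name"], dst["name"]))
    return migrations

servers = [
    {"name": "s1", "jobs": [{"name": "a1", "tickets": 100, "ngpus": 4},
                            {"name": "b1", "tickets": 50,  "ngpus": 2}]},
    {"name": "s2", "jobs": []},
]
print(rebalance(servers))   # [('b1', 's1', 's2')]: only moves that shrink the gap are made
```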
