Themis: Fair and Efficient GPU Cluster Scheduling for Machine Learning Workloads

Kshiteej Mahajan, Arjun Balasubramanian, Arjun Singhvi, Shivaram Venkataraman, Aditya Akella, Shuchi Chawla (University of Wisconsin - Madison), Amar Phanishayee (Microsoft Research)

Abstract: Modern distributed machine learning (ML) training workloads benefit significantly from leveraging GPUs. However, significant contention ensues when multiple such workloads are run atop a shared cluster of GPUs. A key question is how to fairly apportion GPUs across workloads. We find that established cluster scheduling disciplines are a poor fit because of ML workloads' unique attributes: ML jobs have long-running tasks that need to be gang-scheduled, and their performance is sensitive to tasks' relative placement.

We propose THEMIS, a new scheduling framework for ML training workloads. Its GPU allocation policy enforces that ML workloads complete in a finish-time fair manner, a new notion we introduce. To capture placement sensitivity and ensure efficiency, THEMIS uses a two-level scheduling architecture where ML workloads bid on available resources that are offered in an auction run by a central arbiter. Our auction design allocates GPUs to winning bids by trading off fairness for efficiency in the short term, but ensuring finish-time fairness in the long term. Our evaluation on a production trace shows that THEMIS can improve fairness by more than 2.25X and is ~5% to 250% more cluster efficient in comparison to state-of-the-art schedulers.

1 Introduction

With the widespread success of machine learning (ML) for tasks such as object detection, speech recognition, and machine translation, a number of enterprises are now incorporating ML models into their products. Training individual ML models is time- and resource-intensive, with each training job typically executing in parallel on a number of GPUs.

With different groups in the same organization training ML models, it is beneficial to consolidate GPU resources into a shared cluster. Similar to existing clusters used for large scale data analytics, shared GPU clusters for ML have a number of operational advantages, e.g., reduced development overheads, lower costs for maintaining GPUs, etc. However, today, there are no ML workload-specific mechanisms to share a GPU cluster in a fair manner.

Our conversations with cluster operators indicate that fairness is crucial; specifically, sharing an ML cluster becomes attractive to users only if they have the appropriate sharing incentive. That is, if there are a total of N users sharing a cluster C, every user's performance should be no worse than using a private cluster of size C/N. Absent such incentive, users are either forced to sacrifice performance and suffer long wait times for getting their ML jobs scheduled, or abandon shared clusters and deploy their own expensive hardware.

Providing sharing incentive through fair scheduling mechanisms has been widely studied in prior cluster scheduling frameworks, e.g., Quincy [18], DRF [8], and Carbyne [11]. However, these techniques were designed for big data workloads, and while they are widely used to manage GPU clusters today, they are far from effective.

The key reason is that ML workloads have unique characteristics that make existing "fair" allocation schemes actually unfair. First, unlike batch analytics workloads, ML jobs have long-running tasks that need to be scheduled together, i.e., gang-scheduled. Second, each task in a job often runs for a number of iterations while synchronizing model updates at the end of each iteration. This frequent communication means that jobs are placement-sensitive, i.e., placing all the tasks for a job on the same machine or the same rack can lead to significant speedups. Equally importantly, as we show, ML jobs differ in their placement-sensitivity (Section 3.1.2).

In Section 3, we show that having long-running tasks means that established schemes such as DRF – which aims to equally allocate the GPUs released upon task completions – can arbitrarily violate sharing incentive. We show that even if GPU resources were released/reallocated on fine time-scales [13], placement sensitivity means that jobs with the same aggregate resources could have widely different performance, violating sharing incentive. Finally, heterogeneity in placement sensitivity means that existing scheduling schemes also violate Pareto efficiency and envy-freedom, two other properties that are central to fairness [34].

Our scheduler, THEMIS, addresses these challenges and supports sharing incentive, Pareto efficiency, and envy-freedom for ML workloads. It multiplexes a GPU cluster across ML applications (Section 2), or apps for short, where every app consists of one or more related ML jobs, each running with different hyper-parameters, to train an accurate model for a given task. To capture the effect of long-running tasks and placement sensitivity, THEMIS uses a new long-term fairness metric, finish-time fairness, which is the ratio of the running time in a shared cluster with N apps to the running time alone in a 1/N cluster. THEMIS's goal is thus to minimize the maximum finish-time fairness across all ML apps while efficiently utilizing cluster GPUs. We achieve this goal using two key ideas.

First, we propose to widen the API between ML apps and the scheduler to allow apps to specify placement preferences. We do this by introducing the notion of a round-by-round auction. THEMIS uses leases to account for long-running ML tasks, and auction rounds start when leases expire. At the start of a round, our scheduler requests apps for their finish-time fairness metrics, and makes all available GPUs visible to a fraction of apps that are currently farthest in terms of their fairness metric. Each such app has the opportunity to bid for subsets of these GPUs as a part of an auction; bid values reflect the app's new (placement-sensitive) finish-time fairness metric from acquiring different GPU subsets. A central arbiter determines the global winning bids to maximize the aggregate improvement in the finish-time fairness metrics across all bidding apps. Using auctions means that we need to ensure that apps are truthful when they bid for GPUs. Thus, we use a partial allocation auction that incentivizes truth telling, and ensures Pareto-efficiency and envy-freeness by design.

While a far-from-fair app may lose an auction round, perhaps because it is placed less ideally than another app, its bid values for subsequent auctions naturally increase (because a losing app's finish-time fairness worsens), thereby improving the odds of it winning future rounds. Thus, our approach converges to fair allocations over the long term, while staying efficient and placement-sensitive in the short term.

Second, we present a two-level scheduling design that contains a centralized inter-app scheduler at the bottom level, and a narrow API to integrate with existing hyper-parameter tuning frameworks at the top level. A number of existing frameworks such as Hyperdrive [29] and HyperOpt [3] can intelligently apportion GPU resources between various jobs in a single app, and in some cases also terminate a job early if its progress is not promising. Our design allows apps to directly use such existing hyper-parameter tuning frameworks. We describe how THEMIS accommodates various hyper-parameter tuning systems and how its API is exercised in extracting relevant inputs from apps when running auctions.

We implement THEMIS atop Apache YARN 3.2.0, and we evaluate by replaying workloads from a large enterprise trace. Our results show that THEMIS is at least 2.25X more fair (finish-time fair) than state-of-the-art schedulers while also improving cluster efficiency by ~5% to 250%. To further understand our scheduling decisions, we perform an event-driven simulation using the same trace, and our results show that THEMIS offers greater benefits when we increase the […]

[…] such as learning rate, momentum etc. [21, 29] to identify and train the best target model for the activity at hand.

Each job's constituent work is performed by a number of parallel tasks. At any given time, all of a job's tasks collectively process a mini-batch of training data; we assume that the size of the batch is fixed for the duration of a job. Each task typically processes a subset of the batch, and, starting from an initial version of the model, executes multiple iterations of the underlying learning algorithm to improve the model. We assume all jobs use the popular synchronous SGD [4].

We consider the finish time of an app to be when the best model and relevant hyper-parameters have been identified. Along the course of identifying such a model, the app may decide to terminate some of its constituent jobs early [3, 29]; such jobs may be exploring hyper-parameters that are clearly sub-optimal (the jobs' validation accuracy improvement over iterations is significantly worse than other jobs in the same app). For apps that contain a single job, finish time is the time taken to train this model to a target accuracy or maximum number of iterations.

2.2 Characterizing Production ML Apps

We perform an analysis of the properties of GPU-based ML training workloads by analyzing workload traces obtained from Microsoft. The GPU cluster we study supports over 5000 unique users. We restrict our analysis to a subset of the trace that contains 85 ML training apps submitted using a hyper-parameter tuning framework.

GPU clusters are known to be heavily contented [19], and we find this also holds true in the subset of the trace of ML apps we consider (Figure 1). For instance, we see that GPU demand is bursty and the average GPU demand is ~50 GPUs.

We also use the trace to provide a first-of-a-kind view into the characteristics of ML apps. As mentioned in Section 2.1, apps may either train a single model to reach a target accuracy (1 job) or may use the cluster to explore various hyper-parameters for a given model (n jobs).
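To make the finish-time fairness metric concrete, the following sketch shows the ρ bookkeeping and the selection of far-from-fair bidders at the start of an auction round, as described in the introduction. The function names and the filtering fraction are illustrative assumptions, not THEMIS's actual API:

```python
# Sketch of finish-time fairness (rho) bookkeeping. We assume each app can
# estimate t_shared (projected finish time under its current shared-cluster
# allocation) and t_independent (projected finish time on a dedicated
# 1/N-size cluster). All names here are illustrative, not THEMIS's API.

def finish_time_fairness(t_shared: float, t_independent: float) -> float:
    """rho = T_shared / T_independent; sharing incentive holds when rho <= 1."""
    return t_shared / t_independent

def select_bidders(rho_by_app: dict, fraction: float) -> list:
    """Make available GPUs visible only to the fraction of apps that are
    currently farthest from fair, i.e., those with the largest rho."""
    ranked = sorted(rho_by_app, key=rho_by_app.get, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]

# Example: app "a" runs twice as slowly as it would on its 1/N share
# (rho = 2.0), so it is farthest from fair and gets to bid.
rhos = {"a": finish_time_fairness(20.0, 10.0),   # rho = 2.0
        "b": finish_time_fairness(9.0, 10.0),    # rho = 0.9
        "c": finish_time_fairness(12.0, 10.0)}   # rho = 1.2
print(select_bidders(rhos, fraction=0.67))       # -> ['a', 'c']
```

The min-max objective then corresponds to repeatedly driving down the largest ρ in `rho_by_app`; apps with ρ > 1 are the ones whose sharing incentive is currently violated.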
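The arbiter's job in each auction round can likewise be sketched. Here each bidding app reports, per offered GPU subset, the improved (lower) ρ it estimates from winning that subset, and the arbiter picks disjoint winners maximizing the aggregate improvement. This is a deliberately simplified brute-force sketch of the round described above, not the paper's partial allocation mechanism, which additionally withholds part of the allocation to make truthful bidding incentive-compatible; the data shapes are assumptions:

```python
from itertools import permutations

def run_auction(current_rho, bids):
    """Assign disjoint GPU subsets to apps, maximizing total drop in rho.

    current_rho: {app: current finish-time fairness rho}
    bids: {app: {frozenset_of_gpu_ids: estimated new rho if app wins it}}
    Brute force over bidder orderings is fine at sketch scale; a real
    scheduler would use a smarter search.
    """
    best_gain, best_assign = 0.0, {}
    for order in permutations(bids):
        used, assign, gain = set(), {}, 0.0
        for app in order:
            # Keep only bids on still-free GPUs that actually improve rho.
            options = [(current_rho[app] - rho, subset)
                       for subset, rho in bids[app].items()
                       if not (subset & used) and rho < current_rho[app]]
            if options:
                g, subset = max(options, key=lambda o: o[0])
                used |= subset
                assign[app] = subset
                gain += g
        if gain > best_gain:
            best_gain, best_assign = gain, assign
    return best_assign, best_gain

current = {"a": 2.0, "b": 1.2}
bids = {"a": {frozenset({0, 1}): 1.0},
        "b": {frozenset({0}): 1.1, frozenset({0, 1}): 0.8}}
assign, gain = run_auction(current, bids)
# App "a" wins both GPUs: its drop in rho (2.0 -> 1.0) exceeds any of b's.
```

Note how the convergence argument from the introduction shows up here: if app "b" loses this round, its ρ grows while it waits, so its future improvements (and hence its bids) get larger.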
