BAGUA: Scaling up Distributed Learning with System Relaxations

Shaoduo Gan*, Jiawei Jiang, Binhang Yuan, Ce Zhang (ETH Zürich, Switzerland)
Xiangru Lian*, Rui Wang, Jianbin Chang, Chengjun Liu, Hongmei Shi, Shengzhuo Zhang, Xianghong Li, Tengxu Sun, Sen Yang, Ji Liu (Kuaishou Technology, China)

* Equal contribution. BAGUA is publicly available at https://github.com/BaguaSys/bagua.

July 2021
arXiv:2107.01499v3 [cs.LG] 12 Jul 2021

Abstract

Recent years have witnessed a growing list of systems for distributed data-parallel training. Existing systems largely fit into two paradigms, i.e., parameter server and MPI-style collective operations. On the algorithmic side, researchers have proposed a wide range of techniques to lower the communication via "system relaxations": quantization, decentralization, and communication delay. However, most, if not all, existing systems only rely on standard synchronous and asynchronous stochastic gradient (SG) based optimization, and therefore cannot take advantage of all possible optimizations that the machine learning community has been developing recently. Given this emerging gap between the current landscapes of systems and theory, we build BAGUA, a communication framework whose design goal is to provide a system abstraction that is both flexible and modular enough to support state-of-the-art system relaxation techniques for distributed training. Powered by this new system design, BAGUA can readily implement and extend a variety of state-of-the-art distributed learning algorithms. In a production cluster with up to 16 machines (128 GPUs), BAGUA outperforms PyTorch-DDP, Horovod, and BytePS in end-to-end training time by a significant margin (up to 1.95×) across a diverse range of tasks. Moreover, we conduct a rigorous tradeoff exploration showing that different algorithms and system relaxations achieve the best performance under different network conditions.

1 Introduction

The increasing scalability and performance of distributed machine learning systems has been one of the main driving forces behind the rapid advancement of machine learning techniques. From AlexNet [1] in 2012 to GPT-3 [2] in 2020, each leap in model quality is enabled by the growth of both the model size and the amount of data one can train a model with, along with a rapid increase in computation [3]. Behind this improvement are two major enabling factors: hardware accelerations (e.g., GPUs and TPUs) and the development of efficient and scalable distributed training algorithms [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. It is not unfair to say that a scalable distributed training system is the cornerstone of modern deep learning techniques.

| Alg.     | Sync.  | Precision  | Centralization | PyTorch-DDP | Horovod | BytePS | BAGUA |
|----------|--------|------------|----------------|-------------|---------|--------|-------|
| [23]     | Sync.  | Full Prec. | Centralized    | ✓           | ✓       | ✓      | ✓     |
| [13]     | Sync.  | Full Prec. | Decentralized  |             |         |        | ✓     |
| [4, 38]  | Sync.  | Low Prec.  | Centralized    | ✓           | ✓       | ✓      | ✓     |
| [17, 18] | Sync.  | Low Prec.  | Decentralized  |             |         |        | ✓     |
| [36]     | Async. | Full Prec. | Centralized    |             |         | ✓      | ✓*    |
| [16]     | Async. | Full Prec. | Decentralized  |             |         |        | ✓*    |
| [39]     | Async. | Low Prec.  | Centralized    |             |         |        | ✓*    |
| -        | Async. | Low Prec.  | Decentralized  |             |         |        |       |

Table 1: Different system relaxation techniques. The goal of BAGUA is to support these diverse communication patterns.
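To make the centralization axis of Table 1 concrete, below is a minimal NumPy sketch (not BAGUA's API) contrasting a centralized Allreduce-style average, where every worker obtains the exact global model, with one step of decentralized averaging over a ring, where each worker only exchanges with its neighbors. The worker count and tensor sizes are invented for illustration.

```python
# Illustrative only: a NumPy simulation (not BAGUA's API) contrasting the
# "Centralization" axis of Table 1. Worker count, ring topology, and tensor
# shapes are made up for this example.
import numpy as np

n_workers, dim = 4, 8
rng = np.random.default_rng(0)
local_models = [rng.standard_normal(dim) for _ in range(n_workers)]

# Centralized (Allreduce-style): every worker ends up with the global average.
global_avg = np.mean(local_models, axis=0)
centralized = [global_avg.copy() for _ in range(n_workers)]

# Decentralized (ring topology): each worker averages only with its two
# neighbors, so a single step moves less traffic over the busiest link,
# but workers agree only approximately until more steps are taken.
decentralized = [
    (local_models[(i - 1) % n_workers]
     + local_models[i]
     + local_models[(i + 1) % n_workers]) / 3.0
    for i in range(n_workers)
]

print(np.max(np.abs(centralized[0] - centralized[1])))      # 0.0: exact consensus
print(np.max(np.abs(decentralized[0] - decentralized[1])))  # > 0: approximate consensus
```

Even in this toy setting the tradeoff is visible: the decentralized step is cheaper per round, while the centralized step gives exact agreement.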
Current Landscape of Data Parallel Training Systems

In this paper, we scope ourselves to data parallel training, one of the most popular distributed training paradigms, in which the data set is partitioned across different workers and the model fits into a single device. Not surprisingly, recent years have witnessed a growing list of systems for distributed data parallel training. Existing systems fit into two paradigms, following the seminal work by Li et al. [23] on parameter server and by Sergeev et al. [24] on MPI collective operations such as Allreduce. Both paradigms have enabled industrial-scale distributed training systems [3]: Adam (Microsoft) [25], early TensorFlow (Google) [26], Poseidon (Petuum) [27], Angel (Tencent) [28], and BytePS (ByteDance) [29] are based on parameter server, while PyTorch-DDP (Facebook) [30], Mariana (Tencent) [31], MALT (NEC Labs) [32], NCCL (NVIDIA) [33], and Horovod (Uber) [24] are based on MPI-style collective operations. These systems often involve joint efforts from the machine learning, systems, and data management communities, and have been successful in making distributed training easier and more scalable.

Current Landscape of Data Parallel Training Algorithms

On the theory and algorithm side, researchers have also been active in improving the performance of standard synchronous and asynchronous stochastic gradient (SG) based algorithms. Rightly noticing that a major system bottleneck is communication, researchers have proposed a range of techniques to lower the communication overhead, mainly by "relaxing" certain aspects of the communication. Examples include (1) communication compression (e.g., quantization [4, 5, 6, 7], sparsification [8, 9, 10, 11], and error compensation [12]), (2) communication decentralization [13, 14, 15, 16, 17, 18], and (3) communication delay (e.g., LocalSGD [19, 20, 21, 22]) and asynchronization [16, 34, 35, 36, 37]. These techniques are optimized for different workloads and different network conditions. Together, they hold great promise to significantly decrease communication overheads, in terms of both bandwidth and latency, or to increase the tolerance to stragglers.
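As a concrete illustration of one such relaxation, the sketch below shows low-precision communication with error compensation, using a simple sign-based quantizer in the spirit of the compression techniques cited above. It is an illustrative NumPy sketch, not BAGUA's implementation; the quantizer, scaling, and tensor sizes are chosen only for the example.

```python
# Illustrative only: error-compensated 1-bit (sign) compression of a gradient,
# in the spirit of the quantization + error-compensation techniques cited above.
# The quantizer and scaling are simplified for this example.
import numpy as np

def compress_with_error_feedback(grad, error_buffer):
    """Quantize (grad + carried-over error) to its sign, scaled to preserve
    the mean magnitude, and remember what was lost for the next step."""
    corrected = grad + error_buffer
    scale = np.mean(np.abs(corrected))
    compressed = scale * np.sign(corrected)   # ~1 bit per entry plus one scale
    new_error = corrected - compressed        # state carried to the next step
    return compressed, new_error

rng = np.random.default_rng(0)
error = np.zeros(16)
for step in range(3):
    grad = rng.standard_normal(16)
    msg, error = compress_with_error_feedback(grad, error)
    # `msg` is what would be communicated; `error` is per-worker (or per-server)
    # state that a plain put/get parameter server has no natural place to hold.
```

Note that the error buffer is state that must persist across iterations on whichever side performs the compression, a point that becomes important in the discussion of system abstractions below.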
An Emerging Gap between System and Theory

In this paper, we are motivated by an emerging gap between the current landscapes of systems and theory: despite the recent advances in distributed learning theory and algorithms on system relaxations, most, if not all, existing systems only rely on standard synchronous and asynchronous stochastic gradient (SG) based algorithms. The main consequence is that existing systems are not taking advantage of all possible optimizations that the machine learning community has been developing, and potentially many real-world applications could be further accelerated. In this paper, we ask: Can we further accelerate distributed learning systems with system relaxations for communication? If so, what is the right system abstraction for this purpose?

Technical Challenges

Closing this gap requires far more than simply implementing these algorithms on top of the parameter server and Allreduce abstractions of existing systems. There are two challenges. First, it is challenging to support these system relaxations directly and naturally in a parameter server or an Allreduce paradigm. For example, it is difficult to use the put/get abstraction provided by a parameter server to support an algorithm that requires memory and state on the server side, which is needed by most communication compression algorithms that use error compensation. Similarly, it is hard for both paradigms to support decentralized communication. As a result, one has to revisit the design of system abstractions in a fundamental way in order to support many of today's relaxation algorithms. Second, we need to support modular system abstractions and optimizations to handle the diversity of these system relaxations. When existing systems such as Horovod [24] and BytePS [29] optimize for performance, they often focus on the communication pattern of a textbook SG based algorithm. When we hope to support a large collection of training algorithms, as illustrated in Table 1, we cannot optimize each one individually; instead, we have to understand how to automatically optimize this diverse set of algorithms in a common framework.

The BAGUA System and Our Contributions

Motivated by these two challenges, we build BAGUA, a communication framework whose design goal is to support state-of-the-art system relaxation techniques for distributed training. We make two technical contributions.

Our first contribution is the system design of BAGUA, which provides a modular design for communication. Our abstraction goes beyond the parameter server and Allreduce paradigms, and provides a collection of MPI-style collective operations to facilitate communication with different precision and centralization strategies. This abstraction is flexible and modular enough to support many algorithms, as illustrated in Table 1. Moreover, we also develop a simple automatic optimization framework that speeds up algorithms implemented within the BAGUA framework. The key behind this framework is automatic batching and scheduling of communications. Different from previous work such as Horovod [24] and BytePS [29], our optimization framework can be applied more widely, beyond the standard SG based algorithm.
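As a rough illustration of the batching idea (this is not BAGUA's actual scheduler; all shapes and names are invented for the example), the sketch below flattens several small gradient tensors into one contiguous bucket so that a single collective call can replace many small ones, amortizing per-message latency.

```python
# Illustrative only: flattening several small gradient tensors into one
# contiguous bucket, so that one collective call (e.g., a single Allreduce)
# replaces many small ones. Bucket layout and tensor shapes are made up;
# this is not BAGUA's actual scheduling logic.
import numpy as np

grads = [np.ones(3), np.ones(5) * 2.0, np.ones(4) * 3.0]   # per-layer gradients

# Pack: one flat buffer, remembering where each tensor lives.
offsets, pieces, pos = [], [], 0
for g in grads:
    offsets.append((pos, pos + g.size))
    pieces.append(g.ravel())
    pos += g.size
bucket = np.concatenate(pieces)

# A single communication call would operate on `bucket` here (omitted).

# Unpack: recover the individual gradients after communication.
restored = [bucket[a:b].reshape(grads[i].shape) for i, (a, b) in enumerate(offsets)]
assert all(np.array_equal(r, g) for r, g in zip(restored, grads))
```

In a real system, the scheduler additionally decides when each bucket is communicated so that communication overlaps with the backward computation.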
Our second contribution is an extensive empirical study centered around two hypotheses: (1) by supporting different system relaxation techniques, BAGUA is able to provide significant improvements over existing systems for real-world applications and workloads on real-world infrastructure; and (2) by supporting a diverse range of system relaxations, BAGUA is able to provide scalable ML training over diverse network conditions by allowing users to pick different algorithms. To this end, we conduct a large-scale empirical study with both benchmark tasks and real-world applications running at Kwai Inc. On a cluster with up to 16 machines (128 GPUs in total, an aggregate 2 petaFLOPS with Tensor Cores), we consider various network conditions following how V100 GPU machines (p3.8xlarge, p3.16xlarge, p3dn.24xlarge)