DLoBD: A Comprehensive Study of Deep Learning over Big Data Stacks on HPC Clusters

IEEE TRANSACTIONS ON MULTI-SCALE COMPUTING SYSTEMS, VOL. 4, NO. 4, OCTOBER-DECEMBER 2018

Xiaoyi Lu, Member, IEEE, Haiyang Shi, Rajarshi Biswas, M. Haseeb Javed, and Dhabaleswar K. Panda, Fellow, IEEE

The authors are with the Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43202. E-mail: {lu.932, shi.876, biswas.91, javed.19, panda.2}@osu.edu.

Manuscript received 16 Jan. 2018; revised 1 May 2018; accepted 23 May 2018. Date of publication 11 June 2018; date of current version 29 Jan. 2019. (Corresponding author: Xiaoyi Lu.) Recommended for acceptance by R. Grant. Digital Object Identifier no. 10.1109/TMSCS.2018.2845886.

Abstract—Deep Learning over Big Data (DLoBD) is an emerging paradigm for mining value from the massive amounts of gathered data. Many Deep Learning frameworks, such as Caffe and TensorFlow, now run over Big Data stacks such as Apache Hadoop and Spark. Even though there is a lot of activity in the field, there is a lack of comprehensive studies analyzing the impact of RDMA-capable networks and CPUs/GPUs on DLoBD stacks. To fill this gap, we propose a systematic characterization methodology and conduct extensive performance evaluations on four representative DLoBD stacks (i.e., CaffeOnSpark, TensorFlowOnSpark, MMLSpark/CNTKOnSpark, and BigDL) to expose interesting trends regarding performance, scalability, accuracy, and resource utilization. Our observations show that RDMA-based designs for DLoBD stacks can achieve up to 2.7x speedup compared to the IPoIB-based scheme. The RDMA scheme also scales better and utilizes resources more efficiently than IPoIB. In most cases, GPU-based schemes outperform CPU-based designs, but we see that for LeNet on MNIST, CPU + MKL can achieve better performance than GPU and GPU + cuDNN on 16 nodes. Through our evaluation and an in-depth analysis of TensorFlowOnSpark, we find that there is large room to improve the designs of current-generation DLoBD stacks.

Index Terms—DLoBD, deep learning, big data, CaffeOnSpark, TensorFlowOnSpark, MMLSpark (CNTKOnSpark), BigDL, RDMA

1 INTRODUCTION

As the explosive growth of "Big Data" continues, there is an increasing demand for extracting Big Value out of Big Data to keep revenue growing continuously. To mine more value from the massive amounts of gathered data, Deep Learning over Big Data (DLoBD) is becoming one of the most efficient analysis paradigms. With this emerging paradigm, more and more Deep Learning tools or libraries are being run over Big Data stacks, such as the most popular representatives, Apache Hadoop and Spark. By combining the advanced capabilities of Deep Learning libraries (e.g., Caffe [1], TensorFlow [2], and Microsoft Cognitive Toolkit (CNTK) [3]) and Big Data stacks (e.g., Spark and Hadoop), the DLoBD approach can enable powerful distributed Deep Learning on Big Data analytics clusters, with at least the following three major benefits. 1) From the data analytics workflow perspective, if we run Deep Learning jobs on Big Data stacks, we can easily integrate Deep Learning components with other Big Data processing components in the whole workflow. 2) From the data locality perspective, since the large amount of data gathered in companies is typically already stored or being processed in Big Data stacks (e.g., stored in HDFS), Deep Learning jobs on Big Data stacks can easily access the data without moving it back and forth. 3) From the infrastructure management perspective, we do not need to set up new dedicated Deep Learning clusters if we can run Deep Learning jobs directly on existing Big Data analytics clusters. This can significantly reduce the costs of device purchasing and infrastructure management.

With the benefits of integrating Deep Learning capabilities with Big Data stacks, we see a lot of activity in the community to build DLoBD stacks, such as CaffeOnSpark,1 SparkNet [4], TensorFlowOnSpark,2 DL4J [5], BigDL [6], and Microsoft Machine Learning for Apache Spark (MMLSpark or CNTKOnSpark) [7]. Many of these DLoBD stacks are also being deployed and used on Cloud Computing platforms, such as Microsoft Azure. For DLoBD stacks, one of the typical concerns is their "sub-optimal" performance. As shown in Fig. 1, with the convergence of HPC, Big Data, and Deep Learning, these emerging DLoBD stacks are being designed to leverage Remote Direct Memory Access (RDMA) capable high-performance interconnects and multi-/many-core based CPUs/GPUs. These powerful devices provide many opportunities to speed up DLoBD stacks.

1.1 Motivation

Fig. 1 shows the main components that a typical Deep Learning job running over DLoBD stacks involves. (Fig. 1. Convergence of deep learning, big data, and HPC; overview of the corresponding characterization scope of DLoBD stacks.) As we can see, there are at least five major layers: the Deep Learning model or application layer, the Deep Learning library layer, the Big Data analytics framework layer, the resource scheduler layer, and the distributed file system layer. There are many efforts in the field to improve the performance of each of these layers. For example, the default Caffe, TensorFlow, and CNTK can leverage high-performance GPU accelerators with the co-designed, efficient cuDNN [8] library, while BigDL can run efficiently on Intel CPUs or Xeon Phi devices by utilizing the highly optimized Intel MKL [9] library or BLAS libraries. Yahoo! researchers have proposed RDMA-based communication in CaffeOnSpark and TensorFlowOnSpark. Our earlier work [10], [11], [12] has proposed RDMA-based designs for Spark and Hadoop. Even though these works have been proposed and well studied with their targeted workloads and environments, there is a lack of systematic studies analyzing the impact of RDMA-capable networks and CPUs/GPUs on DLoBD stacks with different Deep Learning models and datasets. We lack an understanding of the impact of this advanced hardware and the associated efficient building blocks (e.g., RDMA, GPUDirect RDMA, cuDNN, and MKL) on various Deep Learning aspects, including performance, accuracy, scalability, and resource utilization. This leads to the following broad challenges:

- How are current-generation DLoBD stacks designed? Why do they need high-performance communication subsystems?
- Can RDMA-based designs in DLoBD stacks improve performance, scalability, and resource utilization on high-performance interconnects, GPUs, and multi-core CPUs?
- What are the performance characteristics of representative DLoBD stacks when they run typical Deep Learning workloads on RDMA-capable high-speed networks?
- What kind of trends and insights can we observe in our evaluations for performance and accuracy, …

To address these challenges, we select four representative DLoBD stacks (i.e., CaffeOnSpark, TensorFlowOnSpark, MMLSpark, and BigDL) based on their popularity and designs. We overview their architectural differences and similarities in Section 2, which helps us design our characterization methodology. We then propose a systematic characterization methodology in Section 3 to cover a broad range of evaluation dimensions, such as comparing different networking protocols (i.e., IPoIB versus RDMA), comparing different ways of integration with Big Data stacks (i.e., in-band versus out-of-band communication), and comparing solutions using different computing devices (i.e., CPU versus GPU). Our characterization focuses on four different perspectives: performance, accuracy, scalability, and resource utilization.

Section 4 presents our detailed evaluation, which shows that RDMA-based DLoBD stacks can achieve up to 2.7x speedup compared to the IPoIB-based scheme. RDMA-based designs can also scale better and utilize resources more efficiently than the IPoIB scheme. In most cases, we see that GPU-based Deep Learning can outperform CPU-based designs, but not always: for LeNet on MNIST, CPU + MKL can achieve better performance than GPU and GPU + cuDNN on 16 nodes.

In addition to benchmarking these DLoBD stacks in a black-box manner, we provide an in-depth analysis of TensorFlowOnSpark in Section 5. This analysis takes a vertical approach to break down Deep Learning workload performance across the DLoBD layers. From the analysis, we find that up to 15.5 percent of the time can be spent in the Apache Hadoop YARN scheduler layer, while up to 18.1 percent of the execution time can be consumed by the Spark job execution layer. Compared to native TensorFlow, TensorFlowOnSpark gains the benefits of automatically scaling out Deep Learning applications across nodes and accessing data easily from HDFS. At the same time, our studies show that the community may need to spend more effort to reduce the overhead of heavy DLoBD stacks, even though this kind of overhead may be negligible in long-running Deep Learning jobs.

From the communication perspective, the analysis in Section 5 shows that the RDMA-based communication channel in TensorFlow or TensorFlowOnSpark has the potential to benefit large or complex Deep Learning models. Our evaluation shows that training a ResNet50 model on TensorFlow can gain around 21 percent performance benefit with RDMA compared to using the IPoIB protocol.

Through our evaluation and analysis, we see that there is still large room to improve the designs of current-generation DLoBD stacks. More insights are shared in this paper to guide the design of next-generation DLoBD stacks. Section 6 discusses related work.

1. https://github.com/yahoo/CaffeOnSpark
2. https://github.com/yahoo/TensorFlowOnSpark
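The reported speedup and layer-overhead figures can be turned into concrete time estimates with simple arithmetic. The sketch below is illustrative only: the 2.7x speedup, 15.5 percent, and 18.1 percent figures come from the evaluation above, while the 1,000-second total job time is a hypothetical example value chosen for the sake of the calculation.

```python
# Illustrative arithmetic based on the reported figures.
# TOTAL_TIME_S is a hypothetical example value, not a measured result.

TOTAL_TIME_S = 1000.0    # hypothetical end-to-end job time over IPoIB
RDMA_SPEEDUP = 2.7       # best-case RDMA speedup reported in Section 4
YARN_FRACTION = 0.155    # up to 15.5% spent in the YARN scheduler layer
SPARK_FRACTION = 0.181   # up to 18.1% consumed by Spark job execution

# Same workload over the RDMA-based design:
rdma_time_s = TOTAL_TIME_S / RDMA_SPEEDUP        # ~370.4 s

# Worst-case time attributable to the Big Data layers in the
# vertical breakdown of TensorFlowOnSpark:
yarn_overhead_s = TOTAL_TIME_S * YARN_FRACTION   # ~155 s
spark_overhead_s = TOTAL_TIME_S * SPARK_FRACTION # ~181 s

print(f"RDMA-based time:       {rdma_time_s:.1f} s")
print(f"YARN scheduler layer:  {yarn_overhead_s:.1f} s")
print(f"Spark execution layer: {spark_overhead_s:.1f} s")
```

The point of the breakdown is that roughly a third of the hypothetical job's time goes to scheduling and job-execution layers rather than to Deep Learning computation, which is the overhead the in-depth analysis in Section 5 quantifies.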