Experiences in Designing, Developing, Packaging, and Deploying the MVAPICH2 Libraries

Talk at E4S Forum (September ‘19) by

Hari Subramoni, The Ohio State University
E-mail: [email protected]

Web: http://www.cse.ohio-state.edu/~subramon
Follow us on https://twitter.com/mvapich

High-End Computing (HEC): PetaFlop to ExaFlop

100 PFlops in 2017 149 PFlops in 2018

1 EFlops in 2020-2021?

Expected to have an ExaFlop system in 2020-2021!

Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges

[Figure: co-design stack, from applications down to hardware]
• Application kernels/applications (HPC and DL)
• Programming models: MPI, PGAS (UPC, OpenSHMEM), CUDA, OpenMP, OpenACC, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication and runtime support for programming models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance/resilience
• Networking technologies (InfiniBand, 40/100/200 GigE, Aries, and Omni-Path), multi-/many-core architectures, and accelerators (GPU and FPGA)
• Co-design opportunities and challenges across the various layers

Designing (MPI+X) at Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
  – Low memory footprint
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading (see the MPI+OpenMP sketch below)
• Integrated support for accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-awareness
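For the multi-threading and MPI + OpenMP items above, here is a minimal, hypothetical MPI+OpenMP sketch (not taken from the MVAPICH2 sources) that requests MPI_THREAD_MULTIPLE so several OpenMP threads can make MPI calls:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Ask for full thread support so that multiple OpenMP threads may call MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (provided < MPI_THREAD_MULTIPLE && rank == 0)
        printf("warning: library granted thread level %d only\n", provided);

    #pragma omp parallel
    {
        /* Each thread could issue its own MPI calls here when the requested
         * MPI_THREAD_MULTIPLE level has been granted. */
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

How efficiently a library supports this thread level, alongside its point-to-point and collective paths, is exactly the kind of property the list above targets.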

Presentation Overview

• MVAPICH Project – MPI and PGAS Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• Public Cloud Deployment – Microsoft-Azure and Amazon-AWS
• Deployment Solutions
• Conclusions

Parallel Programming Models Overview

[Figure: three abstract machine models with processes P1-P3: Shared Memory Model (SHMEM, DSM), Distributed Memory Model (MPI), and Partitioned Global Address Space (PGAS) with logical shared memory (OpenSHMEM, UPC, Chapel, X10, CAF, …)]

• Programming models provide abstract machine models
• Models can be mapped onto different types of systems
  – e.g., Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance

Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM) since 2015
  – Used by more than 3,025 organizations in 89 countries
  – More than 589,000 (> 0.5 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Nov '18 ranking)
    • 3rd: 10,649,600-core Sunway TaihuLight at the National Supercomputing Center in Wuxi, China
    • 5th: 448,448-core Frontera at TACC
    • 8th: 391,680-core ABCI in Japan
    • 15th: 570,020-core Nurion in South Korea, and many others
  – Available with the software stacks of many vendors and distros (RedHat, SuSE, and OpenHPC)
  – http://mvapich.cse.ohio-state.edu
• Partner in the TACC Frontera system
• Empowering Top500 systems for over a decade

MVAPICH2 Release Timeline and Downloads

[Figure: MVAPICH2 release timeline (MV 0.9.0 in 2004 through MV2 2.3.2, MV2-X, MV2-GDR, MV2-Virt, MV2-Azure, MV2-AWS, and OSU INAM in 2019) plotted against the cumulative number of downloads, which now exceeds 500,000]

Architecture of MVAPICH2 Software Family

High Performance Parallel Programming Models

• Message Passing Interface (MPI)
• PGAS (UPC, OpenSHMEM, CAF, UPC++)
• Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)

High Performance and Scalable Communication Runtime (diverse APIs and mechanisms)

Point-to-point primitives, collectives algorithms, job startup, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, and introspection & analysis

Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path, Elastic Fabric Adapter) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)

Transport protocols and modern features (networking side): RC, SRD, UD, DC, UMR, ODP, SR-IOV, Multi-Rail; transport mechanisms and modern features (architecture side): Shared Memory, CMA, IVSHMEM, XPMEM, Optane*, NVLink, CAPI*

* Upcoming

Strong Procedure for Design, Development and Release

• Research is done to explore new designs
• Designs are first presented in conference/journal publications
• The best-performing designs are incorporated into the codebase
• Rigorous QA procedure before making a release
  – Exhaustive unit testing
  – Various test procedures on a diverse range of platforms and interconnects
  – Testing with 19 different benchmarks and applications including, but not limited to
    • OMB, IMB, MPICH Test Suite, Intel Test Suite, NAS, ScaLAPACK, and SPEC
  – About 18,000 core hours spent per commit
  – Performance regression and tuning
  – Application-based evaluation
  – Evaluation on large-scale systems (Lassen, Frontera, Summit, etc.)
• All versions (alpha, beta, RC1, and RC2) go through the above testing

MVAPICH2 Software Family

Requirements and Libraries

• MPI with IB, iWARP, Omni-Path, and RoCE – MVAPICH2
• Advanced MPI features/support, OSU INAM, PGAS, and MPI+PGAS with IB, Omni-Path, and RoCE – MVAPICH2-X
• MPI with IB, RoCE & GPU, and support for Deep Learning – MVAPICH2-GDR
• HPC Cloud with MPI & IB – MVAPICH2-Virt
• Energy-aware MPI with IB, iWARP, and RoCE – MVAPICH2-EA
• MPI Energy Monitoring Tool – OEMT
• InfiniBand Network Analysis and Monitoring – OSU INAM
• Microbenchmarks for Measuring MPI and PGAS Performance – OMB

Startup Performance on TACC Frontera

[Figure: MPI_Init time in milliseconds vs. number of processes (56 to 57,344) on Frontera; at full scale, Intel MPI 2019 takes 4.5s and MVAPICH2 2.3.2 takes 3.9s]

• MPI_Init takes 3.9 seconds on 57,344 processes on 1,024 nodes
• All numbers reported with 56 processes per node

• New designs available in MVAPICH2-2.3.2

One-way Latency: MPI over IB with MVAPICH2

[Figure: small- and large-message one-way latency of MVAPICH2 across several adapters (TrueScale-QDR, ConnectX-3-FDR, ConnectIB-DualFDR, ConnectX-4-EDR, Omni-Path, ConnectX-6 HDR); small-message latencies fall between roughly 1.0 and 1.2 us]

Test platforms:
• TrueScale-QDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, with IB switch
• ConnectX-3-FDR – 2.8 GHz deca-core (IvyBridge) Intel, PCI Gen3, with IB switch
• ConnectIB-DualFDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, with IB switch
• ConnectX-4-EDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, with IB switch
• Omni-Path – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, with Omni-Path switch
• ConnectX-6-HDR – 3.1 GHz deca-core (Haswell) Intel, PCI Gen3, with IB switch

Bandwidth: MPI over IB with MVAPICH2

[Figure: unidirectional bandwidth (up to 24,532 MB/s) and bidirectional bandwidth (up to 48,027 MB/s) of MVAPICH2 across the same adapters and platforms listed above]

Intra-node Point-to-Point Performance on OpenPOWER

[Figure: intra-socket small- and large-message latency, bandwidth, and bi-directional bandwidth of MVAPICH2-2.3.1 vs. SpectrumMPI-2019.02.07; MVAPICH2 achieves 0.22 us small-message latency]

Platform: two nodes of OpenPOWER (POWER9-ppc64le) CPUs using Mellanox EDR (MT4121) HCAs

Point-to-point: Latency & Bandwidth (Intra-socket) on Mayer

[Figure: small-, medium-, and large-message latency and bandwidth of MVAPICH2-X vs. OpenMPI+UCX within a socket on Mayer; MVAPICH2-X is up to 8.2x better]

Point-to-point: Latency & Bandwidth (Inter-socket) on Mayer

[Figure: small-, medium-, and large-message latency and bandwidth of MVAPICH2-X vs. OpenMPI+UCX across sockets on Mayer; MVAPICH2-X is 3.5x-8.3x better in latency and up to 5x better in bandwidth]

Shared Address Space (XPMEM)-based Collectives Design

• "Shared address space"-based true zero-copy reduction collective designs in MVAPICH2
• Offloaded computation/communication to peer ranks in reduction collective operations
• Up to 4X improvement for 4 MB Reduce and up to 1.8X improvement for 4 MB Allreduce
• Available since MVAPICH2-X 2.3rc1

[Figure: OSU_Allreduce and OSU_Reduce latency on 256 Broadwell processes for MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1 at 16 KB to 4 MB message sizes]

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.
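Applications do not need to change to benefit from these designs; the zero-copy path sits underneath the standard collective calls. A minimal OSU-style sketch of the large-message Allreduce being measured (illustrative buffer size and iteration count, not the actual OMB code):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    enum { COUNT = 512 * 1024, ITERS = 100 };    /* 512K doubles = 4 MB */
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *sendbuf = malloc(COUNT * sizeof(double));
    double *recvbuf = malloc(COUNT * sizeof(double));
    for (int i = 0; i < COUNT; i++)
        sendbuf[i] = rank + 1.0;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int it = 0; it < ITERS; it++)
        MPI_Allreduce(sendbuf, recvbuf, COUNT, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("4 MB MPI_Allreduce: %.1f us per call\n",
               (t1 - t0) * 1e6 / ITERS);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Run under MVAPICH2-X, the same call is serviced by the XPMEM-based design when it is available on the node.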

Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand

[Figure: P3DFFT time per loop (lower is better) and HPL performance in GFLOPS (higher is better) at 112-896 processes, PPN=28, for MVAPICH2 Async, MVAPICH2 Default, and Intel MPI 2019; the asynchronous design improves P3DFFT by 8-33% and HPL by 12-29%, with memory consumption at 69%]

• Up to 33% performance improvement for the P3DFFT application with 448 processes
• Up to 29% performance improvement for the HPL application with 896 processes

A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D.K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI 2018. Enhanced version accepted for PARCO Journal. Available since MVAPICH2-X 2.3rc1
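The application-side pattern that benefits is plain non-blocking MPI with independent computation between posting and completion; a minimal sketch (the ring partners and 8 MB buffer size are illustrative only):

#include <mpi.h>
#include <stdlib.h>

/* Stand-in for application computation that does not depend on the messages
 * in flight. */
static void compute_independent_work(double *a, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = a[i] * 2.0 + 1.0;
}

int main(int argc, char **argv)
{
    enum { COUNT = 1 << 20 };                    /* 1M doubles = 8 MB message */
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *sendbuf = calloc(COUNT, sizeof(double));
    double *recvbuf = calloc(COUNT, sizeof(double));
    double *work    = calloc(COUNT, sizeof(double));
    int right = (rank + 1) % size;               /* simple ring exchange */
    int left  = (rank - 1 + size) % size;

    MPI_Request reqs[2];
    MPI_Irecv(recvbuf, COUNT, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, COUNT, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* With asynchronous progress, the 8 MB transfer keeps moving while this
     * computation runs instead of waiting for the MPI_Waitall below. */
    compute_independent_work(work, COUNT);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    free(sendbuf); free(recvbuf); free(work);
    MPI_Finalize();
    return 0;
}

Without asynchronous progress, much of the transfer may only advance inside MPI_Waitall; with it, the transfer proceeds during compute_independent_work. The feature is controlled through a runtime parameter (MV2_ASYNC_PROGRESS in recent MVAPICH2-X user guides); check the guide for your release for the exact setting.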

SPEC MPI 2007 Benchmarks: Broadwell + InfiniBand

[Figure: execution time (s) of Intel MPI 18.1.163 vs. MVAPICH2-X on MILC, Leslie3D, POP2, LAMMPS, WRF2, and LU; improvements range from -12% to 31%]

• MVAPICH2-X outperforms Intel MPI by up to 31%

Configuration: 448 processes on 16 Intel E5-2680v4 (Broadwell) nodes with 28 PPN, interconnected with 100 Gbps Mellanox MT4115 EDR ConnectX-4 HCAs

GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU

• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from the GPU with RDMA transfers

At Sender:
    MPI_Send(s_devbuf, size, ...);

At Receiver:
    MPI_Recv(r_devbuf, size, ...);

(s_devbuf and r_devbuf are GPU device buffers; the data movement is handled inside MVAPICH2)

High Performance and High Productivity
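A slightly fuller sketch of the snippet above, with the device allocation made explicit (error checking omitted; buffer size and tag are arbitrary):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    const int count = 4 * 1024 * 1024;           /* 4M floats = 16 MB */
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) MPI_Abort(MPI_COMM_WORLD, 1);  /* needs at least two ranks */

    float *devbuf;                               /* device memory, not host memory */
    cudaMalloc((void **)&devbuf, count * sizeof(float));
    cudaMemset(devbuf, 0, count * sizeof(float));

    /* The device pointer goes straight into MPI; the CUDA-aware library
     * takes care of the GPU-to-GPU data movement. */
    if (rank == 0)
        MPI_Send(devbuf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(devbuf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}

Run it with at least two ranks and with CUDA support enabled in the library (e.g., MV2_USE_CUDA=1, the same parameter shown in the HOOMD-blue configuration later in this deck).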

Optimized MVAPICH2-GDR Design

[Figure: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth of MV2-GDR 2.3 vs. MV2 without GDR; latency drops to 1.85 us, with roughly 9x-11x improvements across the three metrics]

Platform: Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, and Mellanox OFED 4.0 with GPU-Direct-RDMA

Application-Level Evaluation (HOOMD-blue)

[Figure: average time steps per second (TPS) for 64K and 256K particles with MV2 vs. MV2+GDR on 4-32 processes; 2X improvement in both cases]

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland

[Figure: normalized execution time for default, callback-based, and event-based designs on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs)]

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
• Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
• On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the Cosmo application

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16

Presentation Overview

• MVAPICH Project – MPI and PGAS Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• Public Cloud Deployment – Microsoft-Azure and Amazon-AWS
• Deployment Solutions
• Conclusions

Deep Learning: New Challenges for MPI Runtimes

• Deep Learning frameworks are a different game altogether
  – Unusually large message sizes (on the order of megabytes)
  – Most communication based on GPU buffers
• Existing state-of-the-art
  – cuDNN, cuBLAS, NCCL --> scale-up performance
  – NCCL2, CUDA-Aware MPI --> scale-out performance
    • For small and medium message sizes only!
• Proposed: can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
  – Efficient overlap of computation and communication
  – Efficient large-message communication (reductions)
  – What application co-designs are needed to exploit communication-runtime co-designs?

[Figure: scale-up vs. scale-out performance plane positioning cuDNN, MKL-DNN, NCCL, NCCL2, MPI, gRPC, and Hadoop relative to the desired design point]

A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)

High-Performance Deep Learning

• CPU-based Deep Learning
• GPU-based Deep Learning

Performance of CNTK with MVAPICH2-X on CPU-based Deep Learning

[Figure: CNTK AlexNet training execution time (B.S=default, iterations=50, ppn=28) for Intel MPI, MVAPICH2, and MVAPICH2-XPMEM at 28-224 processes; 9-20% improvement]

• CPU-based training of the AlexNet neural network using the ImageNet ILSVRC2012 dataset
• Advanced XPMEM-based designs show up to 20% benefit over Intel MPI (IMPI) for CNTK DNN training using Allreduce
• The proposed designs show good scalability with increasing system size
• Available since the MVAPICH2-X 2.3rc1 release

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. K. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, 32nd IEEE International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018

Deep Learning on Frontera

• TensorFlow, PyTorch, and MXNet are widely used Deep Learning frameworks
• Optimized by Intel using the Math Kernel Library for DNN (MKL-DNN) for Intel processors
• Single-node performance can be improved by running multiple MPI processes

[Figures: impact of batch size on ResNet-50 performance, and performance improvement from using multiple MPI processes per node]

A. Jain et al., Scaling Deep Learning Frameworks on Frontera using MVAPICH2 MPI, under review

Deep Learning on Frontera

• Observed 260K images per second for ResNet-50 on 2,048 nodes
• Scaled MVAPICH2-X to 2,048 nodes on Frontera for distributed training using TensorFlow
• ResNet-50 can be trained in 7 minutes on 2,048 nodes (114,688 cores)

A. Jain et al., Scaling Deep Learning Frameworks on Frontera using MVAPICH2 MPI, under review

MVAPICH2-GDR: Enhanced MPI_Allreduce at Scale

• Optimized designs in the upcoming MVAPICH2-GDR offer better performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on up to 1,536 GPUs
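The operation being compared is a plain MPI_Allreduce issued directly on GPU buffers, as a DL framework does for gradient aggregation; a minimal sketch (the 128 MB size matches the message size highlighted in the figure below, and the buffer name is hypothetical):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    const int count = 32 * 1024 * 1024;          /* 32M floats = 128 MB */
    MPI_Init(&argc, &argv);

    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    (void)nranks;

    float *gradients;                            /* hypothetical gradient buffer */
    cudaMalloc((void **)&gradients, count * sizeof(float));
    cudaMemset(gradients, 0, count * sizeof(float));

    /* Sum across all ranks, in place, directly on the device buffer; the
     * CUDA-aware library handles the GPU pointers internally. */
    MPI_Allreduce(MPI_IN_PLACE, gradients, count, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);

    /* A framework would now scale the result by 1.0f / nranks to average. */

    cudaFree(gradients);
    MPI_Finalize();
    return 0;
}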

[Figure: Allreduce latency and bandwidth on up to 1,536 GPUs for SpectrumMPI 10.2.0.11, OpenMPI 4.0.1, NCCL 2.4, and MVAPICH2-GDR-2.3.2; MVAPICH2-GDR is 1.6X-1.7X better than NCCL 2.4 for 128 MB messages]

Platform: dual-socket IBM POWER9 CPUs, 6 NVIDIA Volta V100 GPUs, and a 2-port InfiniBand EDR interconnect

Distributed Training with TensorFlow and MVAPICH2-GDR

• ResNet-50 training using the TensorFlow benchmark on Summit with up to 1,536 Volta GPUs!
• ImageNet-1k has 1,281,167 (1.2 million) images
• MVAPICH2-GDR reaches ~0.35 million images per second for ImageNet-1k
• Time/epoch = 3.6 seconds
• Total time (90 epochs) = 3.6 x 90 ≈ 332 seconds = 5.5 minutes!

[Figure: images per second (thousands) on 1-1,536 GPUs for NCCL-2.4 vs. MVAPICH2-GDR-2.3.2; errors were observed for NCCL2 beyond 96 GPUs]

Platform: Summit (#1 on Top500.org) – 6 NVIDIA Volta GPUs per node connected with NVLink, CUDA 9.2

MVAPICH2-GDR on Container with Negligible Overhead

[Figure: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth with MVAPICH2-GDR-2.3.2 running in a Docker container vs. natively; near-identical performance]

Platform: Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox Connect-X4 EDR HCA, CUDA 9.0, and Mellanox OFED 4.0 with GPU-Direct-RDMA

Presentation Overview

• MVAPICH Project – MPI and PGAS Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning
• Public Cloud Deployment – Microsoft-Azure and Amazon-AWS
• Deployment Solutions
• Conclusions

MVAPICH2-Azure 2.3.2

• Released on 08/16/2019
• Major features and enhancements
  – Based on MVAPICH2-2.3.2
  – Enhanced tuning for point-to-point and collective operations
  – Targeted for Azure HB & HC virtual machine instances
  – Flexibility for 'one-click' deployment
  – Tested with Azure HB & HC VM instances

Performance of Radix

[Figure: Radix total execution time (lower is better) on Azure HC and HB instances for MVAPICH2-X vs. HPCx at 60-240 processes (nodes x PPN); MVAPICH2-X is 38% faster on HC and 3x faster on HB]


MVAPICH2-X-AWS 2.3

• Released on 08/12/2019
• Major features and enhancements
  – Based on MVAPICH2-X 2.3
  – New design based on the Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol
  – Support for XPMEM-based intra-node communication for point-to-point and collectives
  – Enhanced tuning for point-to-point and collective operations
  – Targeted for AWS instances with the Amazon Linux 2 AMI and EFA support
  – Tested with the c5n.18xlarge instance

Application Performance

[Figure: miniGhost and CloverLeaf execution time at 72-288 processes (nodes x PPN) for MV2X (UD and SRD) vs. OpenMPI; MV2X is 10% better for miniGhost and 27.5% better for CloverLeaf]

• Up to 10% performance improvement for MiniGhost on 8 nodes

• Up to 27% better performance with CloverLeaf on 8 nodes

S. Chakraborty, S. Xu, H. Subramoni and D. K. Panda, Designing Scalable and High-Performance MPI Libraries on Amazon Elastic Fabric Adapter, Hot Interconnects, 2019

Presentation Overview

• MVAPICH Project – MPI and PGAS Library with CUDA-Awareness
• HiBD Project – High-Performance Big Data Analytics Library
• HiDL Project – High-Performance Deep Learning
• Public Cloud Deployment – Microsoft-Azure and Amazon-AWS
• Deployment Solutions – RPM, OpenHPC, Spack
• Conclusions

RPM and Debian Deployments

• Provide customized RPMs for different system requirements
  – ARM, Power8, Power9, x86 (Intel and AMD)
  – Different versions of compilers (ICC, PGI, GCC, XLC, ARM), CUDA, and OFED/Intel IFS

SPACK: Solving complexities in Software installation

• A flexible package manager that supports multiple versions, configurations, platforms, and compilers
• Automates all package-related processes:
  – Source download
  – Checksum verification
  – Configuration
  – Build
  – Installation
  – Testing (very basic)

How to install Spack:
    $ git clone https://github.com/spack/spack
    $ . spack/share/spack/setup-env.sh

How to install a package (MVAPICH2):
    $ spack install mvapich2 fabrics=mrail

Running applications through Spack (AMG):
    $ spack install amg %gcc ^[email protected] fabrics=mrail
    $ spack cd -i amg                # go to the AMG install folder
    $ spack find -p [email protected]     # set your MPI_HOME from the reported path
    $ $MPI_HOME/bin/mpirun_rsh -np 56 -hostfile hosts ./bin/amg

SPACK: Solving complexities in Software installation

How to install Spack:
    $ git clone https://github.com/spack/spack
    $ . spack/share/spack/setup-env.sh

How to install a package (MVAPICH2):
    $ spack install mvapich2

Test with MVAPICH2 OMB
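OMB provides ready-made tests such as osu_latency and osu_bw for this step; a minimal ping-pong in the same spirit (not the actual OMB source; message size and iteration counts are illustrative) looks like this:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    enum { SIZE = 8, ITERS = 10000, SKIP = 100 };
    char *buf = malloc(SIZE);
    double t0 = 0.0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < ITERS + SKIP; i++) {
        if (i == SKIP)                           /* discard warm-up iterations */
            t0 = MPI_Wtime();
        if (rank == 0) {
            MPI_Send(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0)
        printf("%d-byte one-way latency: %.2f us\n", SIZE,
               (MPI_Wtime() - t0) * 1e6 / (2.0 * ITERS));

    free(buf);
    MPI_Finalize();
    return 0;
}

Launching it with two ranks on two different hosts, for example with mpirun_rsh as in the AMG example above, exercises the inter-node path.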

Concluding Remarks

• Upcoming Exascale systems need to be designed with a holistic view of HPC, Deep Learning, and Cloud
• Presented an overview of designing convergent software stacks
• Presented solutions that enable the HPC and Deep Learning communities to take advantage of current and next-generation systems
• Presented approaches to deploying these solutions on traditional HPC and Cloud systems

Commercial Support for MVAPICH2, HiBD, and HiDL Libraries

• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
  – Help and guidance with installation of the library
  – Platform-specific optimizations and tuning
  – Timely support for operational issues encountered with the library
  – Web portal interface to submit issues and track their progress
  – Advanced debugging techniques
  – Application-specific optimizations and tuning
  – Obtaining guidelines on best practices
  – Periodic information on major fixes and updates
  – Information on major releases
  – Help with upgrading to the latest release
  – Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years

Silver ISV Member for the OpenPOWER Consortium + Products

• Has joined the OpenPOWER Consortium as a silver ISV member
• Provides flexibility:
  – To have the MVAPICH2, HiDL, and HiBD libraries integrated into the OpenPOWER software stack
  – To be a part of the OpenPOWER ecosystem
  – To participate with different vendors for bidding, installation, and deployment
• Introduced two new integrated products with support for OpenPOWER systems (presented at the OpenPOWER North America Summit)
  – X-ScaleHPC
  – X-ScaleAI
  – Send an e-mail to [email protected] for a free trial!

Funding Acknowledgments

Funding support by [sponsor logos]

Equipment support by [vendor logos]

Personnel Acknowledgments

Current Students (Graduate): A. Awan (Ph.D.), M. Bayatpour (Ph.D.), C.-H. Chu (Ph.D.), J. Hashmi (Ph.D.), A. Jain (Ph.D.), K. S. Kandadi (M.S.), K. S. Khorassani (Ph.D.), P. Kousha (Ph.D.), A. Quentin (Ph.D.), Kamal Raj (M.S.), B. Ramesh (M.S.), S. Xu (M.S.), Q. Zhou (Ph.D.)
Current Students (Undergraduate): V. Gangal (B.S.), N. Sarkauskas (B.S.)
Current Research Scientist: H. Subramoni
Current Post-docs: M. S. Ghazimeersaeed, A. Ruhela, K. Manian
Current Research Specialist: J. Smith

Past Students: A. Augustine (M.S.), P. Balaji (Ph.D.), S. Bhagvat (M.S.), A. Bhat (M.S.), R. Biswas (M.S.), D. Buntinas (Ph.D.), L. Chai (Ph.D.), B. Chandrasekharan (M.S.), S. Chakraborthy (Ph.D.), N. Dandapanthula (M.S.), V. Dhanraj (M.S.), T. Gangadharappa (M.S.), K. Gopalakrishnan (M.S.), W. Huang (Ph.D.), W. Jiang (M.S.), J. Jose (Ph.D.), K. Kandalla (Ph.D.), S. Kini (M.S.), M. Koop (Ph.D.), S. Krishnamoorthy (M.S.), K. Kulkarni (M.S.), R. Kumar (M.S.), P. Lai (M.S.), M. Li (Ph.D.), J. Liu (Ph.D.), M. Luo (Ph.D.), A. Mamidala (Ph.D.), G. Marsh (M.S.), V. Meshram (M.S.), A. Moody (M.S.), S. Naravula (Ph.D.), R. Noronha (Ph.D.), X. Ouyang (Ph.D.), S. Pai (M.S.), S. Potluri (Ph.D.), R. Rajachandrasekar (Ph.D.), G. Santhanaraman (Ph.D.), D. Shankar (Ph.D.), A. Singh (Ph.D.), J. Sridhar (M.S.), H. Subramoni (Ph.D.), S. Sur (Ph.D.), K. Vaidyanathan (Ph.D.), A. Vishnu (Ph.D.), J. Wu (Ph.D.), W. Yu (Ph.D.), J. Zhang (Ph.D.)
Past Research Scientists: K. Hamidouche, S. Sur, X. Lu
Past Programmers: D. Bureddy, J. Perkins
Past Research Specialist: M. Arnold
Past Post-Docs: D. Banerjee, X. Besseron, H.-W. Jin, J. Lin, M. Luo, E. Mancini, S. Marcarelli, J. Vienne, H. Wang

Thank You!
[email protected]

Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
Follow us on https://twitter.com/mvapich

The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/
The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/
