The MVAPICH2 Project: Latest Developments and Plans Towards Exascale Computing

Presentation at Mellanox Theatre (SC '19) by

Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

Drivers of Modern HPC Cluster Architectures

[Figure: high-performance interconnects (InfiniBand: <1 usec latency, 200 Gbps bandwidth), multi-core processors, accelerators/coprocessors (high compute density, high performance/watt, >1 TFlop DP on a chip), and SSD, NVMe-SSD, and NVRAM storage]

• Multi-core/many-core technologies
• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
• Solid State Drives (SSDs), Non-Volatile Random-Access Memory (NVRAM), NVMe-SSDs
• Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)
• Available on HPC clouds, e.g., Amazon EC2, NSF Chameleon, and Microsoft Azure

[Images: Summit, Sierra, Sunway TaihuLight, and the K Computer]

MPI+X Programming Model: Broad Challenges at Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for GPGPUs and FPGAs
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + UPC++, MPI + OpenSHMEM, CAF, ...); a minimal hybrid sketch follows this list
• Virtualization
• Energy-awareness
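The hybrid MPI+PGAS model above lets one program mix two-sided MPI with one-sided PGAS operations over a single runtime. The following is a minimal sketch only, assuming a unified MPI + OpenSHMEM runtime such as the one MVAPICH2-X provides; it uses standard MPI and OpenSHMEM 1.2+ calls, and the initialization/finalization order shown is common practice rather than a requirement stated here, so consult the library's user guide for its exact rules.

    /* Hybrid MPI + OpenSHMEM sketch: one-sided puts into a symmetric array
     * plus an ordinary two-sided MPI reduction in the same program. */
    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        shmem_init();

        int me = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric allocation: same-sized, remotely accessible array on every PE */
        long *slots = (long *) shmem_malloc(npes * sizeof(long));
        long myval = me;

        shmem_long_put(&slots[me], &myval, 1, 0);   /* one-sided put into PE 0's array */
        shmem_barrier_all();

        long sum = 0;
        MPI_Reduce(&myval, &sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (me == 0)
            printf("PE 0 holds %d slots; MPI sum = %ld\n", npes, sum);

        shmem_free(slots);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }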

Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1); started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM), available since 2015
  – Used by more than 3,050 organizations in 89 countries
  – More than 615,000 (> 0.6 million) downloads directly from the OSU site
  – Empowering many TOP500 clusters (Jun '19 ranking):
    • 3rd: 10,649,600-core Sunway TaihuLight at the National Supercomputing Center in Wuxi, China
    • 5th: 448,448-core Frontera at TACC
    • 8th: 391,680-core ABCI in Japan
    • 15th: 570,020-core Nurion in South Korea
    • and many others
  – Available with the software stacks of many vendors and Linux distros (RedHat, SuSE, and OpenHPC)

  – http://mvapich.cse.ohio-state.edu
  – Partner in the TACC Frontera system
• Empowering Top500 systems for over a decade

MVAPICH2 Release Timeline and Downloads

[Chart: cumulative number of downloads from 2004 to 2019, exceeding 600,000, annotated with release milestones from MV 0.9.4/0.9.8 and MV2 0.9.0 through MV2 2.3.2, plus MV2-X, MV2-GDR 2.3.2, MV2-MIC 2.0, MV2-Virt 2.2, MV2-Azure 2.3.2, MV2-AWS 2.3, and OSU INAM 0.9.3]

Architecture of MVAPICH2 Software Family (HPC and DL)

[Diagram: layered software architecture]

High Performance Parallel Programming Models
  • Message Passing Interface (MPI)
  • PGAS (UPC, OpenSHMEM, CAF, UPC++)
  • Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk)

High Performance and Scalable Communication Runtime: Diverse APIs and Mechanisms
  • Point-to-point primitives, collectives algorithms, job startup, remote memory access, energy-awareness, I/O and file systems, fault tolerance, virtualization, active messages, introspection & analysis

Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path, Elastic Fabric Adapter)
  • Transport protocols: RC, SRD, UD, DC, UMR, ODP
  • Modern features: SR-IOV, Multi-Rail

Support for Modern Multi-/Many-core Architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
  • Transport mechanisms: Shared Memory, CMA, IVSHMEM, XPMEM
  • Modern features: Optane*, NVLink, CAPI*

* Upcoming

MVAPICH2 Software Family

Requirements                                                              Library
MPI with IB, iWARP, Omni-Path, and RoCE                                   MVAPICH2
Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS
  with IB, Omni-Path, and RoCE                                            MVAPICH2-X
MPI with IB, RoCE & GPU and support for Deep Learning                     MVAPICH2-GDR
HPC Cloud with MPI & IB                                                   MVAPICH2-Virt
Energy-aware MPI with IB, iWARP and RoCE                                  MVAPICH2-EA
MPI Energy Monitoring Tool                                                OEMT
InfiniBand Network Analysis and Monitoring                                OSU INAM
Microbenchmarks for Measuring MPI and PGAS Performance                    OMB

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring tool
• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 1:30-2:00 pm)

One-way Latency: MPI over IB with MVAPICH2

[Charts: small-message and large-message one-way latency (us) vs. message size (bytes) for TrueScale-QDR, ConnectX-3-FDR, ConnectIB-DualFDR, ConnectX-4-EDR, Omni-Path, and ConnectX-6 HDR; small-message latencies fall between 1.01 us and 1.19 us across the adapters]

Platforms:
  TrueScale-QDR     - 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
  ConnectX-3-FDR    - 2.8 GHz deca-core (IvyBridge) Intel, PCIe Gen3, with IB switch
  ConnectIB-DualFDR - 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
  ConnectX-4-EDR    - 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
  Omni-Path         - 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with Omni-Path switch
  ConnectX-6-HDR    - 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
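For context, one-way latencies like those above are typically measured with an osu_latency-style ping-pong and reported as half the round-trip time. The following is an illustrative sketch only, using standard MPI calls; it is not the OSU Micro-Benchmarks source:

    /* Minimal ping-pong: reports one-way latency as half the average round trip. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000, skip = 100;      /* warm-up iterations excluded */
        const int size = 8;                       /* message size in bytes */
        char *buf = malloc(size);
        double t0 = 0.0;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < iters + skip; i++) {
            if (i == skip)
                t0 = MPI_Wtime();                 /* start timing after warm-up */
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0)
            printf("%d bytes: %.2f us one-way\n", size,
                   (MPI_Wtime() - t0) * 1e6 / (2.0 * iters));

        free(buf);
        MPI_Finalize();
        return 0;
    }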

Bandwidth: MPI over IB with MVAPICH2

[Charts: unidirectional and bidirectional bandwidth (MBytes/sec) vs. message size (bytes) for the same six adapters and platforms as above; peak unidirectional bandwidths range from 3,373 to 24,532 MB/s and peak bidirectional bandwidths from 6,228 to 48,027 MB/s across the adapters]

Intra-node Point-to-Point Performance on OpenPOWER

[Charts: intra-socket small- and large-message latency plus uni- and bi-directional bandwidth for MVAPICH2-X 2.3.2 vs. SpectrumMPI 10.3.0.01; MVAPICH2-X achieves 0.25 us small-message intra-socket latency]

Platform: two OpenPOWER (POWER9, ppc64le) nodes using a Mellanox EDR (MT4121) HCA

Point-to-point: Latency & Bandwidth (Intra-socket) on ARM

[Charts: latency (small, medium, and large messages) and bandwidth (small, medium, and large messages) vs. message size for MVAPICH2-X-Next, HPCX, and OpenMPI+UCX; MVAPICH2-X-Next delivers up to 70% better bandwidth]

Startup Performance on TACC Frontera

[Chart: MPI_Init time (milliseconds) vs. number of processes, from 56 to 57,344; at full scale Intel MPI 2019 takes 4.5 s and MVAPICH2 2.3.2 takes 3.9 s]

• MPI_Init takes 3.9 seconds for 57,344 processes on 1,024 nodes
• All numbers reported with 56 processes per node
• New designs available in MVAPICH2 2.3.2

Bcast with RDMA_CM Hardware Multicast on Frontera

[Chart: MPI_Bcast latency (us) vs. number of nodes (8 to 512) for message sizes of 16 B, 256 B, 4 KB, 64 KB, and 512 KB]

• MPI_Bcast shows flat scalability as the number of nodes increases
• All numbers reported with 56 processes per node

Evaluation of SHArP-based Non-Blocking Allreduce

MPI_Iallreduce Benchmark

[Charts: pure communication latency (us, lower is better) and communication-computation overlap (%, higher is better) for 4-128 byte messages at 1 PPN* on 8 nodes; MVAPICH2-SHArP reduces pure communication latency by up to 2.3x over MVAPICH2 and achieves much higher overlap]

• Completely offloading the Allreduce operation to the switch enables much higher overlap of communication and computation (a minimal overlap sketch follows below)
• Available since MVAPICH2 2.3a

*PPN: Processes Per Node
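The overlap pattern this benchmark exercises is plain non-blocking MPI; the switch-based offload itself is a library/runtime capability (e.g., MV2_ENABLE_SHARP=1 on the following slide). A minimal sketch using standard MPI-3 calls:

    /* Overlap computation with a non-blocking Allreduce.  With switch offload
     * enabled in the library, the reduction can progress in the network while
     * do_local_work() runs on the CPU. */
    #include <mpi.h>
    #include <stdio.h>

    static void do_local_work(void) { /* application computation to overlap */ }

    int main(int argc, char **argv)
    {
        double local = 1.0, global = 0.0;
        MPI_Request req;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                       MPI_COMM_WORLD, &req);   /* start the reduction */
        do_local_work();                        /* overlapped, independent work */
        MPI_Wait(&req, MPI_STATUS_IGNORE);      /* complete before using 'global' */

        if (rank == 0)
            printf("sum = %f\n", global);

        MPI_Finalize();
        return 0;
    }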

Benefits of SHARP Allreduce at Application Level

[Chart: average DDOT Allreduce time of HPCG (seconds) at (number of nodes, PPN) = (4,28), (8,28), and (16,28); SHARP reduces latency by up to 12% over default MVAPICH2]

SHARP support available since MVAPICH2 2.3a

Parameter             Description                        Default
MV2_ENABLE_SHARP=1    Enables SHARP-based collectives    Disabled
--enable-sharp        Configure flag to enable SHARP     Disabled

• Refer to the "Running Collectives with Hardware-based SHARP support" section of the MVAPICH2 user guide for more information
• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3-userguide.html#x1-990006.26

Optimized CMA-based Collectives for Large Messages

[Charts: MPI_Gather latency (us) on KNL nodes at 64 PPN for 2 nodes/128 procs, 4 nodes/256 procs, and 8 nodes/512 procs, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and the tuned CMA design; the tuned CMA design is ~2.5x, ~3.2x, and ~4x better than MVAPICH2-2.3a at the three scales, and up to ~17x better than Intel MPI 2017]

• Significant improvement over the existing implementation for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)

S. Chakraborty, H. Subramoni, and D. K. Panda, Contention-Aware Kernel-Assisted MPI Collectives for Multi-/Many-core Systems, IEEE Cluster '17 (Best Paper Finalist). Available since MVAPICH2-X 2.3b.

Shared Address Space (XPMEM)-based Collectives Design

[Charts: OSU_Allreduce and OSU_Reduce latency (us) for 16 KB to 4 MB messages on Broadwell with 256 processes, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1; up to 1.8X (Allreduce) and 4X (Reduce) improvement at 4 MB]

• "Shared Address Space"-based true zero-copy reduction collective designs in MVAPICH2
• Computation/communication offloaded to peer ranks in the reduction operation
• Up to 4X improvement for 4 MB Reduce and up to 1.8X improvement for 4 MB Allreduce

J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018. Available in MVAPICH2-X 2.3rc1.

Minimizing Memory Footprint by Direct Connect (DC) Transport

[Diagram: processes P0-P7 on nodes 0-3 communicating over an IB network using DC]

• Constant connection cost (one QP for any peer)
• Full feature set (RDMA, atomics, etc.)
• Separate objects for send (DC Initiator) and receive (DC Target)
  – DC Target identified by "DCT Number"
  – Messages routed with (DCT Number, LID)
  – Requires the same "DC Key" to enable communication
• Available since MVAPICH2-X 2.2a

[Charts: connection memory (KB) for Alltoall with 80-640 processes and normalized execution time of NAMD (Apoa1, large data set) with 160-620 processes, comparing RC, DC-Pool, UD, and XRC; RC connection memory grows with process count (up to 97 KB) while DC-Pool stays nearly constant]

H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand: Early Experiences, IEEE International Supercomputing Conference (ISC '14)

User-mode Memory Registration (UMR)

• Introduced by Mellanox to support direct local and remote non-contiguous memory access
• Avoids packing at the sender and unpacking at the receiver (see the datatype sketch below)
• Available since MVAPICH2-X 2.2b

[Charts: small/medium- and large-message latency (us) vs. message size (4 KB to 16 MB) for non-contiguous datatype transfers, UMR vs. Default]

Platform: Connect-IB (54 Gbps), 2.8 GHz dual ten-core (IvyBridge) Intel, PCIe Gen3, with Mellanox IB FDR switch

M. Li, H. Subramoni, K. Hamidouche, X. Lu and D. K. Panda, "High Performance MPI Datatype Support with User-mode Memory Registration: Challenges, Designs and Benefits", CLUSTER 2015
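The kind of transfer UMR accelerates is a non-contiguous one described by an MPI derived datatype. A minimal sketch with standard MPI calls (illustrative only; whether the library uses UMR underneath is a build/runtime property of MVAPICH2-X, not of this code):

    /* Send one column of a row-major matrix without manual packing.
     * A UMR-capable library can describe this strided layout to the HCA
     * and avoid intermediate copies at the sender and receiver. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        enum { ROWS = 1024, COLS = 1024 };
        double *matrix = malloc(sizeof(double) * ROWS * COLS);
        MPI_Datatype column;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* One column: ROWS blocks of 1 double, separated by a stride of COLS */
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0)
            MPI_Send(&matrix[0], 1, column, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(&matrix[0], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Type_free(&column);
        MPI_Finalize();
        free(matrix);
        return 0;
    }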

On-Demand Paging (ODP)

• Applications no longer need to pin down the underlying physical pages
• Memory Regions (MRs) are never pinned by the OS
  – Paged in by the HCA when needed
  – Paged out by the OS when reclaimed
• ODP can be divided into two classes
  – Explicit ODP: applications still register memory buffers for communication, but this operation defines access control for I/O rather than pinning down the pages
  – Implicit ODP: applications are given a special memory key that represents their complete address space and do not need to register any virtual address range
• Advantages
  – Simplifies programming
  – Unlimited MR sizes
  – Physical memory optimization

[Chart: execution time (s) of applications with 64 processes, pin-down vs. ODP]

M. Li, K. Hamidouche, X. Lu, H. Subramoni, J. Zhang, and D. K. Panda, "Designing MPI Library with On-Demand Paging (ODP) of InfiniBand: Challenges and Benefits", SC 2016. Available since MVAPICH2-X 2.3b.
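At the verbs level, ODP is requested at registration time; the MPI library does this internally, but a minimal sketch of the mechanism (standard libibverbs calls; error handling is trimmed, and the registration simply fails if the HCA lacks ODP support) looks like this:

    /* Register a large buffer with on-demand paging instead of pinning it. */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) { fprintf(stderr, "no IB device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 1UL << 30;                    /* 1 GB, not pinned up front */
        void *buf = malloc(len);

        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE |
                                       IBV_ACCESS_ON_DEMAND);  /* ODP registration */
        if (!mr)
            perror("ibv_reg_mr with ODP");
        else
            ibv_dereg_mr(mr);

        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }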

Impact of Zero-Copy MPI Message Passing using HW Tag Matching (Point-to-point)

[Charts: osu_latency (us) vs. message size for the eager range (0-8 KB) and the rendezvous range (32 KB-4 MB), comparing MVAPICH2 and MVAPICH2+HW-TM; up to 35% improvement]

Removing intermediate buffering/copies yields up to 35% lower latency for medium messages on TACC Frontera (a tag-matching sketch follows below).
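What hardware tag matching offloads is the matching of incoming messages against the receiver's posted, tagged receives. The sketch below is illustrative only and uses standard MPI calls; whether matching actually happens in the HCA is a property of the library build and adapter, not of the application code:

    /* Pre-posted tagged receives: data can land directly in the right user
     * buffer regardless of send order, which is the matching work that
     * hardware tag matching moves off the CPU. */
    #include <mpi.h>

    #define N (64 * 1024)

    int main(int argc, char **argv)
    {
        static double a[N], b[N];
        MPI_Request reqs[2];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 1) {
            /* Pre-post both receives before the data arrives */
            MPI_Irecv(a, N, MPI_DOUBLE, 0, /* tag */ 1, MPI_COMM_WORLD, &reqs[0]);
            MPI_Irecv(b, N, MPI_DOUBLE, 0, /* tag */ 2, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        } else if (rank == 0) {
            /* Sends may complete in either order; tags steer them to the right buffer */
            MPI_Isend(b, N, MPI_DOUBLE, 1, 2, MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(a, N, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }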

Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand

[Charts: P3DFFT time per loop (seconds, lower is better) for 56-448 processes and HPL normalized performance in GFLOPS (higher is better) for 224-896 processes at 28 PPN, comparing MVAPICH2-X Async, MVAPICH2-X Default, and Intel MPI 18.1.163; memory consumption for the HPL runs = 69%]

• Up to 44% performance improvement in the P3DFFT application with 448 processes
• Up to 19% and 9% performance improvement in the HPL application with 448 and 896 processes (a minimal overlap sketch follows the citation below)

A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI 2018. Available in MVAPICH2-X 2.3rc1.
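The pattern that benefits from asynchronous progress is a non-blocking transfer followed by a long stretch of computation that makes no MPI calls. A minimal sketch with standard MPI calls (how the overlap is actually achieved, and how the feature is enabled, are properties of the MVAPICH2-X runtime rather than of this code):

    /* Large non-blocking exchange overlapped with computation.  Without
     * asynchronous progress, the rendezvous transfer may only advance once
     * MPI_Wait is entered; with it, the transfer proceeds during compute(). */
    #include <mpi.h>
    #include <stdlib.h>

    static void compute(void) { /* long application kernel with no MPI calls */ }

    int main(int argc, char **argv)
    {
        const int n = 1 << 24;                    /* large (rendezvous) message */
        double *buf = malloc(n * sizeof(double));
        MPI_Request req = MPI_REQUEST_NULL;
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank % 2 == 0 && rank + 1 < size)     /* even ranks send to the next rank */
            MPI_Isend(buf, n, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD, &req);
        else if (rank % 2 == 1)                   /* odd ranks receive from the previous rank */
            MPI_Irecv(buf, n, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, &req);

        compute();                                /* overlap window */
        MPI_Wait(&req, MPI_STATUS_IGNORE);        /* no-op for MPI_REQUEST_NULL */

        free(buf);
        MPI_Finalize();
        return 0;
    }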

Evaluation of Applications on Frontera (Cascade Lake + HDR100)

[Charts: MIMD Lattice Computation (MILC) CG time for 13,824 to 69,984 processes at PPN=54, and WRF2 time per step for 7,168 to 28,672 processes at PPN=56]

The performance of the MILC and WRF2 applications scales well with increasing system size.

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring tool
• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 10:30 am)

Can HPC and Virtualization be Combined?

• Virtualization has many benefits
  – Fault-tolerance
  – Job migration
  – Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
  – OpenStack, Docker, and Singularity

J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters?, EuroPar '14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14
J. Zhang, X. Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: An Efficient Approach to Build HPC Clouds, CCGrid '15

Application-Level Performance on Chameleon

[Charts: Graph500 execution time (ms) for problem sizes (scale, edgefactor) of (22,20) through (26,16), and SPEC MPI2007 execution time (s) for milc, leslie3d, pop2, GAPgeofem, zeusmp2, and lu, comparing MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native]

• 32 VMs, 6 cores/VM
• Compared to native, 2-5% overhead for Graph500 with 128 processes
• Compared to native, 1-9.5% overhead for SPEC MPI2007 with 128 processes

Performance of Radix on Microsoft Azure

[Charts: total execution time (seconds, lower is better) of Radix on HC and HB instances for 60 (1x60) to 240 (4x60) processes (nodes x PPN), comparing MVAPICH2-X and HPCx; MVAPICH2-X is up to 38% faster on HC and up to 3x faster on HB]

Amazon Elastic Fabric Adapter (EFA)

• Enhanced version of the Elastic Network Adapter (ENA)
• Allows OS bypass, up to 100 Gbps bandwidth
• New QP type: Scalable Reliable Datagram (SRD)
• Network-aware multi-path routing for low tail latency
• Guaranteed delivery, no ordering guarantee
• Exposed through verbs and libfabric interfaces

Feature                    UD      SRD
Send/Recv                  ✔       ✔
Send w/ Immediate          ✖       ✖
RDMA Read/Write/Atomic     ✖       ✖
Scatter Gather Lists       ✔       ✔
Shared Receive Queue       ✖       ✖
Reliable Delivery          ✖       ✔
Ordering                   ✖       ✖
Inline Sends               ✖       ✖
Global Routing Header      ✔       ✖
MTU Size                   4 KB    8 KB

[Charts: verbs-level latency (us) and unidirectional message rate (million msg/sec) vs. message size (2-2048 bytes) for UD and SRD; latencies of about 15.69-16.95 us and message rates of about 1.77-2.02 million msg/sec]
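Since EFA is exposed through libfabric as well as verbs, a middleware or application can discover an SRD-capable reliable-datagram endpoint with a standard fi_getinfo query. A hedged sketch using standard libfabric 1.x calls; the "efa" provider name is libfabric's provider for this adapter, and error handling is trimmed:

    /* Query libfabric for a reliable-datagram (FI_EP_RDM) endpoint on the EFA provider. */
    #include <rdma/fabric.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct fi_info *hints = fi_allocinfo();
        struct fi_info *info = NULL;

        hints->ep_attr->type = FI_EP_RDM;               /* reliable, unordered datagrams */
        hints->fabric_attr->prov_name = strdup("efa");  /* restrict to the EFA provider */

        int ret = fi_getinfo(FI_VERSION(1, 7), NULL, NULL, 0, hints, &info);
        if (ret == 0 && info) {
            printf("provider: %s, max_msg_size: %zu\n",
                   info->fabric_attr->prov_name, info->ep_attr->max_msg_size);
            fi_freeinfo(info);
        } else {
            fprintf(stderr, "fi_getinfo failed: %d\n", ret);
        }
        fi_freeinfo(hints);
        return 0;
    }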

MPI-level Performance with SRD

[Charts: Scatterv, Gatherv, and Allreduce latency (us) vs. message size (1 byte to 4 KB) on 8 nodes at 36 PPN, comparing MV2X-UD, MV2X-SRD, and OpenMPI]

[Charts: miniGhost and CloverLeaf execution time (seconds) for 72 (2x36) to 288 (8x36) processes (nodes x PPN); MV2X is up to 10% better than OpenMPI on miniGhost and MV2X-SRD is up to 27.5% better on CloverLeaf]

Setup: instance type c5n.18xlarge; CPU: Intel Xeon Platinum 8124M @ 3.00 GHz; MVAPICH2 version: MVAPICH2-X 2.3rc2 + SRD support; OpenMPI version: Open MPI v3.1.3 with libfabric 1.7

S. Chakraborty, S. Xu, H. Subramoni, and D. K. Panda, Designing Scalable and High-Performance MPI Libraries on Amazon Elastic Fabric Adapter, to be presented at the 26th Symposium on High Performance Interconnects (HOTI '19)

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring tool
• MVAPICH2-GDR and Deep Learning (will be presented on Thursday at 10:30 am)

OSU Microbenchmarks

• Available since 2004
• Suite of microbenchmarks to study the communication performance of various programming models
• Benchmarks available for the following programming models
  – Message Passing Interface (MPI)
  – Partitioned Global Address Space (PGAS)
    • Unified Parallel C (UPC)
    • Unified Parallel C++ (UPC++)
    • OpenSHMEM
• Benchmarks available for multiple accelerator-based architectures
  – Compute Unified Device Architecture (CUDA)
  – OpenACC Application Program Interface
• Part of various national resource procurement suites such as the NERSC-8 / Trinity benchmarks
• Please visit the following link for more information
  – http://mvapich.cse.ohio-state.edu/benchmarks/

MVAPICH2 Distributions

• MVAPICH2
  – Basic MPI support for IB, iWARP and RoCE
• MVAPICH2-X
  – Advanced MPI features and support for INAM
  – MPI, PGAS and hybrid MPI+PGAS support for IB
• MVAPICH2-Virt
  – Optimized for HPC clouds with IB and SR-IOV virtualization
  – Support for OpenStack, Docker, and Singularity
• OSU Micro-Benchmarks (OMB)
  – MPI (including CUDA-aware MPI), OpenSHMEM and UPC
• OSU INAM
  – InfiniBand Network Analysis and Monitoring tool
• MVAPICH2-GDR and Deep Learning (will be presented on Wednesday at 1:30 pm)

Overview of OSU INAM

• A network monitoring and analysis tool capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
  – http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
  – Point-to-point, collectives and RMA
• Ability to filter data based on type of counters using a "drop down" list
• Remotely monitor various metrics of MPI processes at user-specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a "live" or "historical" fashion for the entire network, a job, or a set of nodes
• OSU INAM 0.9.4 released on 11/10/2018
  – Enhanced performance for fabric discovery using optimized OpenMP-based multi-threaded designs
  – Ability to gather InfiniBand performance counters at sub-second granularity for very large (>2,000 nodes) clusters
  – Redesigned database layout to reduce database size
  – Enhanced fault tolerance for database operations
    • Thanks to Trey Dockendorf @ OSC for the feedback
  – OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously
  – Improved database purging time by using bulk deletes
  – Tuned database timeouts to handle very long database operations
  – Improved debugging support by introducing several debugging levels

OSU INAM Features

[Screenshots: Comet@SDSC clustered view (1,879 nodes, 212 switches, 4,377 network links) and finding routes between nodes]

• Show the network topology of large clusters
• Visualize traffic patterns on different links
• Quickly identify congested links and links in an error state
• See the history unfold: play back the historical state of the network

OSU INAM Features (Cont.)

[Screenshots: visualizing a job (5 nodes) and estimated process-level link utilization]

• Job-level view
  – Show different network metrics (load, error, etc.) for any live job
  – Play back historical data for completed jobs to identify bottlenecks
• Node-level view: details per process or per node
  – CPU utilization for each rank/node
  – Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
  – Network metrics (e.g., XmitDiscard, RcvError) per rank/node
• Estimated Link Utilization view
  – Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1
  – Job level and process level

Application-Level Tuning: Compilation of Best Practices

• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• A list of such contributions is compiled on the MVAPICH website
  – http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
  – Amber
  – HoomDBlue
  – HPCG
  – Lulesh
  – MILC
  – Neuron
  – SMG2000
  – Cloverleaf
  – SPEC (LAMMPS, POP2, TERA_TF, WRF2)
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu
• We will link these results with credit to you

Commercial Support for MVAPICH2 Libraries

• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
  – Help and guidance with installation of the library
  – Platform-specific optimizations and tuning
  – Timely support for operational issues encountered with the library
  – Web portal interface to submit issues and track their progress
  – Advanced debugging techniques
  – Application-specific optimizations and tuning
  – Obtaining guidelines on best practices
  – Periodic information on major fixes and updates
  – Information on major releases
  – Help with upgrading to the latest release
  – Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) during the last two years

MVAPICH2 – Plans for Exascale

• Performance and memory scalability toward 1M-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, ...)
  – MPI + Task*
• Enhanced optimization for GPUs and FPGAs*
• Taking advantage of advanced features of Mellanox InfiniBand
  – Tag Matching*
  – Adapter Memory*
• Enhanced communication schemes for upcoming architectures
  – NVLINK*
  – CAPI*
• Extended topology-aware collectives
• Extended energy-aware designs and virtualization support
• Extended support for the MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Features marked * will be available in future MVAPICH2 releases

One More Presentation

• Wednesday (11/21/19) at 1:30pm

MVAPICH2-GDR: Pushing the Frontier of HPC and Deep Learning

Join us for Multiple Events at SC '19

• Presentations at the OSU and X-Scale booth (#2094)
  – Members of the MVAPICH, HiBD and HiDL teams
  – External speakers
• Presentations at the SC main program (tutorials, workshops, BoFs, posters, and the Doctoral Showcase)
• Presentations at many other booths (Mellanox, Intel, Microsoft, and AWS) and satellite events
• Complete details available at http://mvapich.cse.ohio-state.edu/conference/752/talks/

Thank You!

[email protected]

Network-Based Computing Laboratory http://nowlab.cse.ohio-state.edu/

The High-Performance MPI/PGAS Project: http://mvapich.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/
The High-Performance Deep Learning Project: http://hidl.cse.ohio-state.edu/
