Software Libraries and Middleware for Exascale Systems

Experiences in Designing, Developing, Packaging, and Deploying the MVAPICH2 Libraries
Talk at the E4S Forum (September '19) by Hari Subramoni, The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~subramon
Follow us on https://twitter.com/mvapich

High-End Computing (HEC): PetaFlop to ExaFlop
• 100 PFlops in 2017
• 149 PFlops in 2018
• 1 EFlops in 2020-2021?
• An ExaFlop system is expected in 2020-2021

Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
[Figure: layered co-design stack, with co-design opportunities and challenges cutting across all layers and performance, scalability, and resilience as overall goals.]
• Application Kernels/Applications (HPC and DL)
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Networking Technologies (InfiniBand, 40/100/200GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, and Accelerators (GPU and FPGA)

Designing (MPI+X) at Exascale
• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
  – Low memory footprint
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1,024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-awareness

Presentation Overview
• MVAPICH Project – MPI and PGAS library with CUDA-awareness
• HiDL Project – High-performance deep learning
• Public Cloud Deployment – Microsoft Azure and Amazon AWS
• Deployment Solutions
• Conclusions

Parallel Programming Models Overview
[Figure: three abstract machine models, each shown as processes P1-P3 over memory – Shared Memory Model (SHMEM, DSM), Distributed Memory Model (MPI), and Partitioned Global Address Space / PGAS (OpenSHMEM, UPC, Chapel, X10, CAF, …) with a logical shared memory over distributed memories.]
• Programming models provide abstract machine models
• Models can be mapped onto different types of systems – e.g., Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance (the sketch below contrasts two-sided message passing with one-sided access)
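To make the two-sided vs. one-sided distinction concrete, here is a minimal, editor-written sketch (not taken from the talk) that first exchanges a value with MPI_Send/MPI_Recv and then writes a value directly into a peer's memory with MPI-3 RMA (MPI_Win_create/MPI_Put). It uses only standard MPI calls, so it should run unchanged on MVAPICH2 or any other MPI library; the variable names and values are purely illustrative.

```c
/* Minimal sketch: two-sided message passing vs. one-sided (RMA) MPI.
 * Standard MPI-3 calls only; run with exactly two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Two-sided: sender and receiver both make a matching call. */
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("two-sided: rank 1 received %d\n", value);
    }

    /* One-sided: rank 0 puts data directly into rank 1's window,
     * with no matching receive on the target side. */
    int win_buf = 0, payload = 99;
    MPI_Win win;
    MPI_Win_create(&win_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);                      /* open access epoch */
    if (rank == 0)
        MPI_Put(&payload, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);                      /* complete the put  */
    if (rank == 1)
        printf("one-sided: rank 1's window holds %d\n", win_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Built with the usual mpicc wrapper and launched with two ranks, rank 1 should report receiving 42 over the two-sided path and 99 via the one-sided put.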
Overview of the MVAPICH2 Project
• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1): started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS), available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), available since 2014
  – Support for virtualization (MVAPICH2-Virt), available since 2015
  – Support for energy-awareness (MVAPICH2-EA), available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM), available since 2015
  – Used by more than 3,025 organizations in 89 countries
  – More than 589,000 (> 0.5 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Nov '18 ranking), including:
    • 3rd: 10,649,600-core Sunway TaihuLight at the National Supercomputing Center in Wuxi, China
    • 5th: 448,448-core Frontera at TACC
    • 8th: 391,680-core ABCI in Japan
    • 15th: 570,020-core Nurion in South Korea, and many others
  – Available with the software stacks of many vendors and Linux distros (RedHat, SuSE, and OpenHPC)
  – http://mvapich.cse.ohio-state.edu
• Partner in the TACC Frontera system
• Empowering Top500 systems for over a decade

MVAPICH2 Release Timeline and Downloads
[Figure: cumulative downloads from Sep-04 through Apr-19, rising to nearly 600,000, with release milestones marked along the timeline: MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-Virt 2.2, OSU INAM 0.9.3, MV2-X 2.3rc2, MV2-GDR 2.3.2, MV2 2.3.2, MV2-Azure 2.3.2, and MV2-AWS 2.3.]

Architecture of MVAPICH2 Software Family
• High-performance parallel programming models: Message Passing Interface (MPI); PGAS (UPC, OpenSHMEM, CAF, UPC++); Hybrid – MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms (see the non-blocking collective sketch below), energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, and introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path, Elastic Fabric Adapter) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
• Transport protocols: RC, SRD, UD, DC; modern features: UMR, ODP, SR-IOV, multi-rail; transport mechanisms: shared memory, CMA, IVSHMEM, XPMEM; modern features: Optane*, NVLink, CAPI* (* upcoming)
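As one concrete example of a runtime feature named above (non-blocking collectives, listed earlier under scalable collective communication), the following editor-written sketch overlaps an MPI_Iallreduce with independent local computation. It uses only standard MPI-3 calls; the function independent_work and the loop bounds are placeholders, and how much overlap is actually realized depends on the library's asynchronous-progress support.

```c
/* Sketch: overlapping a non-blocking collective with local work.
 * Standard MPI-3 calls; works with any compliant MPI library. */
#include <mpi.h>
#include <stdio.h>

/* Placeholder for computation that does not depend on the reduction. */
static double independent_work(int rank)
{
    double acc = 0.0;
    for (int i = 0; i < 1000000; i++)
        acc += (rank + 1) * 1e-6;
    return acc;
}

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double local = rank + 1.0, global = 0.0;
    MPI_Request req;

    /* Start the reduction, then keep computing while it progresses. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);
    double other = independent_work(rank);

    /* Block only when the reduced value is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("sum over %d ranks = %.1f (local work: %.3f)\n",
               nprocs, global, other);

    MPI_Finalize();
    return 0;
}
```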
Strong Procedure for Design, Development, and Release
• Research is done to explore new designs
• Designs are first presented in conference/journal publications
• The best-performing designs are incorporated into the codebase
• Rigorous QA procedure before making a release
  – Exhaustive unit testing
  – Various test procedures on a diverse range of platforms and interconnects
  – Testing with 19 different benchmarks and applications including, but not limited to: OMB, IMB, the MPICH test suite, the Intel test suite, NAS, ScaLAPACK, and SPEC
  – About 18,000 core hours spent per commit
  – Performance regression and tuning
  – Application-based evaluation
  – Evaluation on large-scale systems (Lassen, Frontera, Summit, etc.)
• All versions (alpha, beta, RC1, and RC2) go through the above testing

MVAPICH2 Software Family
• MVAPICH2 – MPI with IB, iWARP, Omni-Path, and RoCE
• MVAPICH2-X – Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE
• MVAPICH2-GDR – MPI with IB, RoCE & GPU, and support for deep learning
• MVAPICH2-Virt – HPC cloud with MPI & IB
• MVAPICH2-EA – Energy-aware MPI with IB, iWARP, and RoCE
• OEMT – MPI energy monitoring tool
• OSU INAM – InfiniBand network analysis and monitoring
• OMB – Microbenchmarks for measuring MPI and PGAS performance

Startup Performance on TACC Frontera
[Figure: MPI_Init time on Frontera versus number of processes (56 to 57,344). At the largest scale, Intel MPI 2019 takes about 4.5 s while MVAPICH2 2.3.2 takes about 3.9 s.]
• MPI_Init takes 3.9 seconds on 57,344 processes on 1,024 nodes
• All numbers reported with 56 processes per node
• New designs available in MVAPICH2-2.3.2

One-way Latency: MPI over IB with MVAPICH2
[Figure: small- and large-message one-way MPI latency with MVAPICH2 over TrueScale-QDR, ConnectX-3-FDR, ConnectIB-DualFDR, ConnectX-4-EDR, Omni-Path, and ConnectX-6 HDR; the labeled small-message latencies are 1.01, 1.04, 1.1, 1.11, 1.15, and 1.19 us. A minimal ping-pong sketch follows the platform list below.]
Test platforms:
• TrueScale-QDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
• ConnectX-3-FDR – 2.8 GHz deca-core (IvyBridge) Intel, PCIe Gen3, with IB switch
• ConnectIB-Dual FDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
• ConnectX-4-EDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
• Omni-Path – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with Omni-Path switch
• ConnectX-6-HDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
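Latency results like these are typically gathered with microbenchmarks such as those in OMB; the sketch below is a stripped-down, editor-written ping-pong test in that spirit (it is not the actual osu_latency code). The message size, warm-up count, and iteration count are arbitrary illustrative choices.

```c
/* Stripped-down ping-pong latency sketch (in the spirit of osu_latency,
 * but not the actual OMB code). Run with exactly two ranks. */
#include <mpi.h>
#include <stdio.h>

#define MSG_SIZE   8        /* bytes per message (illustrative choice) */
#define WARMUP     100
#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = 0.0;
    for (int i = 0; i < WARMUP + ITERATIONS; i++) {
        if (i == WARMUP) {
            MPI_Barrier(MPI_COMM_WORLD);
            start = MPI_Wtime();       /* time only the measured iterations */
        }
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    /* One-way latency = half the average round-trip time. */
    if (rank == 0)
        printf("%d-byte one-way latency: %.2f us\n",
               MSG_SIZE, elapsed * 1e6 / (2.0 * ITERATIONS));

    MPI_Finalize();
    return 0;
}
```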
Bandwidth: MPI over IB with MVAPICH2
[Figure: unidirectional and bidirectional MPI bandwidth with MVAPICH2 over TrueScale-QDR, ConnectX-3-FDR, ConnectIB-DualFDR, ConnectX-4-EDR, Omni-Path, and ConnectX-6 HDR. Labeled unidirectional bandwidths reach 24,532 MB/s (other labeled values: 12,590; 12,366; 12,083; 6,356; and 3,373 MB/s); labeled bidirectional bandwidths reach 48,027 MB/s (other labeled values: 24,136; 21,983; 21,227; 12,161; and 6,228 MB/s). A streaming-bandwidth sketch in the same spirit appears after the platform list below.]
Test platforms:
• TrueScale-QDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
• ConnectX-3-FDR – 2.8 GHz deca-core (IvyBridge) Intel, PCIe Gen3, with IB switch
• ConnectIB-Dual FDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
• ConnectX-4-EDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch
• Omni-Path – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with Omni-Path switch
• ConnectX-6-HDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, with IB switch

Intra-node Point-to-Point Performance on OpenPOWER
[Figure: intra-socket small- and large-message latency, bandwidth, and bi-directional bandwidth, MVAPICH2-2.3.1 versus SpectrumMPI-2019.02.07, for message sizes from 1 byte to 2 MB. MVAPICH2 achieves 0.22 us intra-socket small-message latency; the bandwidth plots are shown on a 0-40,000 MB/s scale.]
Platform: two nodes of OpenPOWER (POWER9, ppc64le) CPUs using a Mellanox EDR (MT4121) HCA.
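For bandwidth, a common pattern is to keep a window of non-blocking sends in flight and divide the bytes moved by the elapsed time. The sketch below is an editor-written approximation of that pattern (again, not the actual osu_bw code); MSG_SIZE, WINDOW, and ITERATIONS are illustrative values.

```c
/* Streaming bandwidth sketch using a window of non-blocking sends
 * (in the style of osu_bw, but not the actual OMB code).
 * Run with exactly two ranks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG_SIZE   (1 << 20)   /* 1 MiB per message (illustrative)  */
#define WINDOW     32          /* messages in flight per iteration  */
#define ITERATIONS 100

int main(int argc, char **argv)
{
    int rank;
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each in-flight message gets its own slice of the buffer. */
    char *buf = malloc((size_t)MSG_SIZE * WINDOW);
    memset(buf, 0, (size_t)MSG_SIZE * WINDOW);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* Sender: keep WINDOW sends outstanding, then wait. */
            for (int j = 0; j < WINDOW; j++)
                MPI_Isend(buf + (size_t)j * MSG_SIZE, MSG_SIZE, MPI_CHAR,
                          1, 0, MPI_COMM_WORLD, &reqs[j]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            /* Zero-byte ack keeps the sender from running ahead. */
            MPI_Recv(NULL, 0, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            for (int j = 0; j < WINDOW; j++)
                MPI_Irecv(buf + (size_t)j * MSG_SIZE, MSG_SIZE, MPI_CHAR,
                          0, 0, MPI_COMM_WORLD, &reqs[j]);
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
            MPI_Send(NULL, 0, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        double mbytes = (double)MSG_SIZE * WINDOW * ITERATIONS / 1e6;
        printf("estimated bandwidth: %.1f MB/s\n", mbytes / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```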
