Software Libraries and Middleware for Exascale Systems

Designing HPC, Big Data, Deep Learning, and Cloud Middleware for Exascale Systems: Challenges and Opportunities

Keynote Talk at the SEA Symposium (April '18) by Dhabaleswar K. (DK) Panda, The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda

High-End Computing (HEC): Towards Exascale

• 100 PFlops was reached in 2016; an ExaFlop (1 EFlops) system is expected in 2019-2020.

Increasing Usage of HPC, Big Data and Deep Learning

• HPC (MPI, RDMA, Lustre, etc.), Big Data (Hadoop, Spark, HBase, Memcached, etc.), and Deep Learning (Caffe, TensorFlow, BigDL, etc.) are converging.
• There is an increasing need to run these applications on the Cloud.

Can We Run Big Data and Deep Learning Jobs on Existing HPC Infrastructure?

[Figures: Hadoop, Spark, and Deep Learning jobs mapped onto the same physical compute infrastructure used for HPC.]

HPC, Big Data, Deep Learning, and Cloud

• Traditional HPC
  – Message Passing Interface (MPI), including MPI + OpenMP
  – Support for PGAS and MPI + PGAS (OpenSHMEM, UPC)
  – Exploiting accelerators
• Big Data/Enterprise/Commercial Computing
  – Spark and Hadoop (HDFS, HBase, MapReduce)
  – Memcached is also used for Web 2.0
• Deep Learning
  – Caffe, CNTK, TensorFlow, and many more
• Cloud for HPC and Big Data
  – Virtualization with SR-IOV and containers

Parallel Programming Models Overview

• Shared Memory Model (SHMEM, DSM): processes operate on a common shared memory.
• Distributed Memory Model (MPI, the Message Passing Interface): each process has private memory and communicates by exchanging messages.
• Partitioned Global Address Space (PGAS) model (Global Arrays, UPC, Chapel, X10, CAF, ...): a logically shared memory that is partitioned across processes.
• Programming models provide abstract machine models.
• Models can be mapped onto different types of systems (e.g., Distributed Shared Memory (DSM), MPI within a node, etc.).
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance.
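To make the "MPI + OpenMP" hybrid mentioned under Traditional HPC concrete, here is a minimal sketch (not code from the talk): OpenMP threads handle intra-node parallelism while MPI handles communication between ranks. The loop bound and the computed reduction are purely illustrative.

/* Minimal hybrid MPI + OpenMP sketch (illustrative, not from the talk).
 * Compile e.g.: mpicc -fopenmp hybrid.c -o hybrid
 * Run e.g.:     mpirun -np 4 ./hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Request FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    long local_sum = 0;

    /* Intra-node parallelism with OpenMP threads. */
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < 1000000; i++)
        local_sum += i % 7;

    long global_sum = 0;

    /* Inter-node communication with an MPI collective. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d global_sum=%ld\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}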
Partitioned Global Address Space (PGAS) Models

• Key features
  – Simple shared-memory abstractions
  – Lightweight one-sided communication
  – Easier to express irregular communication
• Different approaches to PGAS
  – Languages: Unified Parallel C (UPC), Co-Array Fortran (CAF), X10, Chapel
  – Libraries: OpenSHMEM, UPC++, Global Arrays

Hybrid (MPI+PGAS) Programming

• Application sub-kernels can be re-written in MPI or PGAS based on their communication characteristics (e.g., Kernel 1 and Kernel 3 in MPI, Kernel 2 and Kernel N in PGAS within the same HPC application).
• Benefits:
  – Best of the distributed-memory computing model
  – Best of the shared-memory computing model
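To illustrate the lightweight one-sided communication that PGAS libraries offer, the following is a minimal OpenSHMEM sketch (illustrative, not from the talk): each PE writes a value directly into a symmetric variable on its right neighbor, with no matching receive on the target. The ring pattern and variable names are only for illustration.

/* Minimal OpenSHMEM one-sided put sketch (illustrative, not from the talk).
 * Compile e.g.: oshcc ring_put.c -o ring_put
 * Run e.g.:     oshrun -np 4 ./ring_put
 */
#include <shmem.h>
#include <stdio.h>

/* Symmetric variable: exists at the same address on every PE. */
static int neighbor_value = -1;

int main(void)
{
    shmem_init();

    int me   = shmem_my_pe();
    int npes = shmem_n_pes();
    int next = (me + 1) % npes;

    /* One-sided put: write 'me' into neighbor_value on PE 'next';
       the target PE does not post a matching receive. */
    shmem_int_p(&neighbor_value, me, next);

    /* Ensure all puts are complete and visible before reading. */
    shmem_barrier_all();

    printf("PE %d received value %d from its left neighbor\n",
           me, neighbor_value);

    shmem_finalize();
    return 0;
}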
Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges

• Application kernels and applications sit on top of programming models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• These models rely on a communication library or runtime providing point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, and fault tolerance.
• The runtime must map onto networking technologies (InfiniBand, 40/100 GigE, Aries, and Omni-Path), multi-/many-core architectures, and accelerators (GPU and FPGA).
• Co-design opportunities and challenges cut across these layers and determine performance, scalability, and resilience.

Broad Challenges in Designing Communication Middleware for (MPI+X) at Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
  – Scalable job start-up
• Scalable collective communication
  – Offload
  – Non-blocking
  – Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
  – Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for GPGPUs and accelerators
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, ...)
• Virtualization
• Energy-awareness

Additional Challenges for Designing Exascale Software Libraries

• Extreme low memory footprint
  – Memory per core continues to decrease
• D-L-A framework
  – Discover
    • Overall network topology (fat-tree, 3D, ...) and the network topology of processes for a given job
    • Node architecture, health of network and node
  – Learn
    • Impact on performance and scalability
    • Potential for failure
  – Adapt
    • Internal protocols and algorithms
    • Process mapping
    • Fault-tolerance solutions
  – Low-overhead techniques while delivering performance, scalability, and fault tolerance

Overview of the MVAPICH2 Project

• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
  – MVAPICH (MPI-1) and MVAPICH2 (MPI-2.2 and MPI-3.1): started in 2001, first version available in 2002
  – MVAPICH2-X (MPI + PGAS): available since 2011
  – Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC): available since 2014
  – Support for virtualization (MVAPICH2-Virt): available since 2015
  – Support for energy-awareness (MVAPICH2-EA): available since 2015
  – Support for InfiniBand network analysis and monitoring (OSU INAM): available since 2015
  – Used by more than 2,875 organizations in 86 countries
  – More than 460,000 (> 0.46 million) downloads from the OSU site directly
  – Empowering many TOP500 clusters (Nov '17 ranking):
    • 1st: 10,649,600-core Sunway TaihuLight at the National Supercomputing Center in Wuxi, China
    • 9th: 556,104-core Oakforest-PACS in Japan
    • 12th: 368,928-core Stampede2 at TACC
    • 17th: 241,108-core Pleiades at NASA
    • 48th: 76,032-core Tsubame 2.5 at Tokyo Institute of Technology
  – Available with the software stacks of many vendors and Linux distributions (RedHat and SuSE)
  – http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade

MVAPICH2 Release Timeline and Downloads

[Chart: cumulative downloads from the OSU site, Sep 2004 through Jan 2018, rising to the more than 460,000 total noted above, annotated with release milestones from MV 0.9.4 and MV2 0.9.0 through MV2 2.3rc1, MV2-X 2.3b, MV2-GDR 2.3a, and OSU INAM 0.9.3.]

Architecture of the MVAPICH2 Software Family

• High-performance parallel programming models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk).
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, job startup, remote memory access, energy-awareness, I/O and file systems, fault tolerance, virtualization, active messages, and introspection & analysis.
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU).
• Transport protocols (RC, XRC, UD, DC, UMR, ODP), modern network features (SR-IOV, multi-rail), transport mechanisms (shared memory, CMA, IVSHMEM, XPMEM*), and modern architecture features (MCDRAM*, NVLink*, CAPI*). (* upcoming)

Overview of a Few Challenges Being Addressed by the MVAPICH2 Project for Exascale

• Scalability for million to billion processors
  – Support for highly-efficient inter-node and intra-node communication
  – Scalable start-up
  – Optimized collectives using SHArP and multi-leaders
  – Optimized CMA-based collectives
  – Upcoming optimized XPMEM-based collectives
  – Performance engineering with MPI_T support
  – Integrated network analysis and monitoring
• Unified runtime for hybrid MPI+PGAS programming (MPI + OpenSHMEM, MPI + UPC, CAF, UPC++, ...)
• Integrated support for GPGPUs
• Optimized MVAPICH2 for OpenPOWER (with NVLink) and ARM
• Application scalability and best practices
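The "performance engineering with MPI_T support" item above refers to the standard MPI tools information interface. As a hedged illustration (not code from the talk), the sketch below enumerates the control variables (CVARs) that an MPI library such as MVAPICH2 can expose through MPI_T; the exact set of variables reported depends on the library and its build configuration.

/* Minimal MPI_T introspection sketch (illustrative, not from the talk):
 * list the control variables (CVARs) exposed by the MPI library.
 * Compile e.g.: mpicc list_cvars.c -o list_cvars && ./list_cvars
 */
#include <mpi.h>
#include <stdio.h>

int main(void)
{
    int provided, ncvars;

    /* MPI_T can be initialized independently of MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&ncvars);
    printf("MPI library exposes %d control variables:\n", ncvars);

    for (int i = 0; i < ncvars; i++) {
        char name[256], desc[1024];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        /* Query metadata for each control variable and print its name. */
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("  [%d] %s\n", i, name);
    }

    MPI_T_finalize();
    return 0;
}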
One-way Latency: MPI over IB with MVAPICH2

[Charts: small- and large-message one-way MPI latency with MVAPICH2 over TrueScale-QDR, ConnectX-3-FDR, ConnectIB-DualFDR, ConnectX-5-EDR, and Omni-Path; reported small-message latencies range from 0.98 to 1.19 microseconds. Test platforms: TrueScale-QDR on 3.1 GHz deca-core (Haswell) Intel nodes with PCIe Gen3 and an IB switch; ConnectX-3-FDR on 2.8 GHz deca-core (IvyBridge) Intel nodes with PCIe Gen3 and an IB switch; ConnectIB-DualFDR on 3.1 GHz deca-core (Haswell) Intel ...]
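One-way latency numbers like those above are typically obtained with a ping-pong microbenchmark between two ranks, as in the OSU micro-benchmarks. Below is a simplified, hedged sketch of such a measurement; it is not the actual OSU benchmark code, and the message size, warm-up count, and iteration count are illustrative.

/* Simplified ping-pong one-way latency sketch between rank 0 and rank 1
 * (illustrative; not the actual OSU micro-benchmark code).
 * Compile e.g.: mpicc latency.c -o latency
 * Run e.g.:     mpirun -np 2 ./latency
 */
#include <mpi.h>
#include <stdio.h>

#define MSG_SIZE 8        /* bytes per message (illustrative) */
#define WARMUP   100
#define ITERS    10000

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = 0.0;

    for (int i = 0; i < WARMUP + ITERS; i++) {
        if (i == WARMUP) {
            MPI_Barrier(MPI_COMM_WORLD);
            start = MPI_Wtime();       /* start timing after warm-up */
        }
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        /* One-way latency is half the measured round-trip time. */
        double one_way_us = (MPI_Wtime() - start) * 1e6 / (2.0 * ITERS);
        printf("%d-byte one-way latency: %.2f us\n", MSG_SIZE, one_way_us);
    }

    MPI_Finalize();
    return 0;
}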
