High-Performance PDC (Parallel and Distributed Computing)

HIGH-PERFORMANCE PDC (PARALLEL AND DISTRIBUTED COMPUTING)
Prasun Dewan
Department of Computer Science, University of North Carolina at Chapel Hill
[email protected]

GOAL (slide 2)
- High-performance PDC (parallel and distributed computing) was important for traditional problems when computers were slow; with the emergence of data science it is back to the future.
- Modern popular software abstractions:
  - OpenMP (parallel computing and distributed shared memory), late nineties-
  - MapReduce (distributed computing), 2004-
  - OpenMPI (~ sockets, not covered here)
- The Prins 633 course covers HPC PDC algorithms and OpenMP in detail; the Don Smith 590 course covers MapReduce uses in detail.
- Connect OpenMP and MapReduce to each other (reduction) and to non-HPC PDC research (with which I am familiar).
- Derive the design and implementation of both kinds of abstractions from the similar but different performance issues that motivate them.

A TALE OF TWO DISTRIBUTION KINDS (slide 3)
- Remotely accessible services (printers, desktops)
- Replicated repositories (files, databases)
- Collaborative applications (games, shared desktops)
- Distributed sensing (disaster prediction)
- Computation distribution (number, matrix multiplication)
Differences between the two groups?

PRIMARY/SECONDARY DISTRIBUTION REASON (slide 4)
- Remotely accessible services: primary reason is remote service.
- Replicated repositories: primary reason is fault tolerance and availability.
- Collaborative applications: primary reason is collaboration among distributed users; secondary reason is speedup.
- Distributed sensing: primary reason is aggregation of distributed data.
- Computation distribution: primary reason is high performance (speedup); secondary reasons are remote service, aggregation, ...

DIST. VS PARALLEL-DIST. EVOLUTION (slide 5)
[Diagram: the first four kinds evolve from a non-distributed existing program into a distributed program; computation distribution evolves from a non-distributed existing single-thread program into a non-distributed multi-thread program and then into distributed, multi-threaded programs.]

CONCURRENCY-DISTRIBUTION RELATIONSHIP (slide 6)
- First four kinds: threads complement blocking IPC primitives by improving responsiveness, removing deadlocks, and allowing certain kinds of distributed applications that would otherwise not be possible, e.g. a server waiting for messages from multiple blocking sockets, NIO selector threads sending read data to a read thread, or creating a separate thread for incoming remote calls (a selector sketch follows this group of slides).
- Computation distribution: thread decomposition for speedup can be replaced by process decomposition and can be present in the decomposed processes.
  - Single-process example: each row of matrix A is multiplied with a column of matrix B by a separate thread (sketched in Java after this group of slides).
  - Multi-process example: each row of the matrix is assigned to a process, which uses different threads to multiply it with different columns of matrix B.

ALGORITHMIC CHALLENGE (slide 7)
- First four kinds: consistency - how to define and implement correct coupling among distributed processes?
- Computation distribution: how to parallelize/distribute single-thread algorithms?
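
The slide's single-process example pairs each row of A with a column of B in a separate thread; the sketch below is a slightly coarser-grained variant that uses one thread per row of A, each computing that row's products with every column of B. It is illustrative only and not from the original slides; all class and method names are made up.

    // Minimal sketch of thread-per-row matrix multiplication (illustrative, not from the slides).
    public class RowThreadedMultiply {
        public static double[][] multiply(double[][] a, double[][] b) throws InterruptedException {
            int n = a.length, m = b[0].length, k = b.length;
            double[][] c = new double[n][m];
            Thread[] workers = new Thread[n];
            for (int i = 0; i < n; i++) {
                final int row = i;
                workers[i] = new Thread(() -> {
                    // Each thread computes one row of the result: a row of A times every column of B.
                    for (int j = 0; j < m; j++) {
                        double sum = 0;
                        for (int x = 0; x < k; x++) {
                            sum += a[row][x] * b[x][j];
                        }
                        c[row][j] = sum;
                    }
                });
                workers[i].start();
            }
            for (Thread t : workers) {
                t.join(); // wait until every row has been computed
            }
            return c;
        }

        public static void main(String[] args) throws InterruptedException {
            double[][] a = {{1, 2}, {3, 4}};
            double[][] b = {{5, 6}, {7, 8}};
            double[][] c = multiply(a, b);
            System.out.println(c[0][0] + " " + c[0][1]); // 19.0 22.0
            System.out.println(c[1][0] + " " + c[1][1]); // 43.0 50.0
        }
    }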
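
The same slide's NIO selector example refers to java.nio.channels.Selector, which lets a single thread monitor many non-blocking sockets instead of dedicating one blocking thread per socket. The sketch below is illustrative only: the port number, buffer size, and the queue used to hand read data to a separate consumer thread are all assumptions, not details from the slides.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative sketch: one selector thread accepts connections and reads data,
    // handing the read bytes to a separate consumer thread through a queue.
    public class SelectorSketch {
        public static void main(String[] args) throws IOException {
            BlockingQueue<byte[]> readData = new LinkedBlockingQueue<>();
            // Consumer thread that processes whatever the selector thread reads.
            new Thread(() -> {
                try {
                    while (true) {
                        byte[] data = readData.take();
                        System.out.println("received " + data.length + " bytes");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();

            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9090)); // port chosen arbitrarily
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select(); // block until some channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        ByteBuffer buffer = ByteBuffer.allocate(1024);
                        int n = ((SocketChannel) key.channel()).read(buffer);
                        if (n > 0) {
                            byte[] bytes = new byte[n];
                            buffer.flip();
                            buffer.get(bytes);
                            readData.offer(bytes); // hand data to the consumer thread
                        } else if (n == -1) {
                            key.cancel();
                            key.channel().close(); // peer closed the connection
                        }
                    }
                }
            }
        }
    }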
CENTRAL MEDIATOR (slide 8)
- First four kinds: clients often talk to each other through a central server whose code is unaware of specific, arbitrary client/slave locations and ports.
- Computation distribution: a central master process/thread often decomposes the problem and combines results computed by slave agents, but the decomposer knows about the nature of the slaves, which are the service providers (a master/worker sketch follows this group of slides).

IMPLEMENTATION CHALLENGE (slide 9)
- First four kinds: how to add, to single-process local observable/observer, producer-consumer, and synchronization relationships, the corresponding distributed observable/observer, producer-consumer, and synchronization relationships? Distributed separation of concerns! (A single-process bounded-buffer sketch follows this group of slides.)
- Computation distribution: how to reuse existing single-thread code in a multi-thread/multi-process program?

SECURITY ISSUES (slide 10)
- First four kinds: communicating processes are created independently on typically geographically dispersed, autonomous hosts, raising security issues.
- Computation distribution: communicating threads/processes are created on hosts/processors typically under the control of one authority, though crowd problem solving is an infrequent exception: UW Condor - part of Cverse/XSEDE, Wagstaff primes.

FAULT TOLERANCE VS SECURITY (slide 11)
- Byzantine and general security problems grow with (∝) autonomy; fault tolerance in HPC usually assumes less autonomy.
- Byzantine fault tolerance (e.g. blockchain): tolerates message loss and manipulation by an adversary.
- Non-Byzantine fault tolerance (e.g. two-phase commit, the Paxos algorithm): tolerates computer failure.

LIFETIME OF PROCESSES/THREADS (slide 12)
- First four kinds: long-lived; processes need to be explicitly terminated.
- Computation distribution: short-lived; terminate when the computation completes.

FAULT TOLERANCE (slide 13)
- First four kinds: faults can be long-lived and can propagate, as the goal is to couple processes.
- Computation distribution: more independent work, as the goal is to divide common work, so faults usually do not propagate; a short-term task can simply be completely or partially restarted (a retry sketch follows this group of slides).
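
The central-mediator slide's bottom case, a master that decomposes a problem and combines results computed by slaves, can be sketched in Java with an ExecutorService. This is an illustrative single-process version in which the "slaves" are worker threads; all names are made up for the example.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.*;

    // Illustrative master/worker sketch: the master decomposes a summation into chunks,
    // hands each chunk to a worker, and combines the partial results (a reduction).
    public class MasterWorkerSum {
        public static long sum(long[] data, int workers) throws InterruptedException, ExecutionException {
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            int chunk = (data.length + workers - 1) / workers;
            List<Future<Long>> partials = new ArrayList<>();
            for (int w = 0; w < workers; w++) {
                final int from = w * chunk;
                final int to = Math.min(data.length, from + chunk);
                partials.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s; // partial result for this chunk
                }));
            }
            long total = 0;
            for (Future<Long> p : partials) total += p.get(); // master combines results
            pool.shutdown();
            return total;
        }

        public static void main(String[] args) throws Exception {
            long[] data = new long[1000];
            for (int i = 0; i < data.length; i++) data[i] = i + 1;
            System.out.println(sum(data, 4)); // 500500
        }
    }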
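
The implementation-challenge slide mentions single-process producer-consumer relationships, and the abstractions slide later lists the bounded buffer among the available building blocks. A minimal single-process bounded buffer, of the kind whose distributed counterpart the slide asks about, might look like the following illustrative sketch (names invented for the example).

    // Minimal bounded-buffer sketch (illustrative): a single-process producer-consumer
    // relationship of the kind the implementation-challenge slide asks to extend to
    // the distributed case.
    public class BoundedBuffer<T> {
        private final Object[] items;
        private int head = 0, tail = 0, count = 0;

        public BoundedBuffer(int capacity) {
            items = new Object[capacity];
        }

        public synchronized void put(T item) throws InterruptedException {
            while (count == items.length) wait();   // buffer full: producer blocks
            items[tail] = item;
            tail = (tail + 1) % items.length;
            count++;
            notifyAll();                            // wake up waiting consumers
        }

        @SuppressWarnings("unchecked")
        public synchronized T take() throws InterruptedException {
            while (count == 0) wait();              // buffer empty: consumer blocks
            T item = (T) items[head];
            head = (head + 1) % items.length;
            count--;
            notifyAll();                            // wake up waiting producers
            return item;
        }

        public static void main(String[] args) throws InterruptedException {
            BoundedBuffer<Integer> buffer = new BoundedBuffer<>(4);
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) buffer.put(i);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();
            for (int i = 0; i < 10; i++) System.out.println(buffer.take());
            producer.join();
        }
    }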
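
The fault-tolerance slide notes that a short-lived task can simply be restarted, completely or partially. A tiny retry wrapper makes the "completely restarted" case concrete; this is an illustrative sketch, not anything prescribed by the slides.

    import java.util.concurrent.Callable;

    // Illustrative sketch: restart a short-lived task completely if it fails,
    // the simple recovery strategy the fault-tolerance slide describes.
    public class Restart {
        public static <T> T runWithRestart(Callable<T> task, int maxAttempts) throws Exception {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return task.call();            // run (or rerun) the whole task
                } catch (Exception e) {
                    last = e;                      // remember the failure and retry
                }
            }
            throw last;                            // give up after maxAttempts failures
        }

        public static void main(String[] args) throws Exception {
            int result = runWithRestart(() -> 6 * 7, 3);
            System.out.println(result); // 42
        }
    }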
COUPLING VS HIGH-PERFORMANCE (slide 14)
Coupling/consistency:
- Independent, "long-lived" processes at different autonomous locations are made dependent for resource sharing, user collaboration, and fault tolerance.
- Include consistency algorithms, possibly in separate threads, to define the dependency; these have distributed producer-consumer and observer relationships with the existing algorithms.
- May involve a central mediating, distributing server and other infrastructure code unaware of specific clients.
- Division of labor between client and infrastructure is an issue (centralized vs replicated).
High performance:
- A single short-lived process created to perform some computation is made multi-threaded and/or distributed to increase performance.
- Speedup algorithms that replace the logic of existing code, with task decomposition.
- May use additional mediating master code, but it can be aware of, and the creator of, specific slave processes.
- Division of labor among master and slaves is an issue.

EXAMPLE: SERVICE + SPEEDUP (slide 15)
[Diagram: a Comp 401/Comp 533 grading service in which grader clients talk to a grader server; the grader is sped up with a master/slave structure using assignment-level decomposition (slave graders per assignment) and test-level decomposition (tester threads per test).]

ABSTRACTIONS (slide 16)
- First four kinds: threads, IPC, bounded buffer.
- Computation distribution: higher-level abstractions for different classes of speedup algorithms?

COMPLETE AUTOMATION? (slide 17)
- First four kinds: modules of a non-distributed program distributed transparently; the loader contacts a registry to determine whether a local module is loaded or a remote module is accessed. Assumes one name space and one instance of each service - cannot handle replication. Motivates RPC transparency.
- Computation distribution: parallelizing compilers (Kuck and Kennedy); halting problem. Motivates declarative abstractions.

DECLARATIVE ABSTRACTIONS (slide 18)
- First four kinds: threads, IPC, bounded buffer.
- Computation distribution: adding declarative abstractions for different classes of speedup algorithms.

DECLARATIVE VS IMPERATIVE: COMPLEMENTING (slide 19)
Declarative: specify what we want (a type declaration):

    float[] floats = {4.8f, 5.2f, 4.5f};

Procedural/imperative (functional, O-O, ...): implement what we want (a loop):

    public static float sum(Float[] aList) {
        float retVal = (float) 0.0;
        for (int i = 0; i < aList.length; i++) {
            retVal += aList[i];
        }
        return retVal;
    }

Locality relevant to PDC abstractions - later. (A stream-based declarative counterpart of this sum is sketched after these slides.)

DECLARATIVE VS IMPERATIVE: COMPETING (slide 20, truncated in this extract)
Declarative:
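
The complementing slide's imperative sum has a declarative counterpart in Java streams, which also exposes the reduction structure that the GOAL slide uses to connect OpenMP and MapReduce. A minimal sketch, not from the slides; results are subject to ordinary floating-point rounding.

    import java.util.Arrays;

    // Declarative counterpart to the imperative sum on the complementing slide:
    // the reduction is specified, and the library is free to run it sequentially
    // or in parallel (illustrative sketch, not from the original slides).
    public class DeclarativeSum {
        public static void main(String[] args) {
            double[] floats = {4.8, 5.2, 4.5};
            double sequential = Arrays.stream(floats).sum();
            double parallel = Arrays.stream(floats).parallel().reduce(0.0, Double::sum);
            System.out.println(sequential + " " + parallel); // ~14.5 for both (rounding may differ slightly)
        }
    }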
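
As a self-contained stand-in for the MapReduce abstraction listed on the GOAL slide, word count can be written in map/reduce style with Java streams. This only illustrates the programming model, not Hadoop's API, and all names are illustrative.

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Word count expressed in map/reduce style (illustrative sketch, not Hadoop code):
    // the "map" phase emits one word per token, and the "reduce" phase counts per key.
    public class WordCountSketch {
        public static Map<String, Long> wordCount(String text) {
            return Arrays.stream(text.toLowerCase().split("\\W+"))   // map: split into words
                         .filter(w -> !w.isEmpty())
                         .collect(Collectors.groupingBy(w -> w,      // shuffle: group by word
                                  Collectors.counting()));           // reduce: count per word
        }

        public static void main(String[] args) {
            System.out.println(wordCount("to be or not to be"));
            // {not=1, be=2, or=1, to=2} (map ordering may vary)
        }
    }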