Chapter 4. Message-Passing Programming
Recommended publications
- High-Performance PDC (Parallel and Distributed Computing)
  Prasun Dewan, Department of Computer Science, University of North Carolina at Chapel Hill ([email protected]).
  Goal: high-performance PDC was important for traditional problems when computers were slow, and is back in fashion with the emergence of data science. Modern popular software abstractions: OpenMP (parallel computing and distributed shared memory, late nineties onward) and MapReduce (distributed computing, 2004 onward); OpenMPI (roughly sockets) is not covered. The Prins 633 course covers HPC PDC algorithms and OpenMP in detail; the Don Smith 590 course covers MapReduce use in detail. These slides connect OpenMP and MapReduce to each other (via reduction; a minimal OpenMP reduction sketch follows below) and to non-HPC PDC research (with which the author is familiar), deriving the design and implementation of both kinds of abstractions from the similar but different performance issues that motivate them.
  A tale of two distribution kinds asks what distinguishes two groups among: remotely accessible services (printers, desktops), replicated repositories (files, databases), collaborative applications (games, shared desktops), distributed sensing (disaster prediction), and computation distribution (number and matrix multiplication). Primary and secondary distribution reasons: remotely accessible services, remote service; replicated repositories, fault tolerance and availability; collaborative applications, collaboration among distributed users (secondary: speedup); distributed sensing, aggregation of distributed ...
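  The sketch below is not from the slides; it is a minimal OpenMP reduction in C, illustrating the kind of reduction that links the OpenMP and MapReduce abstractions. The harmonic-sum loop is just a placeholder workload.

  ```c
  /* Minimal OpenMP reduction sketch: each thread accumulates a private
   * partial sum, and the runtime combines the partials with '+',
   * much like a MapReduce "reduce" phase.                              */
  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      const int n = 1000000;
      double sum = 0.0;

      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < n; i++) {
          sum += 1.0 / (double)(i + 1);   /* placeholder workload */
      }

      printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
      return 0;
  }
  ```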
Writing Message Passing Parallel Programs with MPI a Two Day Course on MPI Usage
Writing Message Passing Parallel Programs with MPI A Two Day Course on MPI Usage Course Notes Version 1.8.2 Neil MacDonald, Elspeth Minty, Joel Malard, Tim Harding, Simon Brown, Mario Antonioletti Edinburgh Parallel Computing Centre The University of Edinburgh Table of Contents 1 The MPI Interface ........................................................................1 1.1 Goals and scope of MPI............................................................ 1 1.2 Preliminaries.............................................................................. 2 1.3 MPI Handles .............................................................................. 2 1.4 MPI Errors.................................................................................. 2 1.5 Bindings to C and Fortran 77 .................................................. 2 1.6 Initialising MPI.......................................................................... 3 1.7 MPI_COMM_WORLD and communicators......................... 3 1.8 Clean-up of MPI ........................................................................ 4 1.9 ...................................................................................................... Aborting MPI 4 1.10 A simple MPI program ............................................................ 4 1.11 Exercise: Hello World - the minimal MPI program............. 5 2 What’s in a Message? ...................................................................7 3 Point-to-Point Communication .................................................9 3.1 Introduction -
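  A minimal sketch in C of such a program, assuming a standard MPI installation and the usual mpicc/mpirun tooling; it is an illustration, not the course's own exercise solution.

  ```c
  /* Minimal MPI "Hello World" sketch: initialise MPI, query the
   * communicator MPI_COMM_WORLD for rank and size, print, clean up. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int rank, size;

      MPI_Init(&argc, &argv);                 /* initialise MPI            */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank       */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

      printf("Hello World from rank %d of %d\n", rank, size);

      MPI_Finalize();                         /* clean-up of MPI           */
      return 0;
  }
  ```

  Compiled with mpicc and launched with, say, mpirun -np 4 ./hello, this prints one line per process.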
- A Functional Approach for the Automatic Parallelization of Non-Associative Reductions via Semiring Algebra
  Federico Pizzuti. Master of Science by Research, Institute of Computing Systems Architecture, School of Informatics, University of Edinburgh, 2017.
  Abstract: Recent developments in computer architecture have led to a surge in the number of parallel devices, such as general-purpose GPUs and application-specific accelerators, making parallel programming more common than ever before. However, parallel programming poses many challenges to the programmer, as implementing a correct solution requires expert knowledge of the target platform in addition to advanced programming skills. A popular solution to this problem is the adoption of a pattern-based programming model, wherein the code is written using high-level transformations. While automatic parallelisation is easily performed for some of these transformations, this is not the case for others. In particular, automatic parallelisation of reduction is commonly supported only for a restricted class of well-known associative operators; outside of these cases, the programmer must either resort to handwritten ad-hoc implementations or settle for using a sequential implementation. This document proposes a method for the automatic parallelisation of reductions which, by taking advantage of properties of purely functional programs and the algebraic notion of a semiring, is capable of handling cases in which the reduction operator is not necessarily associative but instead satisfies a much weaker set of properties. It then presents a series of additions to the Lift compiler implementing said methods, and finally evaluates the performance of optimised Lift programs, demonstrating that the automatically parallelised code can outperform equivalent handwritten implementations without requiring any explicit intervention from the programmer. (A small C sketch of the underlying idea, lifting a non-associative fold to an associative combination, follows below.)
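  The thesis itself works inside the Lift compiler and semiring algebra; the following hand-written C sketch only illustrates the general trick of parallelising a non-associative reduction by lifting it into an associative one. The non-associative update x -> a*x + b is represented as an affine map (a, b); composition of affine maps is associative, so maps can be combined block-wise (shown here over two blocks) and the resulting map applied to the initial value. All names are illustrative, not from the thesis.

  ```c
  /* Sketch: a linear recurrence x_{i+1} = a[i]*x_i + b[i] is a fold with a
   * non-associative operator on x, but the affine maps (a, b) themselves
   * compose associatively, so the fold can be split into blocks.        */
  #include <stdio.h>

  typedef struct { double a, b; } Affine;     /* represents x -> a*x + b */

  /* Composition "g after f": apply f first, then g.  Associative. */
  static Affine compose(Affine g, Affine f) {
      Affine h = { g.a * f.a, g.a * f.b + g.b };
      return h;
  }

  static Affine reduce_block(const Affine *m, int lo, int hi) {
      Affine acc = { 1.0, 0.0 };              /* identity map */
      for (int i = lo; i < hi; i++)
          acc = compose(m[i], acc);            /* later maps applied after earlier ones */
      return acc;
  }

  int main(void) {
      Affine m[6] = { {2,1}, {0.5,3}, {1,-2}, {3,0}, {1,1}, {2,2} };
      double x0 = 1.0;

      /* Two blocks that could be reduced in parallel, combined afterwards. */
      Affine left  = reduce_block(m, 0, 3);
      Affine right = reduce_block(m, 3, 6);
      Affine total = compose(right, left);
      Affine seq   = reduce_block(m, 0, 6);    /* plain sequential fold for comparison */

      printf("blocked: %f, sequential: %f\n",
             total.a * x0 + total.b, seq.a * x0 + seq.b);
      return 0;
  }
  ```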
- Block-Parallel Data Analysis with DIY2
  Dmitriy Morozov (Lawrence Berkeley National Laboratory, [email protected]), Tom Peterka (Argonne National Laboratory, [email protected]). May 11, 2016.
  Abstract: DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.
  1 Introduction: The rapid growth of computing and sensing capabilities is generating enormous amounts of scientific data. Parallelism can reduce the time required to analyze these data, and distributed memory allows datasets larger than even the largest-memory nodes to be accommodated. The most familiar parallel computing model in distributed-memory environments is arguably data-parallel domain-decomposed message passing. In other words, divide the input data into subdomains, assign subdomains to processors, and communicate between processors with messages. (A generic sketch of such a block decomposition follows below.)
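  DIY2 itself is a C++ library; the following is only a generic C sketch of the domain-decomposition idea described in the introduction (splitting a 1-D index range into near-equal blocks, one per processing element), not DIY2's API. The function and variable names are hypothetical.

  ```c
  /* Sketch of 1-D block decomposition: rank r of 'size' ranks owns the
   * half-open index range [begin, end) of an n-element domain, with the
   * remainder spread one element at a time over the first ranks.        */
  #include <stdio.h>

  static void block_range(long n, int size, int rank, long *begin, long *end) {
      long base = n / size;
      long rem  = n % size;
      *begin = rank * base + (rank < rem ? rank : rem);
      *end   = *begin + base + (rank < rem ? 1 : 0);
  }

  int main(void) {
      long n = 10;          /* domain size                              */
      int  size = 4;        /* number of blocks / processing elements   */
      for (int rank = 0; rank < size; rank++) {
          long b, e;
          block_range(n, size, rank, &b, &e);
          printf("rank %d owns [%ld, %ld)\n", rank, b, e);
      }
      return 0;
  }
  ```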
- Parallel Algorithms for Operations on Multi-Valued Decision Diagrams
  The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18). Guillaume Perez (Dept. of Computer Science, Cornell University, Ithaca, NY 14850, USA, [email protected]) and Jean-Charles Régin (Université Nice Sophia Antipolis, CNRS, I3S, UMR 7271, 06900 Sophia Antipolis, France, [email protected]).
  Abstract: Multi-valued Decision Diagrams (MDDs) have been extensively studied in the last ten years. Recently, efficient algorithms implementing operators such as reduction, union, intersection, difference, etc., have been designed. They deal directly with the graph structure of the MDD, and a time reduction of several orders of magnitude in comparison to other existing algorithms has been observed. These operators have permitted a new look at MDDs, because extremely large MDDs can finally be manipulated, as shown by the models used to solve complex applications in music generation. However, MDDs become so large (50 GB) that minutes are sometimes required to perform some operations.
  From the introduction (excerpt): constraints represented by MDDs correspond to operations performed on MDDs; conjunction of constraints is accomplished by intersecting MDDs and disjunction by union. ... negation, intersection, etc., have been designed (Perez and Régin 2015; 2016). These are based on algorithms dealing directly with the graph structure of the MDD instead of function decomposition, which requires reaching the leaves of a branch as Shannon or Bryant did (Bryant 1986). By explicitly working on the structure, they dramatically accelerated the algorithm and observed a time reduction of several orders of magnitude in comparison to other existing algorithms. (A naive sequential sketch of one such operation, MDD intersection, follows below.)
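  The paper's contribution is parallel versions of these operators; the C sketch below only shows, naively and sequentially, what an MDD intersection computes: a child edge is kept for a value only when both operands have an outgoing edge for that value. It has no reduction, memoization, or parallelism, and the node layout and names are hypothetical rather than taken from the paper.

  ```c
  /* Naive sketch of MDD intersection.  A node at depth d < DEPTH has one
   * child pointer per domain value; NULL means "no edge".  Nodes at depth
   * DEPTH act as the accepting terminal.  Memory is not freed here.      */
  #include <stdio.h>
  #include <stdlib.h>

  #define ARITY 3   /* domain size of each variable (hypothetical) */
  #define DEPTH 2   /* number of variables (hypothetical)          */

  typedef struct Node { struct Node *child[ARITY]; } Node;

  static Node *mk_node(void) {
      Node *n = calloc(1, sizeof(Node));
      if (!n) { perror("calloc"); exit(1); }
      return n;
  }

  /* Intersect two MDDs rooted at a and b, both at the same depth. */
  static Node *intersect(const Node *a, const Node *b, int depth) {
      if (!a || !b) return NULL;           /* edge missing in one operand        */
      Node *r = mk_node();
      if (depth == DEPTH) return r;        /* terminal level: accepted by both   */
      int empty = 1;
      for (int v = 0; v < ARITY; v++) {
          r->child[v] = intersect(a->child[v], b->child[v], depth + 1);
          if (r->child[v]) empty = 0;
      }
      if (empty) { free(r); return NULL; } /* prune nodes with no outgoing edges */
      return r;
  }

  /* Count the tuples (paths to the terminal level) an MDD accepts. */
  static long count_tuples(const Node *n, int depth) {
      if (!n) return 0;
      if (depth == DEPTH) return 1;
      long total = 0;
      for (int v = 0; v < ARITY; v++)
          total += count_tuples(n->child[v], depth + 1);
      return total;
  }

  int main(void) {
      /* MDD a: accepts all tuples (x0, x1) with x0, x1 in {0, 1, 2}. */
      Node *a = mk_node(), *a1 = mk_node(), *ta = mk_node();
      for (int v = 0; v < ARITY; v++) { a->child[v] = a1; a1->child[v] = ta; }

      /* MDD b: accepts tuples with x0 in {0, 1} and x1 == 2. */
      Node *b = mk_node(), *b1 = mk_node(), *tb = mk_node();
      b->child[0] = b1; b->child[1] = b1; b1->child[2] = tb;

      Node *c = intersect(a, b, 0);
      printf("a: %ld tuples, b: %ld tuples, intersection: %ld tuples\n",
             count_tuples(a, 0), count_tuples(b, 0), count_tuples(c, 0));
      return 0;
  }
  ```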
- Modeling and Optimizing MapReduce Programs
  Jens Dörre, Sven Apel, Christian Lengauer, University of Passau, Germany. Technical Report MIP-1304, Department of Computer Science and Mathematics, University of Passau, December 2013.
  Abstract: MapReduce frameworks allow programmers to write distributed, data-parallel programs that operate on multisets. These frameworks offer considerable flexibility to support various kinds of programs and data. To understand the essence of the programming model better and to provide a rigorous foundation for optimizations, we present an abstract, functional model of MapReduce along with a number of customization options. We demonstrate that the MapReduce programming model can also represent programs that operate on lists, which differ from multisets in that the order of elements matters. Along with the functional model, we offer a cost model that allows programmers to estimate and compare the performance of MapReduce programs. Based on the cost model, we introduce two transformation rules aiming at performance optimization of MapReduce programs, which also demonstrates the usefulness of our model. In an exploratory study, we assess the impact of applying these rules to two applications. The functional model and the cost model provide insights at a proper level of abstraction into why the optimization works.
  1 Introduction: Since the advent of (cheap) cluster computing with Beowulf Linux clusters in the 1990s [1], Google's MapReduce programming model [2] has been one of the contributions with the highest practical impact in the field of distributed computing. MapReduce is closely related to functional programming, especially to the algebraic theory of list homomorphisms: functions that preserve list structure. (A minimal sketch of this map-then-combine structure follows below.)
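  As a minimal illustration of the list-homomorphism view mentioned in the introduction (not code from the report), the C sketch below computes a sum as a homomorphism: the input list is split into chunks, each chunk is folded independently (the "map" side), and the per-chunk results are combined with an associative operator (the "reduce" side).

  ```c
  /* Sketch of the list-homomorphism structure behind MapReduce:
   * h(xs ++ ys) = combine(h(xs), h(ys)) with an associative 'combine'.
   * Here h is a plain sum, and each chunk plays the role of one mapper. */
  #include <stdio.h>

  static long fold_chunk(const int *data, int len) {   /* "map" side: local fold     */
      long s = 0;
      for (int i = 0; i < len; i++) s += data[i];
      return s;
  }

  static long combine(long a, long b) {                /* "reduce" side: associative */
      return a + b;
  }

  int main(void) {
      int data[] = {3, 1, 4, 1, 5, 9, 2, 6};
      const int n = 8, chunks = 4, chunk_len = n / chunks;

      long result = 0;                                  /* identity of 'combine'      */
      for (int c = 0; c < chunks; c++)                  /* chunks could run in parallel */
          result = combine(result, fold_chunk(data + c * chunk_len, chunk_len));

      printf("sum = %ld\n", result);                    /* equals folding the whole list */
      return 0;
  }
  ```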