Bulk Synchronous and SPMD Programming CS315B Lecture 2

Prof. Aiken

The Bulk Synchronous Model

Bulk Synchronous Model
• A model
  – An idealized machine
• Originally proposed for analyzing parallel algorithms
  – Leslie Valiant, "A Bridging Model for Parallel Computation", 1990

The Machine
(figure of the idealized machine)

Computations
• A sequence of supersteps
• Repeat:
  – All processors do local computation
  – Barrier
  – All processors communicate
  – Barrier

What are some properties of this machine model?

Basic Properties
• Uniform
  – compute nodes
  – communication costs
• Separate communication and computation
• Synchronization is global

What are properties of this computational model?

The Idea
• Programs are
  – written for v virtual processors
  – run on p physical processors
• If v >= p log p then
  – Managing memory, communication, and synchronization can be done automatically within a constant factor of optimal

How Does This Work?
• Roughly
  – Memory addresses are hashed to a random location in the machine
  – Guarantees that, on average, memory accesses have the same cost
  – The extra log p factor of threads is multiplexed onto the p processors to hide the latency of memory requests
  – The processors are kept busy and do no more compute than necessary

Terminology
• SIMD – Single Instruction, Multiple Data
• SPMD – Single Program, Multiple Data

SPMD

SIMD = Vector Processing

    if (factor == 0)
        factor = 1.0
    A[1..N] = B[1..N] * factor;
    j += factor;

Picture
All lanes execute the same statement at the same time: A[1] = B[1] * factor, A[2] = B[2] * factor, A[3] = B[3] * factor, ..., under a single thread of control that also executes the test on factor and j += factor.

Comments
• Single thread of control
  – Global synchronization at each program instruction
• Can exploit fine-grain parallelism
  – Assumption of hardware support

SPMD = Single Program, Multiple Data

SIMD:
    if (factor == 0)
        factor = 1.0
    A[1..N] = B[1..N] * factor;
    j += factor;
    …

SPMD:
    if (factor == 0)
        factor = 1.0
    A[myid] = B[myid] * factor;
    j += factor;
    …

Picture
Each thread executes the whole program on its own slice of the data: thread 1 computes A[1] = B[1] * factor, thread 2 computes A[2] = B[2] * factor, and so on, with each thread also executing the replicated statements (the test on factor and j += factor).

Comments
• Multiple threads of control
  – One (or more) per processor
• Asynchronous
  – All synchronization is programmer-specified
• Threads are distinguished by myid
• Choice: Are variables local or global?

Comparison
• SIMD
  – Designed for tightly-coupled, synchronous hardware
  – i.e., vector units
• SPMD
  – Designed for clusters
  – Too expensive to synchronize every statement
  – Need a model that allows asynchrony

MPI
• Message Passing Interface
  – A widely used standard
  – Runs on everything
• A runtime system
• Most popular way to write SPMD programs
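As a concrete (and purely illustrative) reading of the SPMD column above, the sketch below writes the factor example as an MPI program, using the MPI rank as myid. The problem size N, the input data, and the file name are assumptions, not part of the lecture.

    /* spmd_factor.c -- hypothetical name; compile with mpicc, run with e.g. mpirun -n 4 */
    #include <mpi.h>
    #include <stdio.h>

    #define N 8                      /* illustrative number of array elements (assumption) */

    int main(int argc, char **argv) {
        int myid, p;
        double A[N] = {0}, B[N], factor = 0.0, j = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);   /* threads are distinguished by myid */
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        for (int i = 0; i < N; i++)             /* illustrative local data */
            B[i] = i + 1;

        /* The sequential part is replicated: every process runs this test. */
        if (factor == 0.0)
            factor = 1.0;

        /* The data-parallel part: each process updates only the element it owns
         * (assumes the program is run with p <= N processes). */
        if (myid < N)
            A[myid] = B[myid] * factor;

        /* Another replicated scalar statement, as in the SPMD column above. */
        j += factor;

        if (myid < N)
            printf("process %d of %d: A[%d] = %g, j = %g\n", myid, p, myid, A[myid], j);

        MPI_Finalize();
        return 0;
    }

Note that nothing synchronizes the processes between these statements; each runs asynchronously, which is exactly the property the Comments slide highlights.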
MPI Programs
• Standard sequential programs
  – All variables are local to a thread
• Augmented with calls to the MPI interface
  – SPMD model
  – Every thread has a unique identifier
  – Threads can send/receive messages
  – Synchronization primitives

MPI Point-to-Point Routines
• MPI_Send(buffer, count, type, dest, …)
• MPI_Recv(buffer, count, type, source, …)

Example

    for (….) {
        // p = number of chunks of the 1-D grid, id = process id, h[] = local chunk of the grid
        // boundary elements of h[] are copies of the neighbors' boundary elements
        .... Local computation ...
        // exchange with neighbors on a 1-D grid
        if ( 0 < id )   MPI_Send ( &h[1],   1, MPI_DOUBLE, id-1, 1, MPI_COMM_WORLD );
        if ( id < p-1 ) MPI_Recv ( &h[n+1], 1, MPI_DOUBLE, id+1, 1, MPI_COMM_WORLD, &status );
        if ( id < p-1 ) MPI_Send ( &h[n],   1, MPI_DOUBLE, id+1, 2, MPI_COMM_WORLD );
        if ( 0 < id )   MPI_Recv ( &h[0],   1, MPI_DOUBLE, id-1, 2, MPI_COMM_WORLD, &status );
        … More local computation ...
    }

MPI Point-to-Point Routines, Non-Blocking
• MPI_Isend(…)
• MPI_Irecv(…)
• MPI_Wait(…)

Example

    for (….) {
        // p = number of chunks of the 1-D grid, id = process id, h[] = local chunk of the grid
        // boundary elements of h[] are copies of the neighbors' boundary elements
        .... Local computation ...
        // exchange with neighbors on a 1-D grid
        if ( 0 < id )   MPI_Isend ( &h[1],   1, MPI_DOUBLE, id-1, 1, MPI_COMM_WORLD, &req1 );
        if ( id < p-1 ) MPI_Irecv ( &h[n+1], 1, MPI_DOUBLE, id+1, 1, MPI_COMM_WORLD, &req2 );
        if ( id < p-1 ) MPI_Isend ( &h[n],   1, MPI_DOUBLE, id+1, 2, MPI_COMM_WORLD, &req3 );
        if ( 0 < id )   MPI_Irecv ( &h[0],   1, MPI_DOUBLE, id-1, 2, MPI_COMM_WORLD, &req4 );
        MPI_Wait(…1...)
        MPI_Wait(...2...)
        … More local computation ...
    }

MPI Collective Communication Routines
• MPI_Barrier(…)
• MPI_Bcast(…)
• MPI_Scatter(…)
• MPI_Gather(…)
• MPI_Reduce(…)

Typical Structure

    communicate_get_work_to_do();
    barrier;   // not always needed
    do_local_work();
    barrier;
    communicate_write_results();

What does this remind you of?
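The loop bodies above are deliberately schematic. As one hedged, self-contained reading of the blocking 1-D boundary exchange combined with the typical-structure skeleton, the following program should compile under any MPI implementation; the chunk size, step count, initial data, and 3-point averaging update are illustrative assumptions standing in for the slides' "local computation".

    /* halo1d.c -- hypothetical name; compile: mpicc halo1d.c -o halo1d, run: mpirun -n 4 ./halo1d */
    #include <mpi.h>
    #include <stdio.h>

    #define NLOCAL 8                 /* interior points per process (assumption) */
    #define STEPS  10                /* number of supersteps (assumption) */

    int main(int argc, char **argv) {
        int id, p;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &id);
        MPI_Comm_size(MPI_COMM_WORLD, &p);

        /* h[0] and h[NLOCAL+1] are ghost cells: copies of the neighbors' boundary elements. */
        double h[NLOCAL + 2] = {0}, hnew[NLOCAL + 2] = {0};
        if (id == 0)
            h[0] = 1.0;              /* a fixed boundary value at the left end of the grid */

        for (int step = 0; step < STEPS; step++) {
            MPI_Status status;

            /* Exchange boundary elements with neighbors on the 1-D grid (same calls as above). */
            if (0 < id)
                MPI_Send(&h[1], 1, MPI_DOUBLE, id - 1, 1, MPI_COMM_WORLD);
            if (id < p - 1)
                MPI_Recv(&h[NLOCAL + 1], 1, MPI_DOUBLE, id + 1, 1, MPI_COMM_WORLD, &status);
            if (id < p - 1)
                MPI_Send(&h[NLOCAL], 1, MPI_DOUBLE, id + 1, 2, MPI_COMM_WORLD);
            if (0 < id)
                MPI_Recv(&h[0], 1, MPI_DOUBLE, id - 1, 2, MPI_COMM_WORLD, &status);

            /* Local computation: a simple 3-point averaging sweep over the interior. */
            for (int i = 1; i <= NLOCAL; i++)
                hnew[i] = 0.5 * (h[i - 1] + h[i + 1]);
            for (int i = 1; i <= NLOCAL; i++)
                h[i] = hnew[i];
        }

        printf("process %d of %d: h[1] = %f\n", id, p, h[1]);
        MPI_Finalize();
        return 0;
    }

Each iteration is one superstep: communicate, then compute. There is no explicit MPI_Barrier here; the pairwise sends and receives already order the phases, which echoes the "not always needed" comment in the typical-structure slide.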
PGAS Model
• PGAS = Partitioned Global Address Space
• There is one global address space
• But each thread owns a partition of the address space that is more efficient to access
  – i.e., the local memory of a processor
• Equivalent in functionality to MPI
  – But typically presented as a programming language
  – Examples: Split-C, UPC, Titanium

PGAS Languages
• No library calls for communication
• Instead, variables can name memory locations on other machines

    // Assume y points to a remote location
    // The following is equivalent to a send/receive
    x = *y

PGAS Languages
• Also provide collective communication
• Barrier
• Broadcast/Reduce
  – 1-many
• Exchange
  – All-to-all

PGAS vs. MPI
• Programming model very similar
  – Both provide SPMD
    • One thread per core
    • No need for the compiler to discover parallelism
    • No danger of an overwhelming # of threads
• Model exposes the memory architecture
  – Local vs. global variables
  – Local computation vs. sends/receives

Summary
• SPMD is well-matched to cluster programming
  – Also works well on shared-memory machines
• From a pragmatic point of view, MPI rules
  – Easy to add MPI to an existing sequential language
• For productivity, PGAS is better
  – MPI programs are filled with low-level details of MPI calls
  – PGAS programs are easier to modify
  – PGAS compilers can know more and do a better job

Control

SIMD:
    if (factor == 0)
        factor = 1.0
    forall (i = 1..N)
        A[i] = B[i] * factor;
    j += factor;
    …

SPMD:
    if (factor == 0)
        factor = 1.0
    A[myid] = B[myid] * factor;
    j += factor;
    …

Analysis

Control, Cont.
• SPMD replicates the sequential part of the SIMD computation
  – Across all threads!
• Why?
  – Often cheaper to replicate computation in parallel than to compute in one place and broadcast
  – A general principle …

Global Synchronization Revisited
• In the presence of non-blocking global memory operations, we also need memory fence operations
• Two choices
  – Have separate classes of memory and control synchronization operations
    • E.g., barrier and memory_fence
  – Have a single set of operations
    • E.g., barrier implies both memory and control synchronization

Message Passing Implementations
• Idea: a memory fence is a special message sent on the network; when it arrives, all the memory operations are complete
• To work, the underlying message system must deliver messages in order
• This is one of the key properties of MPI
  – And most message systems

Bulk Synchronous/SPMD Model
• Easy to understand
• Phase structure guarantees no data races
  – Barrier synchronization is also easy to understand
• Fits many problems well

But …
• Assumes a 2-level memory hierarchy
  – Local/global, a flat collection of homogeneous sequential processors
• No overlap of communication and computation
• Barriers scale poorly with machine size
  – (# of operations lost) * (# of processors)

Hierarchy
• Current & future machines are more hierarchical
  – 3-4 levels, not 2
• Leads to programs written in a mix of
  – MPI (network level)
  – OpenMP (node level) + vectorization
  – CUDA (GPU level)
• Each is a different programming model

No Overlap of Computation/Communication
• Leaves a major portion of the machine idle in each phase
• And this potential is lost at many scales
  – Hierarchical machines again
• Increasingly, communication is key
  – Data movement is what matters
  – Most of the execution time, most of the energy

Global Operations
• Global operations (such as barriers) are bad
  – Require synchronization across the machine
  – Especially bad when there is performance variation among participating threads
• Need a model that favors asynchrony
  – Couple as few things together as possible

Global Operations, Continued
• MPI has evolved to include more asynchronous operations

I/O
• How do programs get initial data?
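Returning to the no-overlap critique above: even within MPI, an SPMD program can recover some overlap by starting the boundary exchange with the non-blocking calls from the earlier slides, updating the interior points (which do not need the ghost cells) while the messages are in flight, and only then waiting. The sketch below shows one superstep written that way; the function name, the interior/boundary split, and the assumption n >= 2 are illustrative rather than from the lecture, and the routine is meant to drop into a driver like the halo-exchange program shown earlier.

    /* overlap1d.c -- hypothetical name; one superstep with communication/computation overlap. */
    #include <mpi.h>

    /* h has n interior points plus ghost cells h[0] and h[n+1]; assumes n >= 2. */
    void superstep(double *h, double *hnew, int n, int id, int p) {
        MPI_Request req[4];
        int nreq = 0;

        /* Start the boundary exchange, but do not wait for it yet. */
        if (0 < id) {
            MPI_Isend(&h[1], 1, MPI_DOUBLE, id - 1, 1, MPI_COMM_WORLD, &req[nreq++]);
            MPI_Irecv(&h[0], 1, MPI_DOUBLE, id - 1, 2, MPI_COMM_WORLD, &req[nreq++]);
        }
        if (id < p - 1) {
            MPI_Isend(&h[n], 1, MPI_DOUBLE, id + 1, 2, MPI_COMM_WORLD, &req[nreq++]);
            MPI_Irecv(&h[n + 1], 1, MPI_DOUBLE, id + 1, 1, MPI_COMM_WORLD, &req[nreq++]);
        }

        /* Interior points do not depend on the ghost cells, so this work overlaps
         * with the messages that are still in flight. */
        for (int i = 2; i <= n - 1; i++)
            hnew[i] = 0.5 * (h[i - 1] + h[i + 1]);

        /* Only now block until all outstanding sends and receives have completed ... */
        MPI_Waitall(nreq, req, MPI_STATUSES_IGNORE);

        /* ... and finish the two boundary points that needed the ghost values. */
        hnew[1] = 0.5 * (h[0] + h[2]);
        hnew[n] = 0.5 * (h[n - 1] + h[n + 1]);
    }

This is still bulk synchronous in structure; it only shrinks the idle window inside each phase rather than removing the phase boundaries the later slides argue against.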