Task Level Parallelism


The topic of this chapter is thread-level parallelism. While thread-level parallelism falls within the textbook's classification alongside ILP and data-level parallelism, it also falls into the broader topic of parallel and distributed computing. In the next set of slides, I will attempt to place you in the context of this broader computation space, called task level parallelism. Of course, a proper treatment of parallel or distributed computing is worthy of an entire semester (or two) of study; I can only give you a brief exposure to the topic. The text highlighted in green in these slides contains external hyperlinks.

Classification of Parallelism

The classification is a 2 × 2 matrix of software (sequential vs. concurrent) against hardware (serial vs. parallel):

• Sequential software on serial hardware: some problem written as a sequential program (the MATLAB example from the textbook), executed on a serial platform.
• Concurrent software on serial hardware: some problem written as a concurrent program (the O/S example from the textbook), executed on a serial platform.
• Sequential software on parallel hardware: some problem written as a sequential program (the MATLAB example from the textbook), executed on a parallel platform.
• Concurrent software on parallel hardware: some problem written as a concurrent program (the O/S example from the textbook), executed on a parallel platform.

Flynn's Classification of Parallelism

[Figure: block diagrams of (a) SISD, (b) SIMD, (c) MISD, and (d) MIMD computers. Legend: CU: control unit, PU: processor unit, MM: memory unit, SM: shared memory, IS: instruction stream, DS: data stream.]

Task Level Parallelism

• Task Level Parallelism: organizing a program or computing solution into a set of processes/tasks/threads for simultaneous execution. Thread-level parallelism is a form of task-level parallelism. Task-level parallelism generally breaks down into one of two forms (a small sketch of the second form follows this slide):
  ◦ running the various steps of the algorithm as different (communicating) tasks/threads;
  ◦ SPMD style: Single Program, Multiple Data.
• Conventionally one might think of task-level parallelism (and the MIMD processing paradigm) as being used for a single program or operation; however, request-level parallelism (e.g., serving HTTP requests for a website) is also generally addressed/studied by hardware solutions in this same space.
• Request-level processing and related problems with independent transactions (e.g., bitcoin mining, web page requests) fall into a class of problems called embarrassingly parallel: "embarrassingly" because they have virtually no synchronization requirements and thus show linear speedup (measured by the number of compute nodes).
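
To make the SPMD form concrete, the sketch below is a minimal illustration (it is not taken from the slides): every worker runs the same program on a different chunk of the data. The worker count, chunk layout, and the sum-of-squares workload are illustrative assumptions.

```python
# SPMD sketch: the same function runs on every worker, each over its own
# slice of the data, and the partial results are reduced at the end.
from multiprocessing import Pool

def sum_of_squares(chunk):
    # Identical program text for every task; only the data differs.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    # Single Program, Multiple Data: one chunk per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(processes=n_workers) as pool:
        partial = pool.map(sum_of_squares, chunks)
    # The chunks are independent, so the only synchronization point is
    # this final reduction, which is the embarrassingly parallel case.
    print(sum(partial))
```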
Parallel, Distributed, & Concurrent Computing

• Parallel Computing: a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into discrete parts that can be solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism.
• Distributed Computing / Distributed Systems: a system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.
• Concurrent Computing: a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel. Concurrent programs (processes or threads) can be executed (i) on a single processor by time-slicing, or (ii) in parallel by assigning each computational process to one of a set of processors. The main challenges in designing concurrent programs are ensuring the correct sequencing of the interactions or communications between the different computational executions, and coordinating access to resources that are shared among them.

The terms concurrent computing, parallel computing, and distributed computing have a lot of overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:

• In parallel computing, all processors may have access to a shared memory to exchange information between processors.
• In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.

Decomposing MIMD: Multiprocessors

• Multiprocessors: computers consisting of tightly coupled processors that typically present a shared memory space. The principal topic of this chapter.
• Symmetric (shared-memory) Multiprocessors (SMP): small-scale multiprocessors with a shared memory space providing mostly uniform memory access (UMA). Example: single-processor multicore x86 machines.
• Distributed Shared Memory (DSM) multiprocessors: generally larger solutions with a distributed memory organization that provides nonuniform memory access (NUMA). Much trickier to program effectively, since non-local memory references (which present to the programmer as just another location in their address space) can be surprisingly costly to access. In fact, all parallel programming is much harder to exploit for speedup than it appears: balancing the computation and managing the synchronization costs between the parallel tasks is quite difficult.

Decomposing MIMD: Multicomputers

• Multicomputers: computers consisting of loosely coupled processors that typically present a distributed memory space. Not to be confused with a distributed shared memory (DSM) multiprocessor.
• Beowulf Clusters: a collection of (general-purpose) compute nodes networked together for parallel computing. Often presented as a rack of blade computers, but also existing as a collection of independent computers networked together for the purpose of parallel computing.
• Networking support is generally provided by standard networking hardware such as Ethernet or InfiniBand.
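
The two MIMD organizations just described differ mainly in how tasks exchange information: through a shared address space that must be synchronized, or through explicit messages between private memories. The sketch below contrasts the two styles in miniature; it is not from the slides, and the counter workload, worker count, and names are illustrative assumptions.

```python
# Shared-memory style (multiprocessor-like): threads update one shared
# counter and must coordinate with a lock.
# Message-passing style (multicomputer-like): each process keeps private
# state and communicates only by sending its result over a queue.
import threading
from multiprocessing import Process, Queue

counter = 0
lock = threading.Lock()

def add_shared(n):
    global counter
    for _ in range(n):
        with lock:              # synchronize access to the shared resource
            counter += 1

def add_private(n, q):
    local = 0                   # private memory; nothing is shared
    for _ in range(n):
        local += 1
    q.put(local)                # exchange information by passing a message

if __name__ == "__main__":
    threads = [threading.Thread(target=add_shared, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared-memory total:", counter)

    q = Queue()
    procs = [Process(target=add_private, args=(100_000, q)) for _ in range(4)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]   # drain the queue before joining
    for p in procs:
        p.join()
    print("message-passing total:", sum(results))
```

The shared-memory version is simpler to write but pays a synchronization cost on every update; the message-passing version has no shared state to protect, which is exactly the property that makes loosely coupled clusters attractive.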
Decomposing MIMD: Massively Parallel Processing (MPP)

Large-scale (generally above 1K nodes) parallel computing, in all forms: SIMD, MIMD, GPGPUs, clusters, and tightly coupled multicomputers (e.g., the IBM Blue Gene family). A very interesting problem space: fault tolerance and fault recovery become far more important than in other spaces. As the size and scale of processing hardware increases, it's not clear that MPP as a designation is, or will continue to be, relevant/significant.

Decomposing MIMD: Warehouse-Scale

Clusters of tens of thousands (and beyond) of independent compute nodes, generally providing a compute platform for supporting large-scale request-level parallelism. The topic of the next chapter.

Speedup and Scaling

• Strong Scaling: increasing parallelism in hardware achieves increased speedup on a fixed problem size.
• Weak Scaling: increased speedup is achieved only by increasing the problem size as the hardware parallelism increases.

Parallelism is Hard / Amdahl's Law

[Figure: "Amdahl's Law: Idealized parallelism": speedup (0 to 20) vs. number of processors (1 to 65536) for 95%, 90%, 75%, and 50% parallel fractions.]

The speedup just isn't there by conventional approaches; gaining speedup is very difficult. In fact, this graph shows an idealized speedup without any consideration for synchronization costs.

Speedup = 1 / ((1 − Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced)

Restating: Speedup = 1 / ((1 − % affected) + % left after optimization)

Amdahl's Law for Parallelism with Overhead, Naive

[Figure: "Amdahl's Law: Introducing Overhead": speedup (0 to 100) vs. number of processors (1 to 65536), assuming 99% parallel, for the original curve, a curve adding a fixed 0.5% overhead, and a curve doubling the runtime of the parallel portion.]

So we look at both fixed and variable overheads. The first line adds a fixed overhead to the parallel portion equal to 0.5% of the original computation. The second line shows a variable overhead equal to doubling the runtime costs of the parallel components (not an unreasonable possibility). Neither the fixed nor the variable overhead costs are accurate, but they can give us some curves to consider. In reality, the costs of synchronization will be very difficult to establish; I cannot really give you much direction here, sorry.

Speedup = 1 / ((1 − %parallel) + (%parallel / #processors + 0.005))

Speedup = 1 / ((1 − %parallel) + (%parallel / #processors) × 2)

Gustafson's Law

[Figure: "Gustafson's Law: Scaled speedup": speedup vs. number of processors (up to 120) for 95%, 90%, 75%, and 50% parallel fractions; the curves grow roughly linearly with P.]

S(P) = P − α · (P − 1), where S is the speedup, P is the number of processors, and α is the non-parallelizable fraction of any parallel process.

From Wikipedia: Gustafson called his metric scaled speedup, because in the above expression S(P) is the ratio of the total, single-process execution time to the per-process parallel execution time; the former scales with P, while the latter is assumed fixed or nearly so. This is in contrast to Amdahl's Law, which takes the single-process execution time to be the fixed quantity and compares it to a shrinking per-process parallel execution time. Thus, Amdahl's law is based on the assumption of a fixed problem size: it assumes the overall workload of a program does not change with respect to machine size (i.e., the number of processors). Both laws assume the parallelizable part is evenly distributed over P processors.
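
As a quick numeric sanity check of the three formulas above, the following minimal Python sketch (not part of the original slides) evaluates them at a few processor counts, using the 99%-parallel fraction from the overhead plot. Treating the fixed overhead as 0.005 (0.5% of the original computation) is an assumption taken from the prose, since the exact constant is not legible in the extracted slide.

```python
# Numeric comparison of idealized Amdahl, Amdahl with a fixed overhead,
# and Gustafson scaled speedup. Values are illustrative only.

def amdahl(parallel_fraction, processors):
    """Idealized Amdahl speedup: the serial fraction bounds the gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

def amdahl_fixed_overhead(parallel_fraction, processors, overhead=0.005):
    """Amdahl speedup with a fixed overhead added to the parallel portion
    (assumed here to be 0.5% of the original computation)."""
    return 1.0 / ((1.0 - parallel_fraction)
                  + parallel_fraction / processors
                  + overhead)

def gustafson(parallel_fraction, processors):
    """Gustafson scaled speedup: S(P) = P - alpha * (P - 1)."""
    alpha = 1.0 - parallel_fraction
    return processors - alpha * (processors - 1)

if __name__ == "__main__":
    f = 0.99  # 99% parallel, as in the overhead plot
    print(f"{'P':>7} {'Amdahl':>8} {'w/ovhd':>8} {'Gustafson':>10}")
    for p in (4, 64, 1024, 65536):
        print(f"{p:>7} {amdahl(f, p):>8.1f} "
              f"{amdahl_fixed_overhead(f, p):>8.1f} {gustafson(f, p):>10.1f}")
```

Even at 65,536 processors the idealized Amdahl speedup saturates near 100, and the fixed overhead pulls it down to roughly 66, while Gustafson's scaled speedup keeps growing with P because the problem size is assumed to grow with the machine.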