Enabling OpenMP Task Parallelism on Multi-FPGAs


Enabling OpenMP Task Parallelism on Multi-FPGAs

Ramon Nepomuceno, Renan Sterle, Guilherme Valarini, Marcio Pereira, Hervé Yviquel, and Guido Araujo
Institute of Computing, University of Campinas, Brazil
{ramon.nepomuceno, mpereira, herve, [email protected]}, {r176536, [email protected]}

arXiv:2103.10573v2 [cs.DC] 22 Mar 2021

Abstract—FPGA-based hardware accelerators have received increasing attention mainly due to their ability to accelerate deep pipelined applications, thus resulting in higher computational performance and energy efficiency. Nevertheless, the amount of resources available on even the most powerful FPGA is still not enough to speed up very large modern workloads. To achieve that, FPGAs need to be interconnected in a Multi-FPGA architecture capable of accelerating a single application. However, programming such an architecture is a challenging endeavor that still requires additional research. This paper extends the OpenMP task-based computation offloading model to enable a number of FPGAs to work together as a single Multi-FPGA architecture. Experimental results for a set of OpenMP stencil applications running on a Multi-FPGA platform consisting of 6 Xilinx VC709 boards interconnected through fiber-optic links have shown close to linear speedups as the number of FPGAs and IP-cores per FPGA increase.

Fig. 1: An optical-link interconnected Multi-FPGA architecture running an OpenMP pipelined application.

I. INTRODUCTION

With the limits imposed by the power density of semiconductor technology, heterogeneous systems became a design alternative that combines CPUs with domain-specific accelerators to improve power-performance efficiency [1]. A modern heterogeneous system typically combines general-purpose CPUs and GPUs to speed up complex scientific applications [2]. However, for many specialized applications that can benefit from pipelined parallelism (e.g., FFT, networking), FPGA-based hardware accelerators have been shown to produce improved power-performance numbers [3], [4], [5], [6]. Moreover, the FPGA's reconfigurability facilitates the adaptation of the accelerator to distinct types of workloads and applications. To leverage this, cloud service companies like Microsoft Azure [7] and Amazon AWS [8] are offering heterogeneous computing nodes with integrated FPGAs.

Given their small external memory bandwidth [9], FPGAs do not perform well for applications that require intense memory accesses.
Pipelined FPGA accelerators [10], [11], [12] have been designed to address this problem, but such designs are constrained to the boundaries of a single FPGA or are limited by the number of FPGAs that they can handle. By connecting multiple FPGAs, one can design deep pipelines that go beyond the border of one FPGA, thus allowing data to be transferred through optical links from one FPGA to another without using external memory as temporary storage. Such deep pipeline accelerators can considerably expand the applicability of FPGAs, thus enabling increasing speedups as more FPGAs are added to the system [13].

Unfortunately, programming such a Multi-FPGA architecture is a challenging endeavor that still requires additional research [14], [15]. Synchronizing the accelerators inside the FPGA and seamlessly managing data transfers between them are still significant design challenges that restrict the adoption of such architectures. This paper addresses this problem by extending the LLVM OpenMP task programming model [16] to Multi-FPGA architectures.

At a higher abstraction level, the programming model proposed in this paper enables the programmer to see the FPGA as a regular OpenMP device and the IP-core accelerators (IPs) as OpenMP tasks. In the proposed approach, the OpenMP task dependence mechanism transparently coordinates the IPs' work to run the application. For example, consider the simple problem of processing the elements of a vector V in a pipeline fashion using four IPs (IP0-IP3) programmed in two FPGAs. Each IPi (i = 0-3) performs some specific computation foo(V,i).

The Multi-FPGA architecture used in this example is shown in Figure 1 and contains two VC709 FPGA boards interconnected by two fiber-optic links. (A similar approach can also be used for modern Alveo FPGAs.) Each FPGA is programmed with: (a) a module for communication with the PCIe interface (DMA/PCIE); (b) two IPs from the set IP0-IP3; (c) a NET module for communication with the optical fibers; (d) a Virtual FIFO module (VFIFO) for communication with memory; (e) a MAC frame handler (MFH) to pack/unpack data; and (f) a packet switch (A-SWT) module capable of moving data among the IPs, even if they sit on FPGAs from two distinct boards. As shown in the figure, the vector is initially moved from the host memory (left of Figure 1) and then pushed through the IP0-IP3 pipeline, returning the final result into the host memory.

The OpenMP program required to execute this simple application uses just a few code lines (see Listing 3). Its simplicity is possible due to the techniques proposed in this paper, which leverage the OpenMP task dependence and computation offloading abstractions to hide the complexity needed to move the vector across the four IPs. The proposed techniques extend the OpenMP runtime with an FPGA device plugin which: (a) maps task data to/from the FPGAs; (b) transparently handles the data dependencies between IPs located in distinct FPGAs; and (c) eases the synchronization of the IPs' execution.

The main contributions of this paper are summarized below:

• A new Clang/LLVM plugin that understands FPGA boards as OpenMP devices and uses the OpenMP declare variant directive to specify hardware IPs (a hedged sketch of such a binding follows this list);
• A mechanism based on the OpenMP task dependence and computation offloading model that enables transparent communication of IPs in a Multi-FPGA architecture;
• A programming model based on OpenMP task parallelism, which makes it simple to move data among FPGAs, CPUs, or other acceleration devices (e.g., GPUs), and that allows the programmer to use a single programming model to run their application on a truly heterogeneous architecture.
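To make the first contribution concrete, the fragment below is a minimal sketch of how a hardware IP could be bound to a host function with declare variant. The variant name foo_fpga_ip and the use of the OpenMP 5.0 fpga device-kind trait as the match selector are assumptions for illustration; the exact selector used by the paper's plugin is not shown in this excerpt.

/* Hedged sketch of a declare variant binding (assumed names and
   selector; the paper's plugin may identify IPs differently). */

/* Implementation backed by the hardware IP-core (assumed name). */
void foo_fpga_ip(int *V, int i);

/* Base function with its CPU implementation. When a call to foo() is
   compiled for a device whose kind matches "fpga", the compiler
   substitutes foo_fpga_ip() for it. */
#pragma omp declare variant(foo_fpga_ip) match(device = {kind(fpga)})
void foo(int *V, int i);

With a binding of this kind, the application source keeps calling foo(V, i); the choice between the software implementation and the hardware IP is resolved by the compiler per target device rather than by hand-written dispatch code.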
The rest of the paper is organized as follows. Section II discusses background material and Section III details the software and hardware mechanisms that were designed to enable OpenMP task parallelism on a Multi-FPGA architecture. Section IV shows how this architecture can be used to design a scalable stencil pipeline application. Section V describes the experimental setup and analyzes the results. Section VI discusses related works, and finally Section VII concludes the work.

II. BACKGROUND

This section provides background material required for understanding the proposed approach. Section II-A reviews the basic concepts of the OpenMP accelerator programming model, while Section II-B details the Xilinx modules used to assemble the testing platform.

A. OpenMP Accelerator Model

The OpenMP specification [17] defines a programming model based on code annotations that require minimal modifications of the program to expose potential parallelism. OpenMP has been successfully used to parallelize programs on CPUs and GPUs residing on a single computer node.

Tasks were introduced to OpenMP in version 3.0 [18] to expose a higher degree of parallelism through the non-blocking execution of code fragments annotated with task directives. Dependencies between tasks are specified by the depend clause, which the programmer uses to define the set of input variables that the task depends on and the output variables that the task writes to. Whenever the dependencies of a given task are satisfied, the OpenMP runtime dispatches it to a ready queue which feeds a pool of worker threads. The OpenMP runtime manages the task graph, handles data management, and creates and synchronizes threads, among other activities.

1  int V[M];
2  bool deps[N+1];
3  #pragma omp parallel
4  #pragma omp single
5  for (int i = 0; i < N; i++) {
6    #pragma omp task \
7      depend(in:deps[i]) depend(out:deps[i+1])
8    foo(V, i);
9  }

Listing 1: OpenMP tasks on CPUs.

Listing 1 shows an example of a simple program that uses OpenMP tasks to perform a sequence of increments on the vector V, similar to the example discussed in Figure 1. First, the program creates a pool of worker threads. This is done using the #pragma omp parallel directive (line 3). It is with this set of threads that an OpenMP program runs in parallel. Next, the #pragma omp single directive (line 4) is used to select one of the worker threads. This selected thread is called the control thread and is responsible for creating the tasks. The #pragma omp task directive (line 6) is used to effectively build N worker tasks, each one encapsulating a call to function foo(V,i) (line 8) that performs some specific operation on the vector V depending on the task (loop) index. Worker threads are created by the OpenMP runtime to run on the cores of the system. In order to ensure the execution order of the tasks, the depend clause (line 7) is used to specify the input and output dependencies of each task. In the specific case of the example of Listing 1, we have to introduce a deps vector which ensures that the foo(V,i) tasks are executed in pipeline order.

1   int V[M];
2   bool deps[N+1];
3   #pragma omp parallel
4   #pragma omp single
5   for (int i = 0; i < N; i++) {
6     #pragma omp target device(GPU) \
7       depend(in:deps[i]) depend(out:deps[i+1]) \
8       map(tofrom:V[:M]) nowait
9     foo(V, i);
10  }

Listing 2: Offloading OpenMP tasks to a GPU.

Now assume the user wants to accelerate the tasks described above on a GPU. For this purpose, consider the code in Listing 2, which is the version of Listing 1 using a GPU as an acceleration device. In Listing 2 the target directive (line 6) offloads each task's computation to the device, while the map clause (line 8) specifies how the vector V is transferred to and from the device memory.
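The excerpt ends here, before the paper's Listing 3 is reached. Purely as a hedged illustration, the sketch below adapts Listing 2 to an FPGA device along the lines the text suggests; the device identifier FPGA, the constants M and N, and the assumption that the paper's plugin reuses the target/depend/map clauses unchanged are guesses, not the paper's actual Listing 3.

#include <stdbool.h>

#define M    1024  /* vector length (assumed value) */
#define N    4     /* number of pipeline stages/IPs (assumed value) */
#define FPGA 1     /* stands for an OpenMP device number (assumed) */

void foo(int *V, int i);  /* per-stage computation, as in Listing 1 */

void pipeline(int V[M])
{
  bool deps[N + 1];
  #pragma omp parallel
  #pragma omp single
  for (int i = 0; i < N; i++) {
    /* Each task targets the FPGA device; the depend chain keeps the
       IPs in pipeline order, exactly as the GPU version does. The
       nowait tasks complete at the implicit barrier that ends the
       single construct. */
    #pragma omp target device(FPGA) \
        depend(in: deps[i]) depend(out: deps[i+1]) \
        map(tofrom: V[:M]) nowait
    foo(V, i);
  }
}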