A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Review of CUDA, MapReduce, and Pthreads Parallel Computing Models

Kato Mivule, Benjamin Harvey, Crystal Cobb, and Hoda El Sayed
Computer Science Department, Bowie State University, Bowie, MD, USA
[email protected] [email protected] [email protected] [email protected]

Abstract – The advent of high performance computing (HPC) and graphics processing units (GPUs) presents an enormous computation resource for large data transactions (big data) that require parallel processing for robust and prompt data analysis. While a number of HPC frameworks have been proposed, parallel programming models present a number of challenges – for instance, how to fully utilize features in the different programming models to implement and manage parallelism via multi-threading in both CPUs and GPUs. In this paper, we take an overview of three parallel programming models: CUDA, MapReduce, and Pthreads. The goal is to explore the literature on the subject and provide a high-level view of the features presented in the programming models to assist high performance computing users with a concise understanding of parallel programming concepts and thus faster implementation of big data projects using high performance computing.

Keywords: Parallel programming models, GPUs, Big data, CUDA, MapReduce, Pthreads

1. INTRODUCTION

The increasing volume of data (big data) generated by entities unquestionably requires high performance parallel processing models for robust and speedy data analysis. The need for parallel computing has resulted in a number of programming models proposed for high performance computing. However, parallel programming models that interface between the high performance machines and programmers present challenges, one of the difficulties being how to fully utilize features in the different programming models to implement computational units, such as multi-threads, on both CPUs and GPUs efficiently.
Yet still, with the advent of GPUs, additional computation resources for the big data challenge are presented. However, fully utilizing GPU parallelization resources in translating sequential code for HPC implementation is a challenge. Therefore, in this paper, we take an overview of three parallel programming models: CUDA, MapReduce, and Pthreads. Our goal in this study is to give an overall high-level view of the features presented in the parallel programming models to assist high performance computing users with a faster understanding of parallel programming concepts and thus better implementation of big data parallel programming projects. Although a number of models could have been chosen for this overview, we focused on CUDA, MapReduce, and Pthreads because of the underlying features in the three models that could be used for multi-threading. The rest of this paper is organized as follows: In Section 2, a review of the latest literature on parallel computing models and determinism is done. In Section 3, an overview of CUDA is given, while in Section 4 a review of MapReduce features is presented. In Section 5, a topographical exploration of features in Pthreads is given. Finally, in Section 6, a conclusion is done.

2. BACKGROUND AND RELATED WORK

As HPC systems increasingly become a necessity in mainstream computing, a number of researchers have done work on documenting various features in parallel computing models. Although a number of features in parallel programming models are discussed in the literature, the CUDA, MapReduce, and Pthreads models utilize multi-threading and, as such, a look at abstraction and determinism in multi-threading is given consideration in this paper. In their survey on parallel programming models, Kasim, March, Zhang, and See (2008) described how parallelism is abstracted and presented to programmers by investigating six parallel programming models used by the HPC community, namely Pthreads, OpenMP, CUDA, MPI, UPC, and Fortress [1]. Furthermore, Kasim et al. proposed a set of criteria to evaluate how parallelism is abstracted and presented to programmers, namely (i) system architecture, (ii) programming methodologies, (iii) worker management, (iv) workload partitioning scheme, (v) task-to-worker mapping, (vi) synchronization, and (vii) communication model [1].

On the feature of determinism in parallel programming, Bocchino, Adve, Adve, and Snir (2009) noted that with most parallel programming models, subtle coding errors could lead to inadvertent non-deterministic actions and bugs that are hard to find and debug [2]. Therefore, Bocchino et al. argued for a parallel programming model that was deterministic by default unless the programmer unambiguously decided to use non-deterministic constructs [2]. Non-determinism is a condition in multi-threading parallel processing in which erratic and unpredictable behavior, bugs, and errors occur among threads due to their random interleaving and implicit communication when accessing shared memory, thus making it more problematic to program in the parallel model than in the sequential von Neumann model [3] [4].

Additionally, on the feature of determinism in parallel programming languages, McCool (2010) argued that a more structured approach in the design and implementation of parallel algorithms was needed in order to reduce the complexity of developing software [5]. Moreover, McCool (2010) suggested the utilization of deterministic algorithmic skeletons and patterns in the design of parallel programs [5]. McCool (2010) generally proposed that any system that maintains a specification and composition of a high-quality pattern can be used as a template to steer developers in designing more dependable deterministic parallel programs [5]. However, Yang, Cui, Wu, Tang, and Hu (2013) contended that determinism in parallel programming was difficult to achieve since threads are non-deterministic by default, and that making parallel programs reliable with stable multi-threading was therefore more practical [6]. On the other hand, Garland, Kudlur, and Zheng (2012) observed that while heterogeneous architectures and massive multi-threading concepts have been accepted in the larger HPC community, modern programming models suffer from a defect of limited concurrency, undifferentiated flat memory storage, and a homogenous processing of elements [7]. Garland et al. then proposed a unified programming model for heterogeneous machines that provides constructs for massive parallelism, synchronization, and data assignment, executed on the whole machine [7].

Dean and Ghemawat [33] presented a run-time parallel, distributed, scalable programming model, which encompassed a map and a reduce function in order to map and reduce key-value pairs [33]. Though there were many parallel, distributed, scalable programming models in existence, there was no existing model whose development goal was to create a programming model that was easy to use. The innate ability of MapReduce to do its parallel and distributed computation across clusters allows developers to express simple computations while hiding the messy details of communication, parallelization, fault tolerance, and load balancing [33]. On fully utilizing the GPU and the MapReduce model, Ji and Ma (2011) explored the prospective value of allowing for a GPU MapReduce structure that uses multiple levels of the GPU memory hierarchy [8]. To achieve this goal, Ji and Ma (2011) proposed a resourceful deployment using small but fast shared memory, which included a shared memory staging area management, thread-role partitioning, and intra-block thread synchronization [8]. Furthermore, Fang, He, Luo, and Govindaraju (2011) proposed the Mars model, which combines GPU and MapReduce features to attain: (i) ease of programming GPUs, (ii) portability, such that the system can be applied on different technologies such as NVIDIA CUDA or Pthreads, and (iii) effective use of high performance using GPUs [9]. To achieve these goals, the Mars model hides the programming complexity of GPUs by: (i) use of a simple and conversant MapReduce interface, (ii) routinely managing the subdividing of tasks, (iii) managing the dissemination of data, and (iv) dealing with process parallelization [9].
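To make the map and reduce functions described above concrete, here is a minimal, sequential C sketch of the idiom (not the Google MapReduce, Mars, or Phoenix API): a map function turns each input word into a (key, value) pair, and a reduce step sums the values that share a key. The function names map_word and reduce_pair and the sample input are invented for illustration only.

#include <stdio.h>
#include <string.h>

#define MAX_KEYS 64

/* One emitted key-value pair: (word, 1). */
struct kv { const char *key; int value; };

/* Map: turn one input record (a word) into a key-value pair. */
static struct kv map_word(const char *word) {
    struct kv pair = { word, 1 };
    return pair;
}

/* Reduce: fold one pair into a small table, summing values per key. */
static void reduce_pair(struct kv table[], int *nkeys, struct kv pair) {
    for (int i = 0; i < *nkeys; i++) {
        if (strcmp(table[i].key, pair.key) == 0) {
            table[i].value += pair.value;
            return;
        }
    }
    table[(*nkeys)++] = pair; /* first occurrence of this key */
}

int main(void) {
    const char *input[] = { "gpu", "cpu", "gpu", "thread", "gpu", "cpu" };
    int n = sizeof input / sizeof input[0];
    struct kv table[MAX_KEYS];
    int nkeys = 0;

    for (int i = 0; i < n; i++)                         /* map phase ...   */
        reduce_pair(table, &nkeys, map_word(input[i])); /* ... then reduce */

    for (int i = 0; i < nkeys; i++)
        printf("%s: %d\n", table[i].key, table[i].value);
    return 0;
}

A real MapReduce runtime distributes the map calls and the per-key reductions across many workers and handles the data movement between them; the single loop above stands in for both phases.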
On the other hand, for efficient utilization of resources, Stuart and Owens (2011) proposed GPMR, a stand-alone MapReduce library that takes advantage of the power of GPU clusters for extensive computing by merging bulky quantities of map and reduce items into slices and using fractional reductions and accruals to complete computations [10].

Although the Pthreads model has been around for some time [11], application of the MapReduce model using Pthreads is accomplished via the Phoenix model – a programming paradigm that is built on Pthreads for MapReduce implementation [12]. Recently, the Phoenix++ MapReduce model, a variation of the Phoenix model, was created based on the C++ programming prototype [13]. Yet still, on the issue of determinism in Pthreads, Liu, Curtsinger, and Berger (2011) observed that enforcing determinism in threads was still problematic, and they proposed the DTHREADS multi-threading model, built on C/C++ to replace the Pthreads library and alleviate the non-determinism problem [14]. The DTHREADS model imposes determinism by separating multi-threaded programs into several processes, using private, copy-on-write mappings to shared memory, employing standard virtual memory protection to keep track of writes, and then deterministically ordering updates on each thread, and as such, eliminating false sharing [14]. However, the issue of multi-thread non-determinism …
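The multi-thread non-determinism referred to above is easy to reproduce with Pthreads. The sketch below is illustrative only (the worker function and the thread and iteration counts are arbitrary choices, and it is not the DTHREADS approach): each thread increments a shared counter, and with the mutex held around the increment the program always prints 400000, whereas removing the lock turns the increment into a data race whose result varies from run to run – exactly the erratic interleaving behavior that deterministic runtimes such as DTHREADS set out to eliminate.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITER    100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter NITER times. Without the
 * mutex the increment is a data race and the final value depends on
 * how the threads happen to interleave, i.e. it is non-deterministic. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITER; i++) {
        pthread_mutex_lock(&lock);   /* remove the lock/unlock pair to observe the race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITER);
    return 0;
}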
Recommended publications
  • Other APIs: What's wrong with OpenMP?
    Threaded Programming Other APIs What's wrong with OpenMP? • OpenMP is designed for programs where you want a fixed number of threads, and you always want the threads to be consuming CPU cycles. – cannot arbitrarily start/stop threads – cannot put threads to sleep and wake them up later • OpenMP is good for programs where each thread is doing (more-or-less) the same thing. • Although OpenMP supports C++, it's not especially OO friendly – though it is gradually getting better. • OpenMP doesn't support other popular base languages – e.g. Java, Python. What's wrong with OpenMP? (cont.) [slide figures: two thread structures OpenMP can express and one it cannot] Threaded programming APIs • Essential features – a way to create threads – a way to wait for a thread to finish its work – a mechanism to support thread private data – some basic synchronisation methods – at least a mutex lock, or atomic operations • Optional features – support for tasks – more synchronisation methods – e.g. condition variables, barriers,... – higher levels of abstraction – e.g. parallel loops, reductions What are the alternatives? • POSIX threads • C++ threads • Intel TBB • Cilk • OpenCL • Java (not an exhaustive list!) POSIX threads • POSIX threads (or Pthreads) is a standard library for shared memory programming without directives. – Part of the ANSI/IEEE 1003.1 standard (1996) • Interface is a C library – no standard Fortran interface – can be used with C++, but not OO friendly • Widely available – even for Windows – typically installed as part of OS – code is pretty portable • Lots of low-level control over behaviour of threads • Lacks a proper memory consistency model Thread forking #include <pthread.h> int pthread_create( pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg); • Creates a new thread: – first argument returns a pointer to a thread descriptor.
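    A minimal create/join usage sketch of the interface shown above (the start routine hello and its argument are invented for illustration; compile and link with cc -pthread):

    #include <pthread.h>
    #include <stdio.h>

    /* Thread start routine: must take and return void *. */
    static void *hello(void *arg) {
        printf("hello from thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        /* create one thread running hello(), then wait for it to finish */
        pthread_create(&tid, NULL, hello, (void *)1L);
        pthread_join(tid, NULL);
        return 0;
    }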
  • CUDA by Example
    CUDA by Example AN INTRODUCTION TO GENERAL-PURPOSE GPU PROGRAMMING JASON SaNDERS EDWARD KANDROT Upper Saddle River, NJ • Boston • Indianapolis • San Francisco New York • Toronto • Montreal • London • Munich • Paris • Madrid Capetown • Sydney • Tokyo • Singapore • Mexico City Sanders_book.indb 3 6/12/10 3:15:14 PM Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals. The authors and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein. NVIDIA makes no warranty or representation that the techniques described herein are free from any Intellectual Property claims. The reader assumes all risk of any such claims based on his or her use of these techniques. The publisher offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales, which may include electronic versions and/or custom covers and content particular to your business, training goals, marketing focus, and branding interests. For more information, please contact: U.S. Corporate and Government Sales (800) 382-3419 [email protected] For sales outside the United States, please contact: International Sales [email protected] Visit us on the Web: informit.com/aw Library of Congress Cataloging-in-Publication Data Sanders, Jason.
  • Bitfusion Guide to CUDA Installation
    WHITE PAPER–OCTOBER 2019 Bitfusion Guide to CUDA Installation Table of Contents Purpose 3 Introduction 3 Some Sources of Confusion 4 So Many Prerequisites 4 Installing the NVIDIA Repository 4 Installing the NVIDIA Driver 6 Installing NVIDIA CUDA 6 Installing cuDNN 7 Speed It Up! 8 Upgrading CUDA 9 Purpose Bitfusion FlexDirect provides a GPU virtualization solution. It allows you to use the GPUs, or even partial GPUs (e.g., half of a GPU, or a quarter, or one-third of a GPU), on different servers as if they were attached to your local machine, a machine on which you can now run a CUDA application even though it has no GPUs of its own. Since Bitfusion FlexDirect software and ML/AI (machine learning/artificial intelligence) applications require other CUDA libraries and drivers, this document describes how to install the prerequisite software for both Bitfusion FlexDirect and for various ML/AI applications. If you already have your CUDA drivers and libraries installed, you can skip ahead to the chapter on installing Bitfusion FlexDirect. This document gives instruction and examples for Ubuntu, Red Hat, and CentOS systems. Introduction Bitfusion's software tool is called FlexDirect. FlexDirect easily installs with a single command, but installing the third-party prerequisite drivers and libraries, plus any application and its prerequisite software, can seem a Gordian Knot to users new to CUDA code and ML/AI applications. We will begin untangling the knot with a table of the components involved.
  • Thread and Multitasking
    11/6/17 Thread and multitasking Alberto Bosio, Associate Professor – UM Microelectronics Department [email protected] Agenda • POSIX Threads Programming • http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html • Compiler: http://cygwin.com/index.html Process vs Thread and Shared Memory Model [pictures: process vs. thread memory layout and shared memory model; source: https://computing.llnl.gov/tutorials/pthreads/] Simple Thread Example void *func(void *arg) { /* define local data */ ... /* function code */ ... pthread_exit(exit_value); } int main() { pthread_t tid; void *exit_value; ... pthread_create(&tid, NULL, func, NULL); ... pthread_join(tid, &exit_value); ... } Basic Functions (Purpose – Process Model / Threads Model): Creation of a new thread – fork() / thr_create(); Start execution of a new thread – exec() / [thr_create() builds the new thread and starts the execution]; Wait for completion of thread – wait() / thr_join(); Exit and destroy the thread – exit() / thr_exit(). Code comparison: main() { fork(); fork(); fork(); } versus main() { thread_create(0,0,func(),0); thread_create(0,0,func(),0); thread_create(0,0,func(),0); } Creation #include <pthread.h> int pthread_create( pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void*), void *arg); Exit #include <pthread.h> void pthread_exit(void *retval); Join #include <pthread.h> int pthread_join( pthread_t thread, void **retval); Joining Passing argument (wrong example) int rc; long t; for(t=0; t<NUM_THREADS; t++) { printf("Creating thread %ld\n", t); rc = pthread_create(&threads[t], NULL, PrintHello, (void *) &t); ... } Passing argument (good example) long *taskids[NUM_THREADS]; for(t=0; t<NUM_THREADS; t++) { taskids[t] = (long *) malloc(sizeof(long)); *taskids[t] = t; printf("Creating thread %ld\n", t); rc = pthread_create(&threads[t], NULL, PrintHello, (void *) taskids[t]); ... }
  • Parallel Architectures and Algorithms for Large-Scale Nonlinear Programming
    Parallel Architectures and Algorithms for Large-Scale Nonlinear Programming Carl D. Laird, Associate Professor, School of Chemical Engineering, Purdue University; Faculty Fellow, Mary Kay O'Connor Process Safety Center. Laird Research Group: http://allthingsoptimal.com Landscape of Scientific Computing [chart: clock rate (GHz) vs. year, 1980–2010, growth slowing from ~50%/year to ~20%/year; Steven Edwards, Columbia University] Landscape of Scientific Computing Clock rate, the source of past speed improvements, has stagnated. Hardware manufacturers are shifting their focus to energy efficiency (mobile, large data centers) and parallel architectures. [chart: clock rate (GHz) and number of cores vs. year, 1980–2010] "… over a 15-year span… [problem] calculations improved by a factor of 43 million. [A] factor of roughly 1,000 was attributable to faster processor speeds, … [yet] a factor of 43,000 was due to improvements in the efficiency of software algorithms." – attributed to Martin Grötschel [Steve Lohr, "Software Progress Beats Moore's Law", New York Times, March 7, 2011] Landscape of Computing High-Performance Parallel Architectures Multi-core, Grid, HPC clusters, Specialized Architectures (GPU) [slide: iOS and Android device sales have surpassed PC sales; iPhone performance]
  • Introduction to Multi-Threading and Vectorization (Matti Kortelainen, LArSoft Workshop 2019, 25 June 2019)
    Introduction to multi-threading and vectorization Matti Kortelainen LArSoft Workshop 2019 25 June 2019 Outline Broad introductory overview: • Why multithread? • What is a thread? • Some threading models – std::thread – OpenMP (fork-join) – Intel Threading Building Blocks (TBB) (tasks) • Race condition, critical region, mutual exclusion, deadlock • Vectorization (SIMD) Motivations for multithreading [image courtesy of K. Rupp] Motivations for multithreading • One process on a node: speedups from parallelizing parts of the programs – Any problem can get speedup if the threads can cooperate on • same core (sharing L1 cache) • L2 cache (may be shared among small number of cores) • Fully loaded node: save memory and other resources – Threads can share objects -> N threads can use significantly less memory than N processes • If smallest chunk of data is so big that only one fits in memory at a time, is there any other option? What is a (software) thread? (in POSIX/Linux) • "Smallest sequence of programmed instructions that can be managed independently by a scheduler" [Wikipedia] • A thread has its own – Program counter – Registers – Stack – Thread-local memory (better to avoid in general) • Threads of a process share everything else, e.g. – Program code, constants – Heap memory – Network connections – File handles
  • Graphics Processing Unit (GPU) Computing
    Graphics Processing Unit (GPU) computing This section describes the graphics processing unit (GPU) computing feature of OptiSystem. Note: The GPU computing feature is only configurable with OptiSystem Version 11 (or higher) What is GPU computing? GPU computing or GPGPU takes advantage of a computer's graphics processing card to augment the speed of general purpose scientific and engineering computing tasks. Compute Unified Device Architecture (CUDA) implementation for OptiSystem NVIDIA revolutionized the GPGPU and accelerated computing when it introduced a new parallel computing architecture: Compute Unified Device Architecture (CUDA). CUDA is both a hardware and software architecture for issuing and managing computations within the GPU, thus allowing it to operate as a generic data-parallel computing device. CUDA allows the programmer to take advantage of the parallel computing power of an NVIDIA graphics card to perform general purpose computations. OptiSystem CUDA implementation The OptiSystem model for GPU computing involves using a central processing unit (CPU) and GPU together in a heterogeneous co-processing computing model. The sequential part of the application runs on the CPU and the computationally-intensive part is accelerated by the GPU. In the OptiSystem GPU programming model, the application has been modified to map the compute-intensive kernels to the GPU. The remainder of the application remains within the CPU. CUDA parallel computing architecture The NVIDIA CUDA parallel computing architecture is enabled on GeForce®, Quadro®, and Tesla™ products. Whereas GeForce and Quadro are designed for consumer graphics and professional visualization respectively, the Tesla product family is designed ground-up for parallel computing, offers exclusive computing features, and is the recommended choice for the OptiSystem GPU.
  • Massively Parallel Computing with CUDA
    Massively Parallel Computing with CUDA Antonino Tumeo Politecnico di Milano "GPUs have evolved to the point where many real world applications are easily implemented on them and run significantly faster than on multi-core systems. Future computing architectures will be hybrid systems with parallel-core GPUs working in tandem with multi-core CPUs." – Jack Dongarra, Professor, University of Tennessee; Author of "Linpack" Why Use the GPU? • The GPU has evolved into a very flexible and powerful processor: • It's programmable using high-level languages • It supports 32-bit and 64-bit floating point IEEE-754 precision • It offers lots of GFLOPS • GPU in every PC and workstation What is behind such an Evolution? • The GPU is specialized for compute-intensive, highly parallel computation (exactly what graphics rendering is about) • So, more transistors can be devoted to data processing rather than data caching and flow control [diagram: CPU die dominated by control logic and cache vs. GPU die dominated by ALUs] • The fast-growing video game industry exerts strong economic pressure that forces constant innovation GPUs • Each NVIDIA GPU has 240 parallel cores (1.4 billion transistors, 1 Teraflop of processing power) • Within each core: floating point unit, logic unit (add, sub, mul, madd), move/compare unit, branch unit • Cores managed by a thread manager that can spawn and manage 12,000+ threads per core • Zero overhead thread switching [diagram: heterogeneous computing domains – graphics and massive data parallelism on the GPU (parallel computing), instruction-level parallelism on the CPU (sequential computing)]
  • CUDA Flux: a Lightweight Instruction Profiler for CUDA Applications
    CUDA Flux: A Lightweight Instruction Profiler for CUDA Applications Lorenz Braun Holger Froning¨ Institute of Computer Engineering Institute of Computer Engineering Heidelberg University Heidelberg University Heidelberg, Germany Heidelberg, Germany [email protected] [email protected] Abstract—GPUs are powerful, massively parallel processors, hardware, it lacks verbosity when it comes to analyzing the which require a vast amount of thread parallelism to keep their different types of instructions executed. This problem is aggra- thousands of execution units busy, and to tolerate latency when vated by the recently frequent introduction of new instructions, accessing its high-throughput memory system. Understanding the behavior of massively threaded GPU programs can be for instance floating point operations with reduced precision difficult, even though recent GPUs provide an abundance of or special function operations including Tensor Cores for ma- hardware performance counters, which collect statistics about chine learning. Similarly, information on the use of vectorized certain events. Profiling tools that assist the user in such analysis operations is often lost. Characterizing workloads on PTX for their GPUs, like NVIDIA’s nvprof and cupti, are state-of- level is desirable as PTX allows us to determine the exact types the-art. However, instrumentation based on reading hardware performance counters can be slow, in particular when the number of the instructions a kernel executes. On the contrary, nvprof of metrics is large. Furthermore, the results can be inaccurate as metrics are based on a lower-level representation called SASS. instructions are grouped to match the available set of hardware Still, even if there was an exact mapping, some information on counters.
  • GPU: Power Vs Performance
    IT 14 015 Examensarbete 30 hp June 2014 GPU: Power vs Performance Siddhartha Sankar Mondal Institutionen för informationsteknologi Department of Information Technology Abstract GPU: Power vs Performance Siddhartha Sankar Mondal GPUs are widely being used to meet the ever increasing demands of High performance computing. High-end GPUs are one of the highest consumers of power in a computer. Power dissipation has always been a major concern area for computer architects. Due to power efficiency demands modern CPUs have moved towards multicore architectures. GPUs are already massively parallel architectures. There have been some encouraging results for power efficiency in CPUs by applying DVFS. The vision is that a similar approach would also provide encouraging results for power efficiency in GPUs. In this thesis we have analyzed the power and performance characteristics of GPU at different frequencies. To help us achieve that, we have made a set of microbenchmarks with different levels of memory boundedness and threads. We have also used benchmarks from CUDA SDK and Parboil. We have used a GTX580 Fermi based GPU. We have also made a hardware infrastructure that accurately measures the power being consumed by the GPU. Handledare: Stefanos Kaxiras Ämnesgranskare: Stefanos Kaxiras Examinator: Ivan Christoff IT 14 015 Sponsor: UPMARC Acknowledgements I would like to thank my supervisor Prof. Stefanos Kaxiras for giving me the wonderful opportunity to work on this interesting topic.
  • Map, Reduce and MapReduce, the Skeleton Way
    Procedia Computer Science 1 (2012) 2095–2103 www.elsevier.com/locate/procedia International Conference on Computational Science, ICCS 2010 Map, Reduce and MapReduce, the skeleton way D. Buono, M. Danelutto, S. Lametti – Dept. of Computer Science, Univ. of Pisa, Italy Abstract Composition of Map and Reduce algorithmic skeletons has been widely studied at the end of the last century and has proved effective on a wide class of problems. We recall the theoretical results motivating the introduction of these skeletons, then we discuss an experiment implementing three algorithmic skeletons: a map, a reduce, and an optimized composition of a map followed by a reduce skeleton (map+reduce). The map+reduce skeleton implemented computes the same kind of problems computed by Google MapReduce, but the data flow through the skeleton is streamed rather than relying on already distributed (and possibly quite large) data items. We discuss the implementation of the three skeletons on top of ProActive/GCM in the MareMare prototype and we present some experimental results obtained on a COTS cluster. © 2012 Published by Elsevier Ltd. Open access under CC BY-NC-ND license. Keywords: Algorithmic skeletons, map, reduce, list theory, homomorphisms 1. Introduction Algorithmic skeletons have been around since Murray Cole's PhD thesis [1]. An algorithmic skeleton is a parametric, reusable, efficient and composable programming construct that exploits a well known, efficient parallelism exploitation pattern. A skeleton programmer may build parallel applications simply by picking up a (composition of) skeletons from the skeleton library available, providing the required parameters and running some kind of compiler plus run-time library tool specific to the target architecture at hand [2].
  • From Sequential to Parallel Programming with Patterns
    From sequential to parallel programming with patterns Inverted CERN School of Computing 2018 Plácido Fernández [email protected] 07 Mar 2018 Table of Contents Introduction Sequential vs Parallel patterns Patterns Data parallel vs streaming patterns Control patterns (sequential and parallel) Streaming parallel patterns Composing parallel patterns Using the patterns Real world use case GrPPI with NUMA Conclusions Acknowledgements and references Parallel hardware history [chart source: Herb Sutter in Dr. Dobb's Journal] • Before ~2005, processor manufacturers increased clock speed. • Then we hit the power and memory wall, which limits the frequency at which processors can run (without melting). • Manufacturers' response was to continue improving their chips' performance by adding more parallelism. To fully exploit modern processors' capabilities, programmers need to put effort into the source code. Parallel Hardware Processors are naturally parallel: • Caches • Different processing units (floating point, arithmetic logic...) • Integrated GPU Programmers have plenty of options: • Multi-threading • SIMD / Vectorization (Single Instruction Multiple Data) • Heterogeneous hardware [chart sources: top500.org, Intel, CERN] Parallel hardware Figure: Intel Xeon processors evolution (source: ark.intel.com) – Xeon (2005): 1 core, 2 threads, SIMD width 128; Xeon 5100 (2006): 2 cores, 2 threads, 128; Xeon 5500 (2009): 4 cores, 8 threads, 128; Sandy Bridge (2012): 8 cores, 16 threads, 256; Haswell (2015): 18 cores, 36 threads, 256; Broadwell (2016): 24 cores, 48 threads, 256; Skylake (2017): 28 cores, 56 threads, 512. Parallel considerations When designing parallel algorithms we are interested in speedup.