Models for Parallel Computing: Review and Perspectives

Models for Parallel Computing: Review and Perspectives
Christoph Kessler and Jörg Keller

Journal Article

N.B.: When citing this work, cite the original article.

Original Publication: Christoph Kessler and Jörg Keller, Models for Parallel Computing: Review and Perspectives, Mitteilungen - Gesellschaft für Informatik e.V., Parallel-Algorithmen und Rechnerstrukturen, 2007, 24, pp. 13-29.

Postprint available at: Linköping University Electronic Press, http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-40734

Christoph Kessler, PELAB, Dept. of Computer Science (IDA), Linköping University, Sweden, chrke@ida.liu.se
Jörg Keller, Dept. of Mathematics and Computer Science, FernUniversität in Hagen, Germany, joerg.keller@FernUni-Hagen.de

Abstract. As parallelism on different levels becomes ubiquitous in today's computers, it seems worthwhile to provide a review of the wealth of models for parallel computation that have evolved over the last decades. We refrain from a comprehensive survey and concentrate on models with some practical relevance, together with a perspective on models with potential future relevance. Besides presenting the models, we also refer to languages, implementations, and tools.

Key words: Parallel Computational Model, Survey, Parallel Programming Language, Parallel Cost Model

Introduction

Recent desktop and high-performance processors provide multiple hardware threads, technically realized by hardware multithreading and multiple processor cores on a single chip. Very soon, programmers will be faced with hundreds of hardware threads per processor chip. As exploitable instruction-level parallelism in applications is limited and the processor clock frequency cannot be increased any further due to power consumption and heat problems, exploiting thread-level parallelism becomes unavoidable if further improvement in processor performance is required, and there is no doubt that our requirements and expectations of machine performance will increase further. This means that parallel programming will actually concern a majority of application and system programmers in the foreseeable future, even in the desktop and embedded domain, and no longer only a few working in the comparably small HPC market niche.

A model of parallel computation consists of a parallel programming model and a corresponding cost model. A parallel programming model describes an abstract parallel machine by its basic operations (such as arithmetic operations, spawning of tasks, reading from and writing to shared memory, or sending and receiving messages), their effects on the state of the computation, the constraints of when and where these can be applied, and how they can be composed. In particular, a parallel programming model also contains, at least for shared-memory programming models, a memory model that describes how and when memory accesses can become visible to the different parts of a parallel computer. The memory model is sometimes given only implicitly. A parallel cost model associates a cost, which usually describes parallel execution time and resource occupation, with each basic operation, and describes how to predict the accumulated cost of composed operations, up to entire parallel programs. A parallel programming model is often associated with one or several parallel programming languages or libraries that realize the model. Parallel algorithms are usually formulated in terms of a particular parallel programming model.
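To illustrate how such a cost model composes (this worked example is ours and not from the article): suppose the model charges a per-element computation cost t_c and a per-step synchronization cost t_s on a machine with p processors. The predicted time of a reduction over n values, built from a local summation phase followed by a tree-shaped combining phase, is then

    T_{\mathrm{red}}(n,p) \;=\; t_c \left\lceil \frac{n}{p} \right\rceil \;+\; t_s \left\lceil \log_2 p \right\rceil ,

and the cost of a whole program is predicted by accumulating such terms over its composed operations, taking the maximum over operations that may execute concurrently.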
In contrast to sequential programming, where the von Neumann model is the predominant programming model (notable alternatives are e.g. dataflow and declarative programming), there are several competing parallel programming models. This heterogeneity is partly caused by the fact that some models are closer to certain existing hardware architectures than others, and partly because some parallel algorithms are easier to express in one model than in another one.

Programming models abstract to some degree from details of the underlying hardware, which increases portability of parallel algorithms across a wider range of parallel programming languages and systems.

In the following, we give a brief survey of parallel programming models and comment on their merits and perspectives from both a theoretical and a practical view. While we focus on the, in our opinion, most relevant models, this paper is not aimed at providing a comprehensive presentation nor a thorough classification of parallel models and languages. Instead, we refer to survey articles and books in the literature, such as those by Bal et al., Skillicorn, Giloi, Maggs et al., Skillicorn and Talia, Hambrusch, Lengauer, and Leopold.

Model Survey

Before going through the various parallel programming models, we first recall two fundamental issues in parallel program execution that occur in implementations of several models.

Parallel Execution Styles

There exist several different parallel execution styles, which describe different ways in which the parallel activities (e.g. processes, threads) executing a parallel program are created and terminated, from the programmer's point of view. The two most prominent ones are fork-join style and SPMD-style parallel execution.

Execution in fork-join style spawns parallel activities dynamically at certain points (fork) in the program that mark the beginning of parallel computation, and collects and terminates them at another point (join). At the beginning and the end of program execution, only one activity is executing, but the number of parallel activities can vary considerably during execution and thereby adapt to the currently available parallelism. The mapping of activities to physical processors needs to be done at runtime, by the operating system, by a thread package, or by the language's runtime system. While it is possible to use individual library calls for fork and join functionality, as in the pthreads package, most parallel programming languages use scoping of structured programming constructs to implicitly mark fork and join points; for instance, in OpenMP they are given by the entry to and exit from an omp parallel region, and in Cilk they are given by the spawn construct that frames a function call.

Nested parallelism can be achieved with fork-join style execution by statically or dynamically nesting such fork-join sections within each other, such that e.g. an inner fork spawns several parallel activities from each activity spawned by an outer fork, leading to a higher degree of parallelism.
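As a concrete illustration of the fork and join points described above (a minimal sketch of ours, not code from the article), the following C/OpenMP program forks a team of threads on entry to the omp parallel region and joins them at the implicit barrier that ends the region:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        printf("before the region: one activity\n");
    #pragma omp parallel            /* fork: the runtime spawns a team of threads    */
        {
            printf("parallel activity %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }                           /* join: implicit barrier, the team is dissolved */
        printf("after the region: one activity again\n");
        return 0;
    }

Placing another omp parallel region inside the braces would give the nested fork-join parallelism mentioned above, provided the OpenMP runtime has nested parallelism enabled.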
Execution in SPMD style (single program, multiple data) creates a fixed number p of parallel activities (physical or virtual processors) at the beginning of program execution, i.e. at entry to main; p is usually known only from program start, and this number is kept constant throughout program execution, i.e. no new parallel activities can be spawned. In contrast to fork-join style execution, the programmer is responsible for mapping the parallel tasks in the program to the p available processors. Accordingly, the programmer has the responsibility for load balancing, while it is provided automatically by the dynamic scheduling in the fork-join style. While this is more cumbersome at program design time and limits flexibility, it leads to reduced runtime overhead, as dynamic scheduling is no longer needed. SPMD is used e.g. in the MPI message passing standard and in some parallel programming languages such as UPC.

Nested parallelism can be achieved with SPMD style as well, namely if a group of p processors is subdivided into s subgroups of p_i processors each, where p_1 + ... + p_s <= p. Each subgroup takes care of one subtask in parallel. After all subgroups have finished with their subtask, they are discarded and the parent group resumes execution. As group splitting can be nested, the group hierarchy forms a tree at any time during program execution, with the leaf groups being the currently active ones. This scheme is analogous to exploiting nested parallelism in fork-join style if one regards the original group G of p processors as one p-threaded process which may spawn s new p_i-threaded processes G_i, i = 1, ..., s, such that the total number of active threads is not increased. The parent process waits until all child processes (subgroups) have terminated and reclaims their threads.

Parallel Random Access Machine

The Parallel Random Access Machine (PRAM) model was proposed by Fortune and Wyllie as a simple extension of the Random Access Machine (RAM) model used in the design and analysis of sequential algorithms. The PRAM assumes a set of processors connected to a shared memory. There is a global clock that feeds both processors and memory, and execution of any instruction (including memory accesses) takes exactly one unit of time, independent of the executing processor and of the memory address accessed, if any. Also, there is no limit on the number of processors accessing shared memory simultaneously. The memory model of the PRAM is strict consistency, the strongest consistency model known, which says that a write in clock cycle t becomes globally visible to all processors at the beginning of clock cycle t+1, not earlier and not later. The PRAM model also determines the effect of multiple processors writing to or reading from the same memory location in the same clock cycle: an EREW PRAM allows a memory location to be exclusively read or written by at most one processor at a time, the CREW PRAM allows concurrent reading but exclusive writing, and the CRCW PRAM even allows simultaneous write accesses by several processors to the same memory location.
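To see how algorithms are costed in the PRAM model, consider the standard summation of n values by pairwise combination along a balanced binary tree: it needs ceil(log2 n) PRAM steps with at most n/2 active processors, and since every active processor reads and writes distinct cells in each step, an EREW PRAM suffices. The C sketch below (ours, not from the article) simulates the synchronous steps sequentially; each iteration of the outer loop corresponds to one PRAM clock cycle in which all active processors would operate simultaneously:

    #include <stdio.h>

    #define N 8                        /* assume N is a power of two, for simplicity */

    int main(void) {
        int a[N] = {3, 1, 4, 1, 5, 9, 2, 6};
        /* One outer iteration = one synchronous PRAM step; the inner loop stands in
           for the up-to-N/2 processors that would run concurrently in that step.
           Each simulated processor reads a[i] and a[i+stride] and writes a[i];
           different processors touch disjoint cells, so the pattern is EREW.        */
        for (int stride = 1; stride < N; stride *= 2) {
            for (int i = 0; i + stride < N; i += 2 * stride)
                a[i] += a[i + stride];
        }
        printf("sum = %d\n", a[0]);    /* prints 31 */
        return 0;
    }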