Parallel Programming with Microsoft Visual C++, 1E

Total Pages: 16

File Type: PDF, Size: 1020 KB

Parallel Programming with Microsoft Visual C++®: Design Patterns for Decomposition and Coordination on Multicore Architectures
Colin Campbell and Ade Miller

Your CPU meter shows a problem. One core is running at 100 percent, but all the other cores are idle. Your application is CPU-bound, but you are using only a fraction of the computing power of your multicore system. Is there a way to get better performance?

The answer, in a nutshell, is parallel programming. Where you once would have written the kind of sequential code that is familiar to all programmers, you now find that this no longer meets your performance goals. To use your system's CPU resources efficiently, you need to split your application into pieces that can run at the same time.

Of course, this is easier said than done. Parallel programming has a reputation for being the domain of experts and a minefield of subtle, hard-to-reproduce software defects. Everyone seems to have a favorite story about a parallel program that did not behave as expected because of a mysterious bug.

These stories should inspire a healthy respect for the difficulty of the problems you will face in writing your own parallel programs. Fortunately, help has arrived. The Parallel Patterns Library (PPL) and the Asynchronous Agents Library introduce a new programming model for parallelism that significantly simplifies the job. Behind the scenes are sophisticated algorithms that dynamically distribute computations on multicore architectures. In addition, the Microsoft® Visual Studio® 2010 development system includes debugging and analysis tools to support the new parallel programming model.

Proven design patterns are another source of help. This guide introduces you to the most important and frequently used patterns of parallel programming and provides executable code samples for them, using PPL. When thinking about where to begin, a good place to start is to review the patterns in this book. See if your problem has any attributes that match the six patterns presented in the following chapters. If it does, delve more deeply into the relevant pattern or patterns and study the sample code.

patterns & practices: proven practices for predictable results. Save time and reduce risk on your software development projects by incorporating patterns & practices, Microsoft's applied engineering guidance that includes both production quality source code and documentation. The guidance is designed to help software development teams:
• Make critical design and technology selection decisions by highlighting the appropriate solution architectures, technologies, and Microsoft products for common scenarios
• Understand the most important concepts needed for success by explaining the relevant patterns and prescribing the important practices
• Get started with a proven code base by providing thoroughly tested software and source that embodies Microsoft's recommendations

The patterns & practices team consists of experienced architects, developers, writers, and testers. We work openly with the developer community and industry experts, on every project, to ensure that some of the best minds in the industry have contributed to and reviewed the guidance as it is being developed. We also love our role as the bridge between the real world needs of our customers and the wide range of products and technologies that Microsoft provides.
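To make the back-cover description concrete, here is a minimal editorial sketch (not an example taken from the book itself) of the kind of change it describes: a sequential loop handed to the Parallel Patterns Library so that its iterations can run across all available cores. It assumes Visual C++ 2010 or later; the loop body, squaring the elements of a vector, is purely illustrative.

    // Minimal PPL sketch: parallel_for_each divides the range across the
    // scheduler's worker threads. Build with Visual C++ (ppl.h ships with it).
    #include <ppl.h>
    #include <vector>
    #include <iostream>

    int main()
    {
        std::vector<double> data(1000000, 2.0);

        // Sequential form, for comparison:
        // for (double& x : data) x = x * x;

        // Parallel form: the PPL decides how to split the iterations.
        concurrency::parallel_for_each(data.begin(), data.end(), [](double& x)
        {
            x = x * x;
        });

        std::cout << data.front() << std::endl;   // prints 4
        return 0;
    }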
Forewords by Tony Hey and Herb Sutter

U.S.A. $29.99 [Recommended]
For more information explore: msdn.microsoft.com/practices

Parallel Programming with Microsoft Visual C++®: Design Patterns for Decomposition and Coordination on Multicore Architectures
Colin Campbell and Ade Miller
ISBN 978-0-7356-5175-3

This document is provided "as-is." Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it. Unless otherwise noted, the companies, organizations, products, domain names, email addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, email address, logo, person, place, or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2011 Microsoft Corporation. All rights reserved.

Microsoft, MSDN, Visual Basic, Visual C++, Visual C#, Visual Studio, Windows, Windows Live, Windows Server, and Windows Vista are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.
Contents

Foreword by Tony Hey  xi
Foreword by Herb Sutter  xiii

Preface  xv
  Who This Book Is For  xv
  Why This Book Is Pertinent Now  xvi
  What You Need to Use the Code  xvi
  How to Use This Book  xvii
  Introduction  xviii
  Parallelism with Control Dependencies Only  xviii
  Parallelism with Control and Data Dependencies  xviii
  Dynamic Task Parallelism and Pipelines  xviii
  Supporting Material  xix
  What Is Not Covered  xx
  Goals  xx

Acknowledgments  xxi

1  Introduction  1
  The Importance of Potential Parallelism  2
  Decomposition, Coordination, and Scalable Sharing  3
  Understanding Tasks  3
  Coordinating Tasks  4
  Scalable Sharing of Data  5
  Design Approaches  6
  Selecting the Right Pattern  7
  A Word about Terminology  8
  The Limits of Parallelism  8
  A Few Tips  10
  Exercises  11
  For More Information  11

2  Parallel Loops  13
  The Basics  14
  Parallel for Loops  14
  parallel_for_each  15
  What to Expect  16
  An Example  17
  Sequential Credit Review Example  18
  Credit Review Example Using parallel_for_each  18
  Performance Comparison  19
  Variations  19
  Breaking out of Loops Early  19
  Exception Handling  20
  Special Handling of Small Loop Bodies  21
  Controlling the Degree of Parallelism  22
  Anti-Patterns  23
  Hidden Loop Body Dependencies  23
  Small Loop Bodies with Few Iterations  23
  Duplicates in the Input Enumeration  23
  Scheduling Interactions with Cooperative Blocking  24
  Related Patterns  24
  Exercises  24
  Further Reading  25

3  Parallel Tasks  27
  The Basics  28
  An Example  29
  Variations  31
  Coordinating Tasks with Cooperative Blocking  31
  Canceling a Task Group  33
  Handling Exceptions  35
  Speculative Execution  36
  Anti-Patterns  37
  Variables Captured by Closures  37
  Unintended Propagation of Cancellation Requests  38
  The Cost of Synchronization  39
  Design Notes  39
  Task Group Calling Conventions  39
  Tasks and Threads  40
  How Tasks Are Scheduled  40
  Structured Task Groups and Task Handles  41
  Lightweight Tasks  41
  Exercises  42
  Further Reading  42

4  Parallel Aggregation  45
  The Basics  46
  An Example  49
  Variations  55
  Considerations for Small Loop Bodies  55
  Other Uses for Combinable Objects  55
  Design Notes  55
  Related Patterns  57
  Exercises  58
  Further Reading  58

5  Futures  61
  The Basics  62
  Futures  63
  Example: The Adatum Financial Dashboard  65
  The Business Objects  66
  The Analysis Engine  67
  Variations  70
  Canceling Futures  70
  Removing Bottlenecks  70
  Modifying the Graph at Run Time  71
  Design Notes  72
  Decomposition into Futures  72
  Functional Style  72
  Related Patterns  72
  Pipeline Pattern  73
  Master/Worker Pattern  73
  Dynamic Task Parallelism Pattern  73
  Discrete Event Pattern  73
  Exercises  73

6  Dynamic Task Parallelism  75
  The Basics  75
  An Example  77
  Variations  80
  Parallel While-Not-Empty  80
  Adding Tasks to a Pending Wait Context  81
  Exercises  83
  Further Reading  83

7  Pipelines  85
  Types of Messaging Blocks  86
  The Basics  86
  An Example  92
  Sequential Image Processing  92
  The Image Pipeline  94
  Performance Characteristics  96
  Variations  97
  Asynchronous Pipelines  97
  Canceling a Pipeline  101
  Handling Pipeline Exceptions  102
  Load Balancing Using Multiple Producers  104
  Pipelines and Streams  106
  Anti-Patterns  107
  Copying Large Amounts of Data between Pipeline Stages  107
  Pipeline Stages that Are Too Small  107
  Forgetting to Use Message Passing for Isolation  107
  Infinite Waits  107
  Unbounded Queue Growth  107
  More Information  107
  Design Notes  108
  Related Patterns  109
  Exercises  109
  Further Reading  109

Appendices

A  The Task Scheduler and Resource Manager  111
  Resource Manager  113
  Why It's Needed  113
  How Resource Management Works  113
  Dynamic Resource Management  115
  Oversubscribing Cores  116
  Querying the Environment  116
  Kinds of Tasks  116
  Lightweight Tasks  117
  Tasks Created Using PPL  117
  Task Schedulers  118
  Managing Task Schedulers  118
  Creating and Attaching a Task Scheduler  119
  Detaching a Task Scheduler  120
  Destroying a Task Scheduler  120
  Scenarios for Using Multiple Task Schedulers  120
  Implementing a Custom Scheduling Component  121
  The Scheduling Algorithm  121
  Schedule Groups  121
  Adding Tasks  122
  Running Tasks  123
  Enhanced Locality Mode  124
  Forward
Recommended publications
  • Technical Report: Extensible Parallel Programming in ableC, by Aaron Councilman
    Extensible Parallel Programming in ableC. Aaron Councilman, Department of Computer Science and Engineering, University of Minnesota, Twin Cities. May 23, 2019.

    1 Introduction. There are many different manners of parallelizing code, and many different languages that provide such features. Different types of computations are best suited by different types of parallelism. Simply whether a computation is compute bound or I/O bound determines whether the computation will benefit from being run with more threads than the machine has cores, and other properties of a computation will similarly affect how it performs when run in parallel. Thus, to provide parallel programmers the ability to deliver the best performance for their programs, the ability to choose the parallel programming abstractions they use is important. The ability to combine these abstractions however they need is also important, since different parts of a program will have different performance properties, and therefore may perform best using different abstractions.

    Unfortunately, parallel programming languages are often designed monolithically, built as an entire language with a specific set of features. Because of this, a programmer's choice of parallel programming abstractions is generally limited to the choice of the language to use. Beyond limiting the available abstractions, this also means that the choice of abstractions must be made ahead of time, since any attempt to change the parallel programming language at a later time is likely to be prohibitive, as it may require rewriting large portions of the codebase, if not the entire codebase. Extensible programming languages can offer a solution to these problems. With an extensible compiler, the programmer chooses a base programming language and can then select the set of "extensions" for that language that best fit their needs.
  • Intel® Cilk™ Plus
    Overview: Programming Environment for the Intel® Xeon Phi™ Coprocessor: one source base, tuned to many targets. The same source, compilers, libraries, and parallel models target multicore CPUs, the many-core Intel® MIC architecture, and clusters that combine multicore and many-core.

    The slides summarize the components of Intel® Parallel Studio XE 2013 and Intel® Cluster Studio XE 2013 by phase, product, feature, and benefit:
    • Intel® Advisor XE (design; Studio products only): a threading design assistant that simplifies, demystifies, and speeds parallel application design.
    • Intel® Composer XE (build): C/C++ and Fortran compilers with Intel® Threading Building Blocks, Intel® Cilk™ Plus, Intel® Integrated Performance Primitives, and the Intel® Math Kernel Library; an enabling solution to achieve the application performance and scalability benefits of multicore and forward scale to many-core.
    • Intel® MPI Library: a high performance message passing (MPI) library; enables high performance scalability, interconnect independence, runtime fabric selection, and application tuning capability.
    • Intel® VTune™ Amplifier XE (verify and tune): a performance profiler for optimizing application performance and scalability; removes guesswork, saves time, and makes it easier to find performance and scalability bottlenecks.
    • Intel® Inspector XE (verify and tune): memory and threading dynamic analysis and static analysis for code quality; increases productivity and code quality, lowers cost, and finds memory, threading, and security defects.
    Copyright © 2014, Intel Corporation. All rights reserved. Other brands and names are the property of their respective owners.
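    As a point of comparison with the PPL examples in the book above, here is a minimal editorial sketch of Intel Cilk Plus itself (it is not taken from these slides, and it assumes a Cilk Plus-aware compiler such as the Intel C++ compiler): cilk_spawn lets the runtime run a call on another worker, and cilk_sync waits for spawned work to finish. The function and values are illustrative only.

        // Classic Cilk-style recursive parallelism; illustrative only.
        #include <cilk/cilk.h>
        #include <cstdio>

        static long fib(int n)
        {
            if (n < 2)
                return n;
            long x = cilk_spawn fib(n - 1);   // may run in parallel with the next call
            long y = fib(n - 2);
            cilk_sync;                        // wait for the spawned call
            return x + y;
        }

        int main()
        {
            std::printf("fib(30) = %ld\n", fib(30));
            return 0;
        }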
  • Part V: Some Broad Topics
    Part V: Some Broad Topics (Winter 2021). About this presentation: it is intended to support the use of the textbook Introduction to Parallel Processing: Algorithms and Architectures (Plenum Press, 1999, ISBN 0-306-45970-1). It was prepared by the author in connection with teaching the graduate-level course ECE 254B: Advanced Computer Architecture: Parallel Processing, at the University of California, Santa Barbara. Instructors can use these slides in classroom teaching and for other educational purposes. Any other use is strictly prohibited. © Behrooz Parhami. First edition released Spring 2005; revised Spring 2006, Fall 2008, Fall 2010, Winter 2013, Winter 2014, Winter 2016, Winter 2019*, and Winter 2021* (*Chapters 17-18 only).

    Part V studies topics that cut across all architectural classes: mapping computations onto processors (scheduling), ensuring that I/O can keep up with other subsystems, and storage, system, software, and reliability issues. Topics in this part: Chapter 17, Emulation and Scheduling; Chapter 18, Data Storage, Input, and Output; Chapter 19, Reliable Parallel Processing; Chapter 20, System and Software Issues.

    Chapter 17, Emulation and Scheduling, is about mapping an architecture or task system onto an architecture: how to achieve algorithm portability via emulation, and task scheduling in parallel systems. Topics in this chapter: 17.1 Emulations Among Architectures; 17.2 Distributed Shared Memory; 17.3 The Task Scheduling Problem; 17.4 A Class of Scheduling Algorithms; 17.5 Some Useful Bounds for Scheduling; 17.6 Load Balancing and Dataflow Systems. Section 17.1, Emulations Among Architectures, opens with the need for scheduling.
  • An Introduction to Parallel Programming
    In Praise of An Introduction to Parallel Programming. With the coming of multicore processors and the cloud, parallel computing is most certainly not a niche area off in a corner of the computing world. Parallelism has become central to the efficient use of resources, and this new textbook by Peter Pacheco will go a long way toward introducing students early in their academic careers to both the art and practice of parallel computing. (Duncan Buell, Department of Computer Science and Engineering, University of South Carolina)

    An Introduction to Parallel Programming illustrates fundamental programming principles in the increasingly important area of shared-memory programming using Pthreads and OpenMP and distributed-memory programming using MPI. More important, it emphasizes good programming practices by indicating potential performance pitfalls. These topics are presented in the context of a variety of disciplines, including computer science, physics, and mathematics. The chapters include numerous programming exercises that range from easy to very challenging. This is an ideal book for students or professionals looking to learn parallel programming skills or to refresh their knowledge. (Leigh Little, Department of Computational Science, The College at Brockport, The State University of New York)

    An Introduction to Parallel Programming is a well-written, comprehensive book on the field of parallel computing. Students and practitioners alike will appreciate the relevant, up-to-date information. Peter Pacheco's very accessible writing style, combined with numerous interesting examples, keeps the reader's attention. In a field that races forward at a dizzying pace, this book hangs on for the wild ride covering the ins and outs of parallel hardware and software.
  • Lecture Notes: Intel Xeon Phi Coprocessor - An Overview
    C-DAC Four Days Technology Workshop on Hybrid Computing – Coprocessors/Accelerators, Power-Aware Computing – Performance of Application Kernels (hyPACK-2013). Mode 3: Intel Xeon Phi Coprocessors. Lecture Notes: Intel Xeon Phi Coprocessor - An Overview. Venue: CMSD, UoHYD; date: October 15-18, 2013.

    Lecture outline: the following topics are discussed: understanding the Intel Xeon Phi coprocessor architecture, programming on the Intel Xeon Phi coprocessor, and performance issues on the Intel Xeon Phi coprocessor. Part I gives background on the Intel Xeon host: an overview of multi-core Xeon processors and systems with devices, programming paradigms and their challenges, large-scale data computing and current trends, and how to run programs faster in the era of single-core to multi-to-many-core and heterogeneous computing (fast fetch/store access to data, fast processors, and more memory to manage data).

    Multi-threaded processing using Hyper-Threading Technology: the time taken to process n threads on a single processor is significantly more than on a single-processor system with HT Technology enabled, because hyper-threading exposes two logical processors (LP0, LP1) per physical processor, so threads from the thread pool run two per processor rather than one at a time. Source: http://www.intel.com; references [6], [29], [31]. P/C: microprocessor and cache; SM: shared memory. The notes also cover the relationship among processors, processes, and threads.
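    The thread-count discussion above can be made concrete with a few lines of standard C++ (an editorial sketch, not part of the lecture notes): std::thread::hardware_concurrency() reports how many hardware threads, hyper-threads included, the system exposes, and the code below fans a simple summation out across exactly that many threads. The strided partitioning is illustrative only.

        // Query the number of logical processors and use one thread per processor.
        #include <thread>
        #include <vector>
        #include <numeric>
        #include <iostream>

        int main()
        {
            unsigned hw = std::thread::hardware_concurrency();   // logical processors
            if (hw == 0) hw = 4;                                  // fallback if unknown

            const long long n = 100000000;
            std::vector<long long> partial(hw, 0);

            std::vector<std::thread> pool;
            for (unsigned t = 0; t < hw; ++t)
                pool.emplace_back([&partial, t, hw, n]
                {
                    for (long long i = t; i < n; i += hw)         // strided partition
                        partial[t] += i;
                });
            for (std::thread& th : pool) th.join();

            std::cout << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
            return 0;
        }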
  • The Relevance of OpenCL to HPC
    The Relevance of OpenCL to HPC. Paul Preney, OCT, M.Sc., B.Ed., B.Sc., [email protected], School of Computer Science, University of Windsor, Windsor, Ontario, Canada. Copyright © 2015 Paul Preney. All Rights Reserved. March 4, 2015.

    Announced abstract: Today's high performance computing programs are evolving to be increasingly more parallel, increasingly more wait-free, increasingly deployed on many different kinds of hardware including general purpose CPUs, GPGPUs, FPGAs, other custom specific-purpose hardware, etc. The OpenCL standards are platforms providing interfaces that enable deployment of programs to virtually any heterogeneous computing device. The OpenCL standard defines a highly-vectorizable programming language, OpenCL C, which enables the deployment of programming logic to arbitrary hardware without requiring low-level, "machine-coding" knowledge of such. The OpenCL standard is a critical component of exascale initiatives given that it is hardware neutral, with significant support and participation from all the major processor vendors. Unfortunately the main source of information about OpenCL is in the form of its final specifications so there is a lot of misinformation about it. This talk will explain the relevance of the OpenCL standard to the HPC community, and offer a glimpse into what high-level abstractions for OpenCL, under development by software engineers, might look like.

    Presentation overview: 1. HPC and OpenCL; 2. The Relevance of OpenCL to HPC; 3. A Glimpse Into Possible High-Level Abstractions; 4. The Future and Discussion; 5. References.
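    To give a feel for what the abstract means by deploying programming logic to arbitrary hardware, here is a minimal editorial OpenCL sketch (not taken from the talk): an OpenCL C kernel, held as a string, is compiled at run time and dispatched to whatever device the first platform reports. It uses the pre-OpenCL 2.0 clCreateCommandQueue entry point, omits error checking for brevity, and the kernel and data are illustrative; link against the OpenCL library (for example with -lOpenCL).

        // Host code in C++ plus an OpenCL C kernel embedded as a string.
        #include <CL/cl.h>
        #include <cstdio>
        #include <vector>

        static const char* kSource = R"CLC(
        __kernel void scale(__global float* data, float factor)
        {
            size_t i = get_global_id(0);
            data[i] = data[i] * factor;
        }
        )CLC";

        int main()
        {
            cl_platform_id platform;
            clGetPlatformIDs(1, &platform, nullptr);

            cl_device_id device;
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

            cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
            cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

            // Build the kernel from source for whatever device we were given.
            cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
            clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
            cl_kernel kernel = clCreateKernel(program, "scale", nullptr);

            std::vector<float> host(1024, 2.0f);
            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                        host.size() * sizeof(float), host.data(), nullptr);

            float factor = 3.0f;
            clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
            clSetKernelArg(kernel, 1, sizeof(float), &factor);

            size_t global = host.size();
            clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
            clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, host.size() * sizeof(float),
                                host.data(), 0, nullptr, nullptr);

            std::printf("%f\n", host[0]);   // 2.0 * 3.0 = 6.0

            clReleaseMemObject(buf);
            clReleaseKernel(kernel);
            clReleaseProgram(program);
            clReleaseCommandQueue(queue);
            clReleaseContext(ctx);
            return 0;
        }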
  • Speculative Parallelism in Cilk++
    Speculative Parallelism in Cilk++. Ruben Perez, MIT ([email protected]); Gregory Malecha, Harvard University SEAS ([email protected]).

    Abstract: Backtracking search algorithms are useful in many domains, from SAT solvers to artificial intelligences for playing games such as chess. Searching disjoint branches can, inherently, be done in parallel though it can considerably increase the amount of work that the algorithm does. Such parallelism is speculative: once a solution is found, additional work is irrelevant, but the individual branches each have their own potential to find the solution. In sequential algorithms, heuristics are used to prune regions of the search space. In parallel implementations this pruning often corresponds to aborting existing computations that can be shown to be pursuing dead-ends. While some systems provide native support for aborting work, Intel's current parallel extensions to C++, implemented in the cilk++ compiler [Lei09], do not.

    and are based around backtracking search, often optimized with heuristics. Algorithms such as these can benefit tremendously from parallel processing because separate threads can be used to search independent paths in parallel. We call this strategy speculative parallelism because only one solution matters and as soon as that is found, all other work can be stopped. While simple to describe, this use of parallelism is not always easy to implement. One parallel programming platform, cilk5 [FLR98], included language support for this type of parallelism, but it has since been removed in more recent versions of the platform adapted to C++, cilk++ [Lei09]. Our goal is to regain the power of speculative execution in a
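    The abort-the-losers idea the authors describe is also available in the library this book covers: a PPL task_group can be cancelled cooperatively, which Chapter 3 of the book above lists under Speculative Execution. The following editorial sketch (PPL rather than cilk++, and not code from the paper) starts two placeholder search branches and cancels them as soon as a third branch records a result; all of the names and values are illustrative.

        // Speculative tasks with cooperative cancellation in the PPL.
        #include <ppl.h>
        #include <iostream>

        int main()
        {
            concurrency::task_group searches;
            int solution = -1;   // written by exactly one branch in this sketch

            // Placeholder branches: keep "searching" until the group is canceled.
            auto decoy = []
            {
                while (!concurrency::is_current_task_group_canceling())
                    concurrency::wait(1);   // stand-in for exploring one more node
            };
            searches.run(decoy);
            searches.run(decoy);

            // The branch that finds an answer makes the other work irrelevant.
            searches.run_and_wait([&searches, &solution]
            {
                solution = 42;       // stand-in for a real search result
                searches.cancel();   // cooperatively stop the decoy branches
            });

            std::cout << "solution: " << solution << std::endl;
            return 0;
        }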
  • Parallel Programming with MatlabMPI
    Parallel Programming with MatlabMPI. Jeremy Kepner ([email protected]), MIT Lincoln Laboratory, Lexington, MA 02420. July 23, 2001.

    Abstract: MatlabMPI is a Matlab implementation of the Message Passing Interface (MPI) standard and allows any Matlab program to exploit multiple processors. MatlabMPI currently implements the basic six functions that are the core of the MPI point-to-point communications standard. The key technical innovation of MatlabMPI is that it implements the widely used MPI "look and feel" on top of standard Matlab file I/O, resulting in an extremely compact (∼100 lines) and "pure" implementation which runs anywhere Matlab runs. The performance has been tested on both shared and distributed memory parallel computers. MatlabMPI can match the bandwidth of C based MPI at large message sizes. A test image filtering application using MatlabMPI achieved a speedup of ∼70 on a parallel computer.

    1 Introduction. Matlab [1] is the dominant programming language for implementing numerical computations and is widely used for algorithm development, simulation, data reduction, testing and system evaluation. Many of these computations can benefit from faster execution on a parallel computer. There have been many previous attempts to provide an efficient mechanism for running Matlab programs on parallel computers [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. These efforts have faced numerous challenges and none have received widespread acceptance. In the world of parallel computing the Message Passing Interface (MPI) [2] is the de facto standard for implementing programs on multiple processors.
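    The basic six functions of MPI point-to-point messaging that MatlabMPI mirrors are commonly taken to be MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize. For readers coming from C++, this editorial sketch (not MatlabMPI code) shows those calls through the standard C API; build with an MPI wrapper such as mpicxx and run with, for example, mpirun -np 2.

        // One message from rank 0 to rank 1 using the core MPI calls.
        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv)
        {
            MPI_Init(&argc, &argv);

            int rank = 0, size = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            if (rank == 0 && size > 1)
            {
                double payload = 3.14;
                MPI_Send(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   // tag 0
            }
            else if (rank == 1)
            {
                double payload = 0.0;
                MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::printf("rank 1 received %f\n", payload);
            }

            MPI_Finalize();
            return 0;
        }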
  • United States Patent No. US 8,209,664 B2, Yu et al.
    United States Patent US 8,209,664 B2: High Level Programming Extensions for Distributed Data Parallel Processing. Yu et al.; date of patent: June 26, 2012. Inventors: Yuan Yu, Cupertino, CA (US); Ulfar Erlingsson, Reykjavik (IS); Michael A. Isard, San Francisco, CA (US); Frank McSherry, San Francisco, CA (US). Assignee: Microsoft Corporation, Redmond, WA (US). Appl. No. 12/406,826, filed March 18, 2009; the patent term is extended or adjusted under 35 U.S.C. 154(b) by 704 days.

    Cited U.S. patent application publications include 2008/0086442 A1 (Dasdan et al.), 2008/0098375 A1 (Isard), 2008/0120314 A1 (Yang et al.), 2008/0127200 A1 (Richards et al.), 2008/0250227 A1 (Linderman et al.), 2009/0043803 A1 (Frishberg et al.), and 2009/0300656 A1 (Bosworth et al.). Other publications cited include Blelloch et al., "Implementation of a Portable Nested Data Parallel Language," School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, Feb. 1993; Isard et al., "Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks," Microsoft Research, Silicon Valley, Mar. 23, 2007; Ferreira et al., "Data parallel language and compiler support for data intensive applications," Parallel Computing, vol. 28, issue 5, May 2002; and Carpenter et al., "HPJava: Data Parallel Extensions to Java," NPAC at Syracuse University, Syracuse, NY, Feb.
  • Parallel Programming in OpenMP: About the Authors
    Parallel Programming in OpenMP: About the Authors.

    Rohit Chandra is a chief scientist at NARUS, Inc., a provider of internet business infrastructure solutions. He previously was a principal engineer in the Compiler Group at Silicon Graphics, where he helped design and implement OpenMP.

    Leonardo Dagum works for Silicon Graphics in the Linux Server Platform Group, where he is responsible for the I/O infrastructure in SGI's scalable Linux server systems. He helped define the OpenMP Fortran API. His research interests include parallel algorithms and performance modeling for parallel systems.

    Dave Kohr is a member of the technical staff at NARUS, Inc. He previously was a member of the technical staff in the Compiler Group at Silicon Graphics, where he helped define and implement the OpenMP.

    Dror Maydan is director of software at Tensilica, Inc., a provider of application-specific processor technology. He previously was an engineering department manager in the Compiler Group of Silicon Graphics, where he helped design and implement OpenMP.

    Jeff McDonald owns SolidFX, a private software development company. As the engineering department manager at Silicon Graphics, he proposed the OpenMP API effort and helped develop it into the industry standard it is today.

    Ramesh Menon is a staff engineer at NARUS, Inc. Prior to NARUS, Ramesh was a staff engineer at SGI, representing SGI in the OpenMP forum. He was the founding chairman of the OpenMP Architecture Review Board (ARB) and supervised the writing of the first OpenMP specifications.

    Parallel Programming in OpenMP. Rohit Chandra, Leonardo Dagum, Dave Kohr, Dror Maydan, Jeff McDonald, Ramesh Menon. Senior Editor: Denise E.
  • DryadLINQ for Scientific Analyses
    DryadLINQ for Scientific Analyses. Jaliya Ekanayake (a), Atilla Soner Balkir (c), Thilina Gunarathne (a), Geoffrey Fox (a, b), Christophe Poulain (d), Nelson Araujo (d), Roger Barga (d). (a) School of Informatics and Computing, Indiana University Bloomington; (b) Pervasive Technology Institute, Indiana University Bloomington; (c) Department of Computer Science, University of Chicago; (d) Microsoft Research. {jekanaya, tgunarat, gcf}@indiana.edu, [email protected], {cpoulain, nelson, barga}@microsoft.com.

    Abstract: Applying high level parallel runtimes to data/compute intensive applications is becoming increasingly common. The simplicity of the MapReduce programming model and the availability of open source MapReduce runtimes such as Hadoop attract more users around the MapReduce programming model. Recently, Microsoft has released DryadLINQ for academic use, allowing users to experience a new programming model and a runtime that is capable of performing large scale data/compute intensive analyses. In this paper, we present our experience in applying DryadLINQ for a series of scientific data analysis applications, identify their mapping to the DryadLINQ programming model, and compare their

    data-centered approach to parallel processing. In these frameworks, the data is staged in data/compute nodes of clusters or large-scale data centers and the computations are shipped to the data in order to perform data processing. HDFS allows Hadoop to access data via a customized distributed storage system built on top of heterogeneous compute nodes, while DryadLINQ and CGL-MapReduce access data from local disks and shared file systems. The simplicity of these programming models enables better support for quality of services such as fault tolerance and monitoring. Although DryadLINQ comes with standard samples such as Terasort, word count, its applicability for large scale data/compute intensive scientific applications is not studied well.
  • Automatic and Explicit Parallelization Approaches for Mathematical Simulation Models
    Linköping Studies in Science and Technology, Licentiate Thesis No. 1716. Automatic and Explicit Parallelization Approaches for Mathematical Simulation Models, by Mahder Gebremedhin. Department of Computer and Information Science, Linköping University, SE-581 83 Linköping, Sweden. Linköping 2015. This is a Swedish Licentiate's Thesis; Swedish postgraduate education leads to a doctor's degree and/or a licentiate's degree. A doctor's degree comprises 240 ECTS credits (4 years of full-time studies); a licentiate's degree comprises 120 ECTS credits. Copyright © June 2015 Mahder Gebremedhin. ISBN 978-91-7519-048-8, ISSN 0280-7971. Printed by LiU Tryck 2015. URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117338

    Abstract: The move from single-core processor systems to multi-core and many-processor systems comes with the requirement of implementing computations in a way that can utilize these multiple computational units efficiently. This task of writing efficient parallel algorithms will not be possible without improving programming languages and compilers to provide the supporting mechanisms. Computer aided mathematical modeling and simulation is one of the most computationally intensive areas of computer science. Even simplified models of physical systems can impose a considerable computational load on the processors at hand. Being able to take advantage of the potential computational power provided by multi-core systems is vital in this area of application. This thesis tries to address how to take advantage of the potential computational power provided by these modern processors in order to improve the performance of simulations, especially for models in the Modelica modeling language compiled and simulated using the OpenModelica compiler and run-time environment.