An Introduction to Parallel Programming with OpenMP
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Introduction to Programming in Fortran 77 for Students of Science and Engineering
Introduction to programming in Fortran 77 for students of Science and Engineering. Roman Gröger, University of Pennsylvania, Department of Materials Science and Engineering, 3231 Walnut Street, Office #215, Philadelphia, PA 19104. Revision 1.2 (September 27, 2004). 1 Introduction. Fortran (FORmula TRANslation) is a programming language designed specifically for scientists and engineers. For the past 30 years Fortran has been used for such projects as the design of bridges and aeroplane structures; it is used for factory automation control, for storm drainage design, analysis of scientific data and so on. Throughout the life of this language, groups of users have written libraries of useful standard Fortran programs. These programs can be borrowed and used by other people who wish to take advantage of the expertise and experience of the authors, in a similar way in which a book is borrowed from a library. Fortran belongs to a class of higher-level programming languages in which the programs are not written directly in machine code but instead in an artificial, human-readable language. This source code consists of algorithms built using a set of standard constructions, each consisting of a series of commands which define the elementary operations with your data. In other words, any algorithm is a cookbook which specifies input ingredients, operations with them and with other data, and finally returns one or more results, depending on the function of this algorithm. Any source code has to be compiled in order to obtain an executable code which can be run on your computer.
Writing Fast Fortran Routines for Python
Writing fast Fortran routines for Python. Table of contents: Overview; Installation; Basic Fortran programming; A Fortran case study; Maximizing computational efficiency in Fortran code; Multiple functions in each Fortran file; Compiling and debugging; Preparing code for f2py; Running f2py; Help with f2py.
Heterogeneous Task Scheduling for Accelerated OpenMP
Heterogeneous Task Scheduling for Accelerated OpenMP. Thomas R. W. Scogland*, Barry Rountree†, Wu-chun Feng*, Bronis R. de Supinski†. *Department of Computer Science, Virginia Tech, Blacksburg, VA 24060 USA. †Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94551 USA.
Abstract—Heterogeneous systems with CPUs and computational accelerators such as GPUs, FPGAs or the upcoming Intel MIC are becoming mainstream. In these systems, peak performance includes the performance of not just the CPUs but also all available accelerators. In spite of this fact, the majority of programming models for heterogeneous computing focus on only one of these. With the development of Accelerated OpenMP for GPUs, both from PGI and Cray, we have a clear path to extend traditional OpenMP applications incrementally to use GPUs. The extensions are geared toward switching from CPU parallelism to GPU parallelism. However they do not preserve the former while adding the latter. Thus computational potential is wasted since either the CPU cores or the GPU cores are left idle.
... currently requires a programmer either to program in at least two different parallel programming models, or to use one of the two that support both GPUs and CPUs. Multiple models however require code replication, and maintaining two completely distinct implementations of a computational kernel is a difficult and error-prone proposition. That leaves us with using either OpenCL or accelerated OpenMP to complete the task. OpenCL's greatest strength lies in its broad hardware support. In a way, though, that is also its greatest weakness. To enable one to program this disparate hardware ...
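The directives this paper discusses were vendor extensions to OpenMP from PGI and Cray; the same idea was later standardized as the OpenMP target construct. As a rough sketch of what accelerator offload in this style looks like today (my illustration in C, assuming an OpenMP 4.5+ compiler with offload support; this is not code from the paper):

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        /* Offload the loop to an accelerator; the map clauses say which
           arrays are copied to and from device memory. */
        #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
        for (int i = 0; i < N; ++i)
            c[i] = a[i] + b[i];

        printf("c[0] = %f\n", c[0]);
        return 0;
    }

If no device is available, the region falls back to running on the host, which is one reason incremental extension of existing OpenMP codes is attractive.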
Fortran Resources
Fortran Resources. Ian D Chivers and Jane Sleightholme, May 7, 2021. (The original basis for this document was Mike Metcalf's Fortran Information File. The next input came from people on comp-fortran-90. Details of how to subscribe or browse this list can be found in this document. If you have any corrections, additions, suggestions etc to make please contact us and we will endeavor to include your comments in later versions. Thanks to all the people who have contributed.)
Revision history. The most recent version can be found at https://www.fortranplus.co.uk/fortran-information/ and the files section of the comp-fortran-90 list: https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=comp-fortran-90
• May 2021. Major update to the Intel entry. Also changes to the editors and IDE section, the graphics section, and the parallel programming section.
• October 2020. Added an entry for Nvidia to the compiler section. Nvidia has integrated the PGI compiler suite into their NVIDIA HPC SDK product. Nvidia are also contributing to the LLVM Flang project. Updated the 'Additional Compiler Information' entry in the compiler section. The Polyhedron benchmarks discuss automatic parallelisation. The fortranplus entry covers the diagnostic capability of the Cray, gfortran, Intel, Nag, Oracle and Nvidia compilers. Updated one entry and removed three others from the software tools section. Added 'Fortran Discourse' to the e-lists section. We have also made changes to the LaTeX style sheet.
• September 2020. Added a computer arithmetic and IEEE formats section.
• June 2020. Updated the compiler entry with details of standard conformance.
Parallel Programming
Parallel Programming: Libraries and implementations.
Outline: • MPI – distributed memory de-facto standard • Using MPI • OpenMP – shared memory de-facto standard • Using OpenMP • CUDA – GPGPU de-facto standard • Using CUDA • Others: hybrid programming, Xeon Phi programming, SHMEM, PGAS.
MPI Library: distributed, message-passing programming.
Message-passing concepts, explicit parallelism: • In message-passing all the parallelism is explicit • The program includes specific instructions for each communication: what to send or receive, when to send or receive, synchronisation • It is up to the developer to design the parallel decomposition and implement it: how will you divide up the problem? When will you need to communicate between processes?
Message Passing Interface (MPI): • MPI is a portable library used for writing parallel programs using the message passing model • You can expect MPI to be available on any HPC platform you use • Based on a number of processes running independently in parallel • The HPC resource provides a command to launch multiple processes simultaneously (e.g. mpiexec, aprun) • There are a number of different implementations but all should support the MPI 2 standard • As with different compilers, there will be variations between implementations but all the features specified in the standard should work • Examples: MPICH2, OpenMPI.
Point-to-point communications (a minimal example follows this excerpt): • A message sent by one process and received by another • Both processes are actively involved in the communication – not necessarily at the same time • Wide variety of semantics provided: blocking vs. non-blocking; ready vs. synchronous vs. buffered; tags, communicators, wild-cards; built-in and custom data-types • Can be used to implement any communication pattern • Collective operations, if applicable, can be more efficient.
Collective communications: • A communication that involves all processes • "all" within a communicator, i.e. ...
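A minimal C sketch of the blocking point-to-point pattern described in this excerpt (my example, not taken from the slides): process 0 sends one integer to process 1 with MPI_Send, and process 1 receives it with MPI_Recv.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Blocking send of one int to rank 1, tag 0. */
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Blocking receive matching the source (0) and tag (0). */
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", payload);
        }

        MPI_Finalize();
        return 0;
    }

As the slides note, the launch command is system specific; run with at least two processes (for example mpiexec -n 2 ./a.out) and rank 1 prints the received value.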
OpenMP API 5.1 Specification
OpenMP Application Programming Interface, Version 5.1, November 2020. Copyright © 1997-2020 OpenMP Architecture Review Board. Permission to copy without fee all or part of this material is granted, provided the OpenMP Architecture Review Board copyright notice and the title of this document appear. Notice is given that copying is by permission of the OpenMP Architecture Review Board.
Contents (excerpt): 1 Overview of the OpenMP API: Scope; Glossary (Threading Concepts; OpenMP Language Terminology; Loop Terminology; Synchronization Terminology; Tasking Terminology; Data Terminology; Implementation Terminology; Tool Terminology); Execution Model; Memory Model (Structure of the OpenMP Memory Model; Device Data Environments; Memory Management; The Flush Operation; Flush Synchronization and Happens Before; OpenMP Memory Consistency); Tool Interfaces (OMPT; OMPD); OpenMP Compliance; Normative References; Organization of this Document. 2 Directives: Directive Format (Fixed Source Form Directives; Free Source Form Directives; Stand-Alone Directives; Array Shaping; Array Sections ...)
Openmp Made Easy with INTEL® ADVISOR
OpenMP made easy with Intel® Advisor. Zakhar Matveev, PhD, Intel CVCG, November 2018, SC'18 OpenMP booth.
Why do we care? [Figure: vector processing – elements Ai and Bi are combined into Ci across the vector length VL.]
Motivation (instead of Agenda): • Starting from 4.x, OpenMP introduces support for both levels of parallelism: multi-core (think of the "pragma/directive omp parallel for") and SIMD (think of the "pragma/directive omp simd"), the two pillars of the OpenMP SIMD programming model • Hardware with Intel® AVX-512 support gives you theoretically 8x speed-up over an SSE baseline (less or even more in practice) • Intel® Advisor is here to assist you in enabling SIMD parallelism with OpenMP (if not yet) and improving the performance of already vectorized OpenMP SIMD code • It will also help to optimize for the memory sub-system (Advisor Roofline).
Don't use a single vector lane! Un-vectorized and un-threaded software will underperform.
Permission to Design for All Lanes: threading and vectorization are needed to fully utilize modern hardware.
Vector parallelism in x86: [Figure: evolution of SIMD instruction sets from SSE and AVX/AVX2 to AVX-512 (F, CD, DQ, BW, VL) across Intel® microarchitecture code names NHM, SNB, HSW, SKL.]
(Theoretically) 8x more SIMD FLOP/S compared to your (-O2) optimized baseline: • Significant leap to 512-bit SIMD support for processors • Intel® Compilers and Intel® Math Kernel Library include AVX-512 support • Strong compatibility with AVX • The added EVEX prefix enables additional functionality. Don't leave it on the table!
Two-level parallelism decomposition with OpenMP, image processing example:
    #pragma omp parallel for
    for (int y = 0; y < ImageHeight; ++y){
        #pragma omp simd
        for (int x = 0; x < ImageWidth; ++x){
            count[y][x] = mandel(in_vals[y][x]);
        }
    }
Two-level parallelism decomposition with OpenMP, fluid dynamics processing example:
    #pragma omp parallel for
    for (int i = 0; i < X_Dim; ++i){
        #pragma omp simd
        for (int m = 0; m < n_velocities; ++m){
            next_i = f(i, velocities(m));
            X[i] = next_i;
        }
    }
Key components of Intel® Advisor, what's new in the "2019" release: Step 1. ...
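Related to the two-level decomposition shown above, OpenMP 4.0 and later also provide a combined construct that expresses both levels on a single loop. A small sketch in C (my example, not from the slides):

    #include <stdio.h>

    #define N 8

    /* Combined construct (OpenMP 4.0+): iterations are distributed across
       threads and the compiler is asked to vectorize each thread's chunk. */
    static void saxpy(int n, float a, const float *x, float *y) {
        #pragma omp parallel for simd
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        float x[N], y[N];
        for (int i = 0; i < N; ++i) { x[i] = (float)i; y[i] = 1.0f; }
        saxpy(N, 2.0f, x, y);
        printf("y[N-1] = %f\n", y[N - 1]);  /* expect 2*(N-1) + 1 = 15 */
        return 0;
    }

Whether the separate directives or the combined form performs better depends on the loop and the compiler; Advisor's survey and roofline views are meant to show where the vector lanes are actually being used.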
Introduction to OpenMP, Paul Edmon, ITC Research Computing Associate
Introduction to OpenMP. Paul Edmon, ITC Research Computing Associate, FAS Research Computing.
Overview: • Threaded Parallelism • OpenMP Basics • OpenMP Programming • Benchmarking
Threaded Parallelism: • Shared memory • Single node • Non-uniform Memory Access (NUMA) • One thread per core
Threaded Languages: • PThreads • Python • Perl • OpenCL/CUDA • OpenACC • OpenMP
OpenMP Basics. What is OpenMP? • OpenMP (Open Multi-Processing) is an Application Program Interface (API) governed by the OpenMP Architecture Review Board • OpenMP provides a portable, scalable model for developers of shared memory parallel applications • The API supports C/C++ and Fortran on a wide variety of architectures
Goals of OpenMP: • Standardization – provide a standard among a variety of shared memory architectures/platforms, jointly defined and endorsed by a group of major computer hardware and software vendors • Lean and Mean – establish a simple and limited set of directives for programming shared memory machines; significant parallelism can be implemented by just a few directives • Ease of Use – provide the capability to incrementally parallelize a serial program • Portability – specified for C/C++ and Fortran; most major platforms have implementations, including Unix/Linux and Windows; implemented by all major compilers
OpenMP Programming Model. Shared Memory Model: OpenMP is designed for multi-processor/core, shared memory machines. Thread Based Parallelism: OpenMP programs accomplish parallelism exclusively through the use of threads. Explicit Parallelism: OpenMP provides explicit (not automatic) parallelism, offering the programmer full control over parallelization. Compiler Directive Based: parallelism is specified through the use of compiler directives embedded in the C/C++ or Fortran code. I/O: OpenMP specifies nothing about parallel I/O.
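To make the directive-based, fork-join model described above concrete, here is a minimal C example (mine, not from the slides): the master thread forks a team at the parallel region and joins it again afterwards.

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        /* Serial part: only the master thread executes here. */
        printf("Before the parallel region\n");

        /* Fork: a team of threads executes this block. */
        #pragma omp parallel
        {
            int tid = omp_get_thread_num();
            int nthreads = omp_get_num_threads();
            printf("Hello from thread %d of %d\n", tid, nthreads);
        } /* Join: threads synchronize; only the master continues. */

        printf("After the parallel region\n");
        return 0;
    }

Compile with an OpenMP-enabled compiler (for example gcc -fopenmp hello.c) and set OMP_NUM_THREADS to control the team size.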
NUG Single Node Optimization Presentation
Single Node Optimization on Hopper. Michael Stewart, NERSC.
Introduction: ● Why are there so many compilers available on Hopper? ● Strengths and weaknesses of each compiler. ● Advice on choosing the most appropriate compiler for your work. ● Comparative benchmark results. ● How to compile and run with OpenMP for each compiler. ● Recommendations for running hybrid MPI/OpenMP codes on a node (a small skeleton follows this excerpt).
Why So Many Compilers on Hopper? ● Franklin was delivered with the only commercially available compiler for Cray Opteron systems, PGI. ● GNU compilers were on Franklin, but at that time GNU Fortran optimization was poor. ● Next came PathScale because of superior optimization. ● Cray was finally legally allowed to port their compiler to the Opteron, so it was added next. ● Intel was popular on Carver, and it produced highly optimized codes on Hopper. ● PGI is still the default, but this is not a NERSC recommendation. Cray's current default is the Cray compiler, but we kept PGI to avoid disruption.
PGI ● Strengths: ○ Available on a wide variety of platforms, making codes very portable. ○ Because of its wide usage, it is likely to compile almost any valid code cleanly. ● Weaknesses: ○ Does not optimize as well as compilers more narrowly targeted to AMD architectures. ● Optimization recommendation: ○ -fast
Cray ● Strengths: ○ Fortran is well optimized for the Hopper architecture. ○ Uses Cray math libraries for optimization. ○ Well supported. ● Weaknesses: ○ Compilations can take much longer than with other compilers. ○ Not very good optimization of C++ codes. ● Optimization recommendation: ○ Compile with no explicit optimization arguments; the default level of optimization is very high.
Intel ● Strengths: ○ Optimizes C++ and Fortran codes very well. ...
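The presentation goes on to cover hybrid MPI/OpenMP runs; as a generic illustration (my sketch, not NERSC's recommendation), a hybrid code typically initializes MPI with a threading level and opens an OpenMP parallel region inside each rank:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        /* FUNNELED: only the master thread of each rank will call MPI. */
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each MPI rank forks its own OpenMP team for on-node work. */
        #pragma omp parallel
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

On a Cray system such as Hopper this would typically be launched with aprun, with OMP_NUM_THREADS matched to the cores given to each rank; the exact flags and placement options are system specific.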
Parallel Programming with OpenMP
Parallel Programming with OpenMP.
OpenMP Parallel Programming. Introduction: OpenMP Programming Model. Thread-based parallelism utilized on shared-memory platforms. Parallelization is either explicit, where the programmer has full control over parallelization, or achieved through compiler directives placed in the source code. A thread is a single flow of execution through the code; a thread of execution is the smallest unit of processing. Multiple threads can exist within the same process and share resources such as memory.
The master thread is a single thread that runs sequentially; parallel execution occurs inside parallel regions, and between two parallel regions only the master thread executes the code. This is called the fork-join model.
OpenMP Parallel Computing Hardware. Shared memory allows immediate access to all data from all processors without explicit communication. Shared memory: multiple CPUs are attached to the bus; all processors share the same primary memory; the same memory address on different CPUs refers to the same memory location. The CPU-to-memory connection becomes a bottleneck: shared memory computers cannot scale very well.
OpenMP versus MPI. OpenMP (Open Multi-Processing): easy to use; loop-level parallelism (see the example after this excerpt); non-loop-level parallelism is more difficult; limited to shared memory computers; cannot handle very large problems. An alternative is MPI (Message Passing Interface): requires lower-level programming; more difficult programming ...
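As an illustration of the loop-level parallelism described above (my example, not from the original notes), a dot product parallelizes with a single directive; the reduction clause combines each thread's partial sum at the join:

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        double sum = 0.0;

        /* Initialize the vectors; iterations are independent, so this loop
           can also be split across threads. */
        #pragma omp parallel for
        for (int i = 0; i < N; ++i) {
            a[i] = 0.5 * i;
            b[i] = 2.0;
        }

        /* Loop-level parallelism with a reduction: each thread accumulates a
           private partial sum, and the partial sums are combined at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; ++i) {
            sum += a[i] * b[i];
        }

        printf("dot product = %.1f\n", sum);
        return 0;
    }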
PHYS-4007/5007: Computational Physics Course Lecture Notes Section II
PHYS-4007/5007: Computational Physics, Course Lecture Notes, Section II. Dr. Donald G. Luttermoser, East Tennessee State University, Version 7.1. Abstract: These class notes are designed for use of the instructor and students of the course PHYS-4007/5007: Computational Physics taught by Dr. Donald Luttermoser at East Tennessee State University.
II. Choosing a Programming Language. A. Which is the Best Programming Language for Your Work?
1. You have come up with a scientific idea which will require numerical work using a computer. One needs to ask oneself, which programming language will work best for the project?
a) Projects involving data reduction and analysis typically need software with graphics capabilities. Examples of such graphics languages include IDL, Matlab, Origin, GNUplot, SuperMongo, and Mathematica.
b) Projects involving a lot of 'number-crunching' typically require a programming language that is capable of carrying out math functions in scientific notation with as much precision as possible. In physics and astronomy, the most commonly used number-crunching programming languages include the various flavors of Fortran, C, and more recently, Python. i) As noted in the last section, the decades-old Fortran 77 is still widely used in physics and astronomy. ii) Over the past few years, Python has been growing in popularity in scientific programming.
c) In this class, we will focus on two programming languages: Fortran and Python. From time to time we will discuss the IDL programming language since some of you may encounter this software during your graduate and professional career.
d) Any coding introduced in these notes will be written in one of these three programming languages.
Intel Threading Building Blocks
Praise for Intel Threading Building Blocks
"The Age of Serial Computing is over. With the advent of multi-core processors, parallel-computing technology that was once relegated to universities and research labs is now emerging as mainstream. Intel Threading Building Blocks updates and greatly expands the 'work-stealing' technology pioneered by the MIT Cilk system of 15 years ago, providing a modern industrial-strength C++ library for concurrent programming. Not only does this book offer an excellent introduction to the library, it furnishes novices and experts alike with a clear and accessible discussion of the complexities of concurrency." — Charles E. Leiserson, MIT Computer Science and Artificial Intelligence Laboratory
"We used to say make it right, then make it fast. We can't do that anymore. TBB lets us design for correctness and speed up front for Maya. This book shows you how to extract the most benefit from using TBB in your code." — Martin Watt, Senior Software Engineer, Autodesk
"TBB promises to change how parallel programming is done in C++. This book will be extremely useful to any C++ programmer. With this book, James achieves two important goals: • Presents an excellent introduction to parallel programming, illustrating the most common parallel programming patterns and the forces governing their use. • Documents the Threading Building Blocks C++ library, a library that provides generic algorithms for these patterns. TBB incorporates many of the best ideas that researchers in object-oriented parallel computing developed in the last two decades." — Marc Snir, Head of the Computer Science Department, University of Illinois at Urbana-Champaign
"This book was my first introduction to Intel Threading Building Blocks. ...