Parallel Programming With MPI
William Gropp
Argonne National Laboratory
Mathematics and Computer Science Division

Overview

• Introduction to MPI
  ♦ What it is
  ♦ Where it came from
  ♦ Basic MPI communication
• Some simple examples
  ♦ More advanced MPI communication
  ♦ A non-trivial exercise
  ♦ Looking to the future: some features from MPI-2
• Building programs using MPI libraries
  ♦ PETSc
    • Poisson solver with no MPI
  ♦ pnetCDF
    • High performance parallel I/O

Models for Parallel Computation

• Shared memory (load, store, lock, unlock)
• Message passing (send, receive, broadcast, ...)
• Transparent (compiler works magic)
• Directive-based (compiler needs help)
• Others (BSP, OpenMP, ...)
• Task farming (scientific term for large transaction processing)

Why Yet Another Parallel Programming Approach?

• Distributed memory (shared-nothing) systems
  ♦ Common, easier to build, and dominate high-end computing (over 329 of the top 500; all 1998 Gordon Bell Prize finalists; most of the highest-performing applications)
• Performance depends on managing memory use
  ♦ The goal of many parallel programming models is to simplify programming by hiding the details of memory locality and management (parallel programming for the masses)
• Support for modular programming

Message Passing Features

• Parallel programs consist of separate processes, each with its own address space
  ♦ Programmer manages memory by placing data in a particular process
• Data sent explicitly between processes
  ♦ Programmer manages memory motion
• Collective operations
  ♦ On an arbitrary set of processes
• Data distribution
  ♦ Also managed by the programmer
• The message-passing model doesn't get in the way
• It doesn't help either

Types of Parallel Computing Models

• Data parallel - the same instructions are carried out simultaneously on multiple data items (SIMD)
• Task parallel - different instructions on different data (MIMD)
• SPMD (single program, multiple data) - not synchronized at the individual operation level
• SPMD is equivalent to MIMD, since each MIMD program can be made SPMD (similarly for SIMD, but not in a practical sense)

Message passing (and MPI) is for MIMD/SPMD parallelism. HPF is an example of a SIMD interface.

Comparison with Other Models

• Single process (address space) model
  ♦ OpenMP and threads in general
  ♦ Fortran 90/95 and compiler-discovered parallelism
  ♦ System manages memory and (usually) thread scheduling
  ♦ Named variables refer to the same storage
• Single name space model
  ♦ HPF
  ♦ Data distribution is part of the language, but programs are still written as if there were a single name space

The Distributed Memory or "Shared-Nothing" Model

Process 0 and Process 1 each have their own address space, and each declares

      integer A(10)

The same name, but different variables! For example, one process can execute

      do i=1,10
         A(i) = i
      enddo

while the other executes

      print *, A

and the print shows only that process's own copy of A.
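To make the shared-nothing picture concrete, here is a short C sketch (not part of the original slides) in which every process fills and prints its own copy of an array named A; the copies are completely independent. It uses MPI_Comm_rank, which is introduced a few slides below.

  /* Sketch: each process owns its own array A, despite the shared name.
     A write by one process is never visible to another process. */
  #include "mpi.h"
  #include <stdio.h>

  int main( int argc, char *argv[] )
  {
      int rank, i;
      int A[10];

      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );

      for (i = 0; i < 10; i++)
          A[i] = rank;          /* each process fills its own copy */

      /* each process prints a different value, even though the name A is the same */
      printf( "Process %d: A[0] = %d\n", rank, A[0] );

      MPI_Finalize();
      return 0;
  }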
The Message-Passing Model

• A process is (traditionally) a program counter and address space
• Processes may have multiple threads (program counters and associated stacks) sharing a single address space
• Message passing is for communication among processes, which have separate address spaces
• Interprocess communication consists of
  ♦ synchronization
  ♦ movement of data from one process's address space to another's

What is MPI?

• A message-passing library specification
  ♦ extended message-passing model
  ♦ not a language or compiler specification
  ♦ not a specific implementation or product
• For parallel computers, clusters, and heterogeneous networks
• Full-featured
• Designed to provide access to advanced parallel hardware for end users, library writers, and tool developers

Where Did MPI Come From?

• Early vendor systems (Intel's NX, IBM's EUI, TMC's CMMD) were not portable (or very capable)
• Early portable systems (PVM, p4, TCGMSG, Chameleon) were mainly research efforts
  ♦ Did not address the full spectrum of issues
  ♦ Lacked vendor support
  ♦ Were not implemented at the most efficient level
• The MPI Forum organized in 1992 with broad participation by:
  ♦ vendors: IBM, Intel, TMC, SGI, Convex, Meiko
  ♦ portability library writers: PVM, p4
  ♦ users: application scientists and library writers
  ♦ finished in 18 months

Novel Features of MPI

• Communicators encapsulate communication spaces for library safety
• Datatypes reduce copying costs and permit heterogeneity
• Multiple communication modes allow precise buffer management
• Extensive collective operations for scalable global communication
• Process topologies permit efficient process placement and user views of process layout
• Profiling interface encourages portable tools

MPI References

• The Standard itself:
  ♦ at http://www.mpi-forum.org
  ♦ All MPI official releases, in both postscript and HTML
• Other information on the Web:
  ♦ at http://www.mcs.anl.gov/mpi
  ♦ pointers to lots of material, including other talks and tutorials, a FAQ, and other MPI pages

Books on MPI

• Using MPI: Portable Parallel Programming with the Message-Passing Interface (2nd edition), by Gropp, Lusk, and Skjellum, MIT Press, 1999.
• Using MPI-2: Portable Parallel Programming with the Message-Passing Interface, by Gropp, Lusk, and Thakur, MIT Press, 1999.
• MPI: The Complete Reference - Vol 1: The MPI Core, by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press, 1998.
• MPI: The Complete Reference - Vol 2: The MPI Extensions, by Gropp, Huss-Lederman, Lumsdaine, Lusk, Nitzberg, Saphir, and Snir, MIT Press, 1998.
• Designing and Building Parallel Programs, by Ian Foster, Addison-Wesley, 1995.
• Parallel Programming with MPI, by Peter Pacheco, Morgan Kaufmann, 1997.

Programming With MPI

• MPI is a library
  ♦ All operations are performed with routine calls
  ♦ Basic definitions are in
    • mpi.h for C
    • mpif.h for Fortran 77 and 90
    • the MPI module for Fortran 90 (optional)
• First program:
  ♦ Create 4 processes in a simple MPI job
  ♦ Write out the process number
  ♦ Write out some variables (to illustrate the separate name spaces)

Finding Out About the Environment

• Two important questions that arise early in a parallel program are:
  ♦ How many processes are participating in this computation?
  ♦ Which one am I?
• MPI provides functions to answer these questions:
  ♦ MPI_Comm_size reports the number of processes
  ♦ MPI_Comm_rank reports the rank, a number between 0 and size-1, identifying the calling process
Hello (C)

  #include "mpi.h"
  #include <stdio.h>

  int main( int argc, char *argv[] )
  {
      int rank, size;
      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );
      MPI_Comm_size( MPI_COMM_WORLD, &size );
      printf( "I am %d of %d\n", rank, size );
      MPI_Finalize();
      return 0;
  }

Hello (Fortran)

        program main
        include 'mpif.h'
        integer ierr, rank, size
        call MPI_INIT( ierr )
        call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )
        call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )
        print *, 'I am ', rank, ' of ', size
        call MPI_FINALIZE( ierr )
        end

Hello (C++)

  #include "mpi.h"
  #include <iostream>

  int main( int argc, char *argv[] )
  {
      int rank, size;
      MPI::Init(argc, argv);
      rank = MPI::COMM_WORLD.Get_rank();
      size = MPI::COMM_WORLD.Get_size();
      std::cout << "I am " << rank << " of " << size << "\n";
      MPI::Finalize();
      return 0;
  }

Notes on Hello World

• All MPI programs begin with MPI_Init and end with MPI_Finalize
• MPI_COMM_WORLD is defined by mpi.h (in C) or mpif.h (in Fortran) and designates all processes in the MPI "job"
• Each statement executes independently in each process
  ♦ including the printf/print statements
• I/O is not part of MPI-1
  ♦ print and write to standard output or error are not part of either MPI-1 or MPI-2
  ♦ output order is undefined (it may be interleaved by character, line, or blocks of characters)
    • a consequence of the requirement that non-MPI statements execute independently

Running MPI Programs

• The MPI-1 Standard does not specify how to run an MPI program, just as the Fortran standard does not specify how to run a Fortran program.
  ♦ Many implementations provided mpirun -np 4 a.out to run an MPI program
• In general, starting an MPI program is dependent on the implementation of MPI you are using, and might require various scripts, program arguments, and/or environment variables.
• mpiexec <args> is part of MPI-2, as a recommendation, but not a requirement, for implementors.
• Many parallel systems use a batch environment to share resources among users
  ♦ The specific commands to run a program on a parallel system are defined by the environment installed on the parallel computer

MPI Basic Send/Receive

• We need to fill in the details in

      Process 0                     Process 1
      Send(data)     -------->      Receive(data)

• Things that need specifying:
  ♦ How will "data" be described?
  ♦ How will processes be identified?
  ♦ How will the receiver recognize/screen messages?
  ♦ What will it mean for these operations to complete?
  (A minimal sketch answering these questions appears below, after the next slide.)

Some Basic Concepts

• Processes can be ...
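The MPI Basic Send/Receive slide above lists the questions any message-passing interface must answer. As a preview (this sketch is not part of the original slides), the C program below shows how MPI-1 answers them with MPI_Send and MPI_Recv: the triple (buffer, count, datatype) describes the data, a rank within a communicator identifies each process, the tag lets the receiver screen messages, and the blocking calls complete once the user's buffer can safely be reused. The message value and tag here are arbitrary choices for illustration.

  /* Minimal point-to-point sketch (assumes at least 2 processes):
     rank 0 sends one integer to rank 1, which receives and prints it. */
  #include "mpi.h"
  #include <stdio.h>

  int main( int argc, char *argv[] )
  {
      int rank, data;
      MPI_Status status;

      MPI_Init( &argc, &argv );
      MPI_Comm_rank( MPI_COMM_WORLD, &rank );

      if (rank == 0) {
          data = 42;   /* arbitrary value for illustration */
          /* (buffer, count, datatype) describe the data;
             (destination rank, tag, communicator) address and label the message */
          MPI_Send( &data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
      }
      else if (rank == 1) {
          /* the receive screens on (source rank, tag, communicator) */
          MPI_Recv( &data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
          printf( "Process 1 received %d\n", data );
      }

      MPI_Finalize();
      return 0;
  }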
