Open MPI: A High-Performance, Heterogeneous MPI

Richard L. Graham (1), Galen M. Shipman (1), Brian W. Barrett (2), Ralph H. Castain (1), George Bosilca (3), and Andrew Lumsdaine (2)

(1) Los Alamos National Laboratory, Advanced Computing Laboratory, Los Alamos, NM, USA – {rgraham, gshipman, rhc}@lanl.gov
(2) Indiana University, Open Systems Laboratory, Bloomington, IN, USA – {brbarret, lums}@osl.iu.edu
(3) University of Tennessee, Dept. of Computer Science, Knoxville, TN, USA – {bosilca}@cs.utk.edu

Abstract

The growth in the number of generally available, distributed, heterogeneous computing systems places increasing importance on the development of user-friendly tools that enable application developers to use these resources efficiently. Open MPI provides support for several aspects of heterogeneity within a single, open-source MPI implementation. Through careful abstractions, heterogeneous support maintains efficient use of uniform computational platforms. We describe Open MPI's architecture for heterogeneous network and processor support. A key design feature of this implementation is its transparency to the application developer while maintaining very high levels of performance. This is demonstrated with the results of several numerical experiments.

1. Introduction

While heterogeneous, distributed computing has been around for quite some time, much of the focus has been on resolving functional concerns at the subsystem level (e.g., cross-domain authentication and task scheduling). The usability and application-level performance of the resulting environment, however, are system-level issues and have generally received less attention. The Open MPI project is aimed at providing this system-level integration, with the overall goal of addressing both of these issues.

The concept of heterogeneous computing has taken on a number of meanings over the years. For clarity, Open MPI defines a job as the execution of either a single application or multiple communicating applications, each potentially on a separate computing system. Within that context, the project generally breaks the definition of heterogeneity into four broad categories:

Processor heterogeneity: Dealing with differences in processor speed, internal architecture (e.g., multi-core), and communication issues caused by the transfer of data between processors with different data representations (different endianness, floating-point representation, etc.).

Network heterogeneity: Using different network protocols between processes in a job. This includes both multiple network protocols between two processes and different network protocols to different processes in the job.

Run-Time Environment heterogeneity: Executing a job across multiple resources (e.g., multiple clusters) that are locally administered by possibly different scheduling systems, authentication realms, etc.

Binary heterogeneity: Coordinating the execution of different binaries, either through the coupling of different applications or as part of a complex single application.

While all of these issues are being addressed within the Open MPI system, this paper will focus primarily on dealing with processor and network heterogeneity. The number of potential combinations of processors and network interfaces facing the application developer poses a significant challenge when attempting to build or use a high-performance application.
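To make the processor-heterogeneity case concrete, the short MPI program below (our sketch, not code from the paper) sends typed data between two ranks. Because the buffer is described with an MPI datatype (MPI_INT) rather than as raw bytes, an implementation such as Open MPI can convert the data representation when the two ranks run on hosts with different byte orders, with no change to the application code.

    /* heterogeneous_send.c: illustrative sketch.  Describing the buffer
     * with MPI_INT (rather than MPI_BYTE) gives the library enough type
     * information to convert representations between unlike hosts. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        int data[4] = {1, 2, 3, 4};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Rank 0 may run on a little-endian host... */
            MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* ...and rank 1 on a big-endian host; the integers still
             * arrive with the correct values. */
            MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d %d %d %d\n",
                   data[0], data[1], data[2], data[3]);
        }

        MPI_Finalize();
        return 0;
    }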
One of the guiding principles in the Open MPI project is to decompose the data exchange and manipulation interfaces in such a way as to hide details from both the application developer and the end user while providing effective and portable application performance. At the same time, those users who want very fine control over resource utilization may exercise it with some extra effort. The design goals of the project fall into two areas that reflect this philosophy:

Transparency: Differences in the local environment of the application's processes shall be transparent to the application. The Open MPI library will detect the local environment during initialization of each process, determine the appropriate configuration for maximum performance, and configure the local library appropriately. Differences in the environment can arise from two sources:

  • Operation across multiple clusters, grids, or autonomous computers. Open MPI introduces the concept of a cell – i.e., a collection of nodes within a common environment – as a uniform way of referring to clusters, grids, and other forms of collective computing resources. Multi-cell operation requires careful apportioning of the application to each cell, along with coordination of launch operations and the subsequent wire-up of the MPI communication system. The problem of automatically apportioning an application across heterogeneous cells is beyond Open MPI's scope – however, once apportioned, Open MPI will launch and execute the application with no further guidance.

  • Variations in the configuration of nodes within a single cluster or grid – this includes differences in the amount of available memory, operating system, available network interfaces, and processor architecture.

Performance: Optimal communications performance is a design goal. Heterogeneous support should not impact the performance of connections between processes running in a homogeneous environment. This mandates handling heterogeneous processors at the connection level, rather than imposing a global wire protocol.
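The connection-level approach can be pictured with the following sketch (our illustration, not the actual Open MPI source; connection_t, peer_arch_flags, and convert_if_needed are hypothetical names). Each process exchanges a small architecture descriptor when a connection is established, so conversion work is performed only on connections whose endpoints actually differ:

    /* Illustrative sketch: per-connection handling of byte order.
     * All identifiers here are hypothetical, not Open MPI internals. */
    #include <stddef.h>
    #include <stdint.h>

    #define ARCH_LITTLE_ENDIAN 0x1u   /* hypothetical architecture flag */

    typedef struct {
        /* Peer's architecture flags, exchanged during connection setup. */
        uint32_t peer_arch_flags;
    } connection_t;

    /* Probe the local byte order at run time. */
    static uint32_t local_arch_flags(void)
    {
        const uint16_t probe = 1;
        return (*(const uint8_t *)&probe) ? ARCH_LITTLE_ENDIAN : 0u;
    }

    /* Swap 32-bit words only when this connection's peer differs in
     * byte order; the early return is the homogeneous fast path. */
    static void convert_if_needed(const connection_t *conn,
                                  uint32_t *words, size_t n)
    {
        if (conn->peer_arch_flags == local_arch_flags()) {
            return;  /* homogeneous connection: no conversion cost */
        }
        for (size_t i = 0; i < n; i++) {
            uint32_t w = words[i];
            words[i] = (w >> 24) | ((w >> 8) & 0x0000ff00u) |
                       ((w << 8) & 0x00ff0000u) | (w << 24);
        }
    }

Because the decision is made per connection rather than globally, homogeneous process pairs take the early-return path and pay no conversion cost, which is exactly the performance goal stated above.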
This paper describes the Open MPI collaboration's approach to meeting these goals and quantifies the overall performance of the system. Following a brief review of related work, we present an overview of Open MPI's design, with a focus on dealing with processor and network heterogeneity; this includes a detailed description of how the system automatically detects the local environment and configures itself for optimized performance. Finally, the performance of the system in several illustrative scenarios is presented.

2. Related Work

As heterogeneous computing provides an opportunity to maximize available computing resources, a fair amount of work has been done on inter-process communication in MPI applications. Many current production MPI implementations (e.g., LAM/MPI [4], FT-MPI [7], MPICH-G2 [11], StaMPI [10], and PACX-MPI [12]) support the operation of applications within such an environment. Since the Open MPI collaboration involves the developers of several of the more popular implementations, the project has built upon that prior experience to provide an enhanced capability in this area.

The HeteroMPI [13] project provides extensions to the MPI standard for heterogeneous computing, including library support for decomposing the problem to best fit the available resources. HeteroMPI still relies on an underlying MPI implementation for process startup and high-performance communication. Our current research in Open MPI seeks to provide the greatest flexibility and performance in precisely the areas where HeteroMPI depends on an underlying MPI implementation for functionality.

Much less work has been done in the area of network heterogeneity, particularly with respect to the optimized use of multiple network interfaces within a single message. LA-MPI [9], for example, supports multi-network communications, but with the restriction that any individual message be sent with a single messaging protocol. Similarly, MPICH-VMI2 [15] implements MPI communications over the Grid using a high-performance protocol within a cluster and TCP/IP communications between clusters, but does not support the simultaneous use of multiple network interfaces on a single node.

The Open MPI collaboration seeks to extract the best of these experiences and combine them into a single implementation focused on providing performance in a manner that is both transparent to the user and yet fully configurable when desired. The resulting architecture is described in the following section.

3. Open MPI Design

The Open MPI project is a broad-based collaboration of U.S. national laboratories with industry and universities from both Europe and the United States. It aims to provide a high-performance, robust, parallel execution environment for a wide variety of computing environments, including small clusters, peta-scale computers, and distributed, grid-like environments. From a communications point of view, Open MPI has two functional units: the Open Run-Time Environment (OpenRTE) [5, 6], responsible for bootstrapping operations; and the Open MPI [8] communications library, providing highly efficient communications support tailored to the specific communications context. These two units work together to support transparent execution of applications in a heterogeneous environment.

OpenRTE consists of several subsystems, organized into functional groups; one such group is the Error Management (EM) group – several subsystems that collectively monitor the state of processes within an application and respond to changes in those states that indicate errors have occurred.

Implementation of each of these groups, and of Open MPI itself, is based upon the Modular Component Architecture (MCA) [16], used within both OpenRTE and Open MPI. Within this architecture, each of the major subsystems is defined as an MCA framework with a well-defined Application Programming Interface (API). In turn, each framework contains one or more components, each representing a different implementation of that particular framework that includes support for the entire API. Thus, the behavior of any OpenRTE or Open MPI subsystem can be changed by selecting a different component at run time.
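The framework/component pattern just described can be sketched as follows (an illustration only; the real MCA interfaces differ in detail, and every identifier below is hypothetical). A framework fixes its API as a table of function pointers, each component supplies a complete implementation of that table, and the framework selects among the components it discovers at initialization time:

    /* Illustrative sketch of an MCA-style framework/component pattern.
     * None of these identifiers are actual Open MPI names. */
    #include <stddef.h>
    #include <stdio.h>

    /* Framework API: every component must implement the full table. */
    typedef struct {
        const char *name;
        int  (*init)(void);                      /* can this component run here? */
        int  (*send)(const void *buf, int len);  /* the framework's operation */
        void (*finalize)(void);
    } transport_component_t;

    /* One hypothetical component implementing the framework API. */
    static int  tcp_init(void)                     { return 0; }
    static int  tcp_send(const void *buf, int len) { (void)buf; return len; }
    static void tcp_finalize(void)                 { }

    static transport_component_t tcp_component = {
        "tcp", tcp_init, tcp_send, tcp_finalize
    };

    /* Selection: walk the discovered components at initialization time
     * and keep those whose init() reports they can run locally. */
    int main(void)
    {
        transport_component_t *components[] = { &tcp_component };
        size_t ncomponents = sizeof(components) / sizeof(components[0]);

        for (size_t i = 0; i < ncomponents; i++) {
            if (components[i]->init() == 0) {
                printf("selected component: %s\n", components[i]->name);
                components[i]->finalize();
            }
        }
        return 0;
    }

In Open MPI itself, this selection can also be steered by the user at run time (for example through mpirun's --mca options), which is how the library remains transparent by default yet fully configurable when desired.

3.1. Bootstrapping Start-up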
