CharmPy: A Python Parallel Programming Model

Juan J. Galvez, Karthik Senthil, Laxmikant V. Kale
Department of Computer Science
University of Illinois at Urbana-Champaign, IL, USA
E-mail: {jjgalvez, skk3, [email protected]}

Abstract—Parallel programming can be extremely challenging. Programming models have been proposed to simplify this task, but wide acceptance of these remains elusive for many reasons, including the demand for greater accessibility and productivity. In this paper, we introduce a parallel programming model and framework called CharmPy, based on the Python language. CharmPy builds on Charm++, and runs on top of its C++ runtime. It presents several unique features in the form of a simplified model and API, increased flexibility, and the ability to write everything in Python. CharmPy is a high-level model based on the paradigm of distributed migratable objects. It retains the benefits of the Charm++ runtime, including dynamic load balancing, an asynchronous execution model with automatic overlap of communication and computation, high performance, and scalability from laptops to supercomputers. By being Python-based, CharmPy also benefits from modern language features, access to popular scientific computing and data science software, and interoperability with existing technologies like C, Fortran and OpenMP.

To illustrate the simplicity of the model, we will show how to implement a distributed parallel map function based on the Master-Worker pattern using CharmPy, with support for asynchronous concurrent jobs. We also present performance results running stencil code and molecular dynamics mini-apps fully written in Python, on the Blue Waters and Cori supercomputers. For stencil3d, we show performance similar to an equivalent MPI-based program, and significantly improved performance for imbalanced computations. Using Numba to JIT-compile the critical parts of the code, we show performance for both mini-apps similar to the equivalent C++ code.

Index Terms—programming model, parallel programming, distributed computing, multiprocessing, Python, HPC

I. INTRODUCTION AND MOTIVATION

Effective and productive programming of parallel machines can be extremely challenging. To this day, it remains hard to find programming models and frameworks that are considered accessible and productive by a wide range of users, support a variety of use cases, and achieve good performance and scalability on a wide range of systems. There is demand from programmers across various domains to write parallel applications, but many of them are neither computer scientists nor expert programmers. This often leads them to rely on experts to implement their ideas, to settle for suboptimal (sometimes serial) performance, or to develop codes that are difficult to scale, maintain and extend.

A programming model must meet several demands to overcome these challenges, including: (a) accessibility (easy to approach, learn and use); (b) productivity; (c) high-level abstractions that can hide the details of the underlying hardware and network; (d) good parallel performance; (e) efficient use of resources in heterogeneous environments; (f) portability; (g) easy integration with existing software. The productivity of a language and programming model, in particular, can be a critical factor in the successful development of a software project, and in its continued long-term evolution.

In the realm of High-Performance Computing (HPC), MPI combined with C/C++ or Fortran is widely used. Reasons for this include performance and scalability, the perceived sustainability of these technologies, and the existence of large legacy codebases. However, though they are important building blocks, these technologies by themselves present significant limitations with respect to the above goals. C++ and Fortran are arguably not introductory-level programming languages. MPI provides message passing and synchronization primitives, but lacks high-level features like hardware abstractions, dynamic resource allocation and work scheduling, and it is not particularly well suited for the execution of asynchronous events, or for applications with load imbalance and irregular communication patterns.

Many parallel programming languages and runtimes have been developed in the last two decades [1], with modern ones providing high-level abstractions, task-based runtimes, global address spaces, adaptive load balancing, and message-driven execution. Examples of modern languages and runtimes include Chapel [2], X10 [3], UPC [4], Legion [5], HPX [6] and Charm++ [7]. In spite of this, MPI remains by all appearances the de facto standard for parallel programming in the HPC field. Analyzing the causes of this is outside the scope of this paper, but we believe that, although these models provide powerful abstractions, scalability and performance, obstacles to adoption include a real or perceived lack of accessibility, productivity, generality, interoperability and sustainability. Charm++ has enjoyed success with several large applications running on supercomputers [8]–[10], but can be improved in some of these respects.

Parallel programming frameworks based on Python have emerged in recent years (e.g. Dask [11] and Ray [12]). Although aimed at productivity, they tend to have limited performance and scalability, and to be applicable only to specific use cases (e.g. task scheduling, MapReduce, data analytics).

In this paper, we introduce a general-purpose parallel programming model and distributed computing framework called CharmPy, which builds on Charm++ and is aimed at overcoming these challenges. One of its distinguishing features is that it uses the Python programming language, one of the most popular languages in use today [13] together with C, C++ and Java. Python has become very popular for scientific computing, data science and machine learning, as evidenced by software like NumPy, SciPy, pandas, TensorFlow and scikit-learn. It is also very effective for integrating existing technologies like C, Fortran and OpenMP code. Its popularity and ease of use [14] help to avoid the barrier of adopting a new language, and enable straightforward compatibility with many established software packages. In addition, the development of technologies like NumPy [15], Numba [16], [17] and Cython [18] presents a compelling case for the use of Python as a high-level language driving native machine-optimized code. Using these technologies, it is possible to express a program in Python using high-level concepts, and have the critical parts (or even the bulk of it) compiled and run natively.

CharmPy runs on top of Charm++ [7], [19], a C++ runtime, but it is not a simple Python binding for it. Indeed, CharmPy's programming model is simpler and provides unique features that simplify the task of writing parallel applications, while retaining the runtime capabilities of Charm++. For example, Charm++ developers have to write special interface files for each distributed object type; these files have to be processed by a special translator that generates C++ code. In addition, the Structured Dagger [20] language is often necessary for the expression of control flow and message order. With CharmPy, all of the code can be written in Python, and no specialized language, preprocessing or compilation steps are necessary to run an application. CharmPy also benefits from high-level features of Python, like automatic memory management and object serialization.

With CharmPy, we want to meet the following goals:

• Simple, high-level programming model.
• Based on the widely used Python programming language,

We will show how to run independent map functions on multiple nodes with dynamic load balancing, using the well-known master-worker pattern. We discuss the limitations of implementing the same use case with MPI. We will also show that parallel applications can be written with CharmPy that are comparable in terms of performance and scalability to applications using MPI or written in C++. This is possible even with applications fully written in Python, by using technologies like Numba. Python and Charm++ are both highly portable, and CharmPy runs on Unix, Windows, macOS and many supercomputer environments. The code is public and open-source [21].

The rest of the paper is organized as follows. In Section II we explain the CharmPy programming model. Section III presents the parallel map use case. Section IV covers runtime implementation details. In Section V we present performance results. Finally, in Section VI we conclude the paper.

II. THE CHARMPY PROGRAMMING MODEL

In this section, we explain the main concepts of the CharmPy programming model, beginning with an overview of the programming paradigm and its execution model.

A. Overview

CharmPy is based on the paradigm of distributed migratable objects with asynchronous remote method invocation. A program is expressed in terms of objects and the interactions between them. There can exist multiple distributed objects per processing element (PE); these objects can communicate with any other distributed object in the system via remote method invocation, which involves message passing. Objects are not bound to a specific PE and can migrate between PEs without affecting application code.

Parallel decomposition is therefore based on objects rather than system resources, which enables more natural decomposition, abstracts away the hardware, and gives the runtime the flexibility to balance load, schedule work, and overlap computation and communication.

In the asynchronous execution model, a process does not
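To make the idea of asynchronous remote method invocation concrete, the following is a minimal single-process sketch of the pattern described above. It is not CharmPy's actual API: the names Scheduler, Proxy and Worker are invented for this illustration. A proxy turns method calls into queued messages, and a scheduler later delivers them to the target object, mimicking message-driven execution on one process.

```python
from collections import deque

# Toy illustration of message-driven execution among objects.
# NOT CharmPy's API: Scheduler, Proxy and Worker are invented names.

class Scheduler:
    def __init__(self):
        self.queue = deque()  # pending remote method invocations

    def run(self):
        # deliver queued messages one at a time, in order
        while self.queue:
            obj, method, args = self.queue.popleft()
            getattr(obj, method)(*args)

scheduler = Scheduler()

class Proxy:
    """Stands in for a remote object; method calls become queued messages."""
    def __init__(self, obj):
        self._obj = obj

    def __getattr__(self, name):
        def invoke(*args):
            # asynchronous: enqueue the invocation and return immediately
            scheduler.queue.append((self._obj, name, args))
        return invoke

class Worker:
    def __init__(self):
        self.results = []

    def work(self, x):
        self.results.append(x * x)

w = Worker()
proxy = Proxy(w)
proxy.work(3)           # returns immediately; nothing computed yet
proxy.work(4)
assert w.results == []  # invocation is asynchronous
scheduler.run()         # the "runtime" delivers the queued messages
print(w.results)        # [9, 16]
```

In real CharmPy, serialization, inter-process transport, and delivery are handled by the Charm++ runtime, and the target object may live on another PE or node (and may migrate between them); the in-process queue here only mimics the message-driven scheduling.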