Investigating the Performance and Productivity of DASH Using the Cowichan Problems

Karl Fürlinger, Roger Kowalewski, Tobias Fuchs and Benedikt Lehmann
Ludwig-Maximilians-Universität (LMU) Munich
Computer Science Department, MNM Team
Oettingenstr. 67, 80538 Munich, Germany
Email: [email protected]

ABSTRACT

DASH is a new realization of the PGAS (Partitioned Global Address Space) programming model in the form of a C++ template library. Instead of using a custom compiler, DASH provides expressive programming constructs using C++ abstraction mechanisms and offers distributed data structures and parallel algorithms that follow the concepts employed by the C++ standard template library (STL).

In this paper we evaluate the performance and productivity of DASH by comparing our implementation of a set of benchmark programs with those developed by expert programmers in Intel Cilk, Intel TBB (Threading Building Blocks), Go and Chapel. We perform a comparison on shared memory multiprocessor systems ranging from moderately parallel multicore systems to a 64-core manycore system. We additionally perform a scalability study on a distributed memory system on up to 20 nodes (800 cores). Our results demonstrate that DASH offers productivity that is comparable with the best established programming systems for shared memory and also achieves comparable or better performance. Our results on multi-node systems show that DASH scales well and achieves excellent performance.

ACM Reference Format:
Karl Fürlinger, Roger Kowalewski, Tobias Fuchs and Benedikt Lehmann. 2018. Investigating the Performance and Productivity of DASH Using the Cowichan Problems. In HPC Asia 2018 WS: Workshops of HPC Asia 2018, January 31, 2018, Chiyoda, Tokyo, Japan. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3176364.3176366

1 INTRODUCTION

As computer systems are getting increasingly complex, it is becoming harder and harder to achieve a significant fraction of peak performance. This is true at the very high end, where supercomputers today are composed of hundreds of thousands of compute cores and incorporate CPUs and GPUs in heterogeneous designs, but also at the low end, where application developers have to deal with shared memory systems of increasing scale (number of cores) and complex multi-level memory systems. Top-of-the-line server CPUs will soon have up to 32 physical cores on a single chip, and many-core CPUs such as Intel's Knights Landing Xeon Phi architecture consist of up to 72 cores.

Of course, performance is only one side of the story, and for many developers productivity is at least as big a concern as performance. This is especially true in newer application areas of high performance and parallel computing such as the life sciences or the digital humanities, where developers are less willing to spend large amounts of time to write and fine-tune their parallel applications.

In this paper we evaluate the performance and productivity characteristics of DASH, a realization of the PGAS paradigm in the form of a C++ template library. We start with an evaluation on shared memory systems, where we compare DASH to results obtained using Intel Cilk, Intel TBB, Go, and Chapel. To conduct our comparisons we rely on idiomatic implementations of the Cowichan problems by expert programmers that were developed in the course of a related study by Nanz et al. [17]. In addition, we perform a scalability study on a distributed memory system on up to 20 nodes (800 cores). Our results demonstrate that DASH offers productivity that is comparable with the best established programming systems for shared memory and also achieves comparable or better performance.
The rest of this paper is organized as follows: In Sect. 2 we provide background on the material covered in this paper by giving a short general overview of DASH and the Cowichan problems. In Sect. 3 we then describe the implementation of the Cowichan problems in DASH and compare the programming constructs employed with those used by expert programmers for their Go, Chapel, Cilk and TBB implementations. In Sect. 4 we provide an evaluation of the performance and productivity of DASH compared to these other implementations on a shared memory system and we provide a scaling study of the DASH code on multiple nodes of a supercomputing system. Sect. 5 discusses related work and in Sect. 6 we conclude and provide an outlook on future work.

2 BACKGROUND

This section provides background information on our PGAS programming system DASH and the set of benchmark kernels we have used to evaluate its productivity and performance.

2.1 DASH

DASH [8] is a C++ template library that offers distributed data structures and parallel algorithms. A basic data structure available in DASH is the distributed array (dash::Array) that can span the memory of multiple processes (called execution units in DASH terminology), potentially running on separate interconnected nodes. Each unit can access all elements of the distributed array. If remote data is accessed, one-sided operations are triggered using the DASH runtime system (DART) [25]. DART is implemented on top of MPI-3 RMA (remote memory access) [15] using passive target synchronization mode. One-sided access operations are implemented using MPI_Put (writing a remote value) or MPI_Get (reading a remote value), while the remote (target) unit is never involved in the data transfer operation. This mode of one-sided access maps well to remote direct memory access (RDMA) technology, which is supported by every modern interconnect network and is even used for the implementation of the classic two-sided (send-receive) model in MPI [9].

#include <iostream>
#include <libdash.h>

int main(int argc, char *argv[]) {
  dash::init(&argc, &argv);

  // 2D integer matrix with 10 rows, 8 cols
  // default distribution is blocked by rows
  dash::NArray<int, 2> mat(10, 8);

  for (int i = 0; i < mat.local.extent(0); ++i) {
    for (int j = 0; j < mat.local.extent(1); ++j) {
      mat.local(i, j) = 10 * dash::myid() + i + j;
    }
  }

  dash::barrier();

  auto max = dash::max_element(mat.begin(), mat.end());

  if (dash::myid() == 0) {
    print2d(mat);  // helper (not shown) that prints the matrix
    std::cout << "Max is " << (int)(*max) << std::endl;
  }

  dash::finalize();
}

Compile and Run:

$> mpicc -L ... -ldash -o example example.cc
$> mpirun -n 4 ./example

Output:

0 1 2 3 4 5 6 7
1 2 3 4 5 6 7 8
2 3 4 5 6 7 8 9
10 11 12 13 14 15 16 17
11 12 13 14 15 16 17 18
12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27
21 22 23 24 25 26 27 28
22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37
Max is 37

Figure 1: A basic example DASH program, how it is compiled and run, and its output when run with four execution units.
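To make the one-sided access model concrete, the following is a minimal sketch of the owner computes pattern with a one-dimensional dash::Array. It follows the conventions of Figure 1; the lsize() accessor for the number of locally stored elements is an assumption based on DASH's naming conventions, not code from this paper.

#include <iostream>
#include <libdash.h>

int main(int argc, char *argv[]) {
  dash::init(&argc, &argv);

  // 1D array of 100 ints, distributed blocked over all units by default
  dash::Array<int> arr(100);

  // Owner computes: each unit writes only its locally stored elements
  for (size_t i = 0; i < arr.lsize(); ++i) {
    arr.local[i] = dash::myid();
  }

  dash::barrier();

  // Any unit may read any global index; a remote index triggers a
  // one-sided MPI_Get in DART without involving the owning unit
  if (dash::myid() == 0) {
    int last = arr[arr.size() - 1];
    std::cout << "last element: " << last << std::endl;
  }

  dash::finalize();
}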
Access to local data elements can be performed using direct memory read and write operations, and the local part of a data structure is explicitly exposed to the programmer using a local view proxy object. For example, arr.local represents all array elements stored locally, and access to these elements is of course much faster than remote access. Whenever possible, the owner computes model, where each unit operates on its local part of a data structure, should be used for maximum performance in DASH.

Besides one-dimensional arrays, DASH also offers shared scalars (dash::Shared) and multidimensional arrays (dash::NArray). Other data structures, notably dynamic (growing, shrinking) containers such as hash maps, are currently under development. DASH offers a flexible way to specify the data distribution and layout for each data structure using so-called data distribution patterns (dash::Pattern). For one-dimensional containers the data distribution patterns that can be specified are cyclic, block-cyclic, and blocked. In multiple dimensions, these specifiers can be combined for each dimension and in addition tiled distributions are supported.

DASH also supports the notion of teams, i.e., groups of units that form the basis of memory allocations and collective communication operations. New teams are built by splitting an existing team, starting with dash::Team::All(), the team that contains all units that comprise the program. If no team is specified when instantiating a new data structure, the default team dash::Team::All() is used. A short sketch at the end of this section illustrates distribution specifiers and team splitting.

DASH containers provide global iterators that are compatible with the STL iterator concepts. Thus STL algorithms (e.g., std::fill, std::iota) can be used in conjunction with DASH containers. DASH additionally provides parallel versions of selected algorithms. For example, dash::fill takes a global range (delimited by global iterators) and performs the fill operation in parallel. DASH also generalizes the concepts of pointer, reference, and memory allocation found in regular C++ programs by offering global pointers (dash::GlobPtr), global references (dash::GlobRef), and global memory regions (dash::GlobMem). These constructs allow DASH to offer a fully-fledged PGAS programming system akin to UPC (Unified Parallel C) [7].
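As a brief illustration of mixing STL and DASH algorithms, consider the following sketch. The lbegin()/lend() raw-pointer accessors for the local range are assumptions in the spirit of the local view described above; the global-iterator usage follows Figure 1.

#include <algorithm>
#include <iostream>
#include <libdash.h>
#include <numeric>

int main(int argc, char *argv[]) {
  dash::init(&argc, &argv);

  dash::Array<int> arr(1000);

  // Parallel DASH algorithm: all units cooperate, each filling
  // the part of the global range it owns
  dash::fill(arr.begin(), arr.end(), 42);
  dash::barrier();

  // Plain STL algorithm on the local range only: lbegin()/lend()
  // are raw pointers into local memory, so no communication occurs
  std::iota(arr.lbegin(), arr.lend(), 0);
  dash::barrier();

  // STL algorithms also accept global iterators, but then every
  // remote dereference pays a communication cost
  auto it = std::find(arr.begin(), arr.end(), 7);
  if (dash::myid() == 0 && it != arr.end()) {
    std::cout << "found 7 at global index " << it - arr.begin() << std::endl;
  }

  dash::finalize();
}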

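The distribution specifiers and teams described above could be combined as in the following sketch. The exact constructor overloads (passing a block-cyclic specifier or a team to dash::Array) and the split() signature are assumptions modeled on the concepts just described, not verbatim API from this paper.

#include <iostream>
#include <libdash.h>

int main(int argc, char *argv[]) {
  dash::init(&argc, &argv);

  // Block-cyclic distribution: blocks of 64 elements are dealt out
  // round-robin to the units (CYCLIC and BLOCKED work analogously)
  dash::Array<double> a(1 << 20, dash::BLOCKCYCLIC(64));
  dash::fill(a.begin(), a.end(), 0.0);

  // Split the team of all units into two halves; allocations made
  // against the sub-team are distributed over its units only
  dash::Team &half = dash::Team::All().split(2);
  dash::Array<int> b(1024, half);

  if (dash::myid() == 0) {
    std::cout << "sub-team size: " << half.size() << std::endl;
  }

  dash::finalize();
}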