Chapel-on-X: Exploring Tasking Runtimes for PGAS Languages

Akihiro Hayashi, Rice University, Houston, Texas, [email protected]
Sri Raj Paul, Rice University, Houston, Texas, [email protected]
Max Grossman, Rice University, Houston, Texas, [email protected]
Jun Shirako, Rice University, Houston, Texas, [email protected]
Vivek Sarkar, Georgia Institute of Technology, Atlanta, Georgia, [email protected]

ESPM2'17: Third International Workshop on Extreme Scale Programming Models and Middleware, November 12–17, 2017, Denver, CO, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3152041.3152086

ABSTRACT

With the shift to exascale computer systems, the importance of productive programming models for distributed systems is increasing. Partitioned Global Address Space (PGAS) programming models aim to reduce the complexity of writing distributed-memory parallel programs by introducing global operations on distributed arrays, distributed task parallelism, directed synchronization, and mutual exclusion. However, a key challenge in the application of PGAS programming models is the improvement of compilers and runtime systems. In particular, one open question is how runtime systems meet the requirements of exascale systems, where a large number of asynchronous tasks are executed. While there are various tasking runtimes such as Qthreads, OCR, and HClib, there is no existing comparative study on PGAS tasking/threading runtime systems.

1 INTRODUCTION

While conventional message-passing programming models such as MPI [13] can greatly accelerate distributed-memory programs, the tuning effort required with message-passing APIs imposes an additional burden on programmers. To improve software productivity and portability, a more efficient approach would be to provide a high-level programming model for distributed systems.

PGAS (Partitioned Global Address Space) programming languages such as Chapel, Co-array Fortran, Habanero-C, Unified Parallel C (UPC), UPC++, and X10 [5, 7, 8, 11, 21, 26] are examples