
Evaluating Coarray Fortran with the CGPOP Miniapp

Andrew I. Stone (Colorado State University), John M. Dennis (National Center for Atmospheric Research), Michelle Mills Strout (Colorado State University)

Abstract

The Parallel Ocean Program (POP) is a 71,000 line-of-code program written in Fortran and MPI. POP is a component of the Community Earth System Model (CESM), which is a heavily used global climate model. Now that Coarrays are part of the Fortran standard, one question raised by POP's developers is whether Coarrays could be used to improve POP's performance or reduce its code volume. Although Coarray Fortran (CAF) has been evaluated with smaller benchmarks and with an older version of POP, it has not been evaluated with newer versions of POP or on modern platforms. In this paper, we examine what impact using CAF has on a large climate simulation application by comparing and evaluating variants of the CGPOP miniapp, which serves as a performance proxy of POP.

1. Introduction

Large scientific simulation applications commonly use MPI to introduce parallelism and conduct communication. Although MPI is a mature and popular interface, developers often find it difficult to use. MPI requires programmers to handle a large number of implementation details, such as the explicit marshalling and unmarshalling of data into messages and the explicit specification of communication schedules. MPI is commonly criticized as being low-level and is sometimes referred to as "the assembly language of parallel programming" [13].

Parallel programming models such as the PGAS [12, 26, 32] and DARPA HPCS [21] languages avoid explicit message passing and have been developed to make programming parallel applications easier. These languages have been shown to perform well and to improve programmer productivity within the context of benchmarks and applications written from scratch [4, 19, 31]. However, the productivity of PGAS languages within large, existing simulation codes has not been extensively researched.

The Parallel Ocean Program (POP) [20], developed at Los Alamos National Laboratory, is a large simulation code that runs on machines with thousands of cores. Much of POP's complexity lies in handling parallelization and communication details. As such, it is worth considering whether a PGAS language could improve POP's performance or reduce its code volume (total lines of code). Given that POP is written in Fortran, using Fortran's Coarray extensions is a logical step towards introducing a PGAS model. However, due to POP's size and complexity, it is desirable to avoid integrating Coarrays into the entire application until their benefit has been shown in a smaller prototype application. In this paper we use the CGPOP miniapp [28] as such a prototype. CGPOP models POP's Conjugate Gradient routine and contains about 3,000 source lines of code (SLOC) versus the 71,000 lines of POP.

During this investigation we developed several different variants of the CGPOP miniapp with the following questions in mind:

• How does the performance of the CAF variant of CGPOP compare with the original MPI variant extracted from POP?
• How will using an interconnect with direct PGAS support impact performance?
• Does transferring data in CAF by pulling (via get operations) differ in performance from pushing data (via put operations)?
• How easy is it to introduce a communication/computation overlap with the CAF version of CGPOP?
• What features are missing in the CAF standard and/or current implementations that are necessary to implement an efficient CAF version of POP or would otherwise be useful?

To answer these questions, we describe the CGPOP miniapp in Section 2. We show that CGPOP accurately models the performance of POP on two different Cray XT5 systems, a Cray XE6, and a BlueGene/L system, and we describe several variants of the miniapp developed to compare CAF and MPI. In Section 3 we present how these different variants compare in terms of performance and code volume. In Section 4 we discuss our experience using CAF and document issues we encountered while programming with it. In Section 5 we discuss other work that compares MPI to PGAS languages, and in Section 6 we conclude the paper.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Copyright © ACM [to be supplied]. $10.00

2. A Performance Proxy for POP

The Parallel Ocean Program (POP) has been actively developed for over 18 years [20]. Due to its continued use, maintaining the application by porting it to new architectures is an ongoing issue. Further, POP can be resource intensive to execute. While low-resolution versions of POP exist, they do not exhibit the same sensitivity to communication performance as the versions at 0.1° resolution. On lower-memory systems like BlueGene, the 0.1° POP configuration requires a minimum of approximately 800 cores to execute. Even on systems with a larger amount of memory per core, a minimum of 96 cores is needed for as long as 40 minutes just to run a single simulated day. POP also includes a complicated build system that may have to be modified for a new compiler and support stack when moving to a new system. Clearly, a much smaller, less resource-intensive piece of code that serves as a proxy for the full application would enable a quicker turnaround in the development cycle.
Another advantage of a smaller proxy is that developers could prototype changes in it without changing the larger POP application.

We developed the CGPOP miniapp to serve as a proxy for POP. We started development of CGPOP in June 2010 and released version 1.0 in July 2011 [1, 28]. This section shows that the CGPOP miniapp matches the performance profile of POP, defines the requirements that variants of the CGPOP miniapp fulfill, and describes the variants of CGPOP that we use to compare CAF and MPI.

Figure 1. Architecture of the CGPOP Miniapp. (The cginit domain decomposition generator produces an intermediate state file containing the stencil coefficients, vector b, initial guess for vector x, and the final solution for x in blocked form, along with application metadata: blocks, neighbors graph, land mask, and GDOF arrays. The miniapp reads this file, constructs communication metadata, runs the timed CGPOP solver, checks correctness against the L2 norm of the output vector, and outputs timing and verification information.)

                   Kraken     Hopper     Lynx       Frost
Company            Cray       Cray       Cray       IBM
System type        XT5        XE6        XT5        BG/L
# of cores         99,072     153,408    912        8,192
CPU                Opteron    Opteron    Opteron    PPC440
MHz                2600       2100       2200       700
Peak Gflops/core   10.4       8.4        8.8        2.8
Cores/node         12         24         12         2
L1 data cache      64 KB      64 KB      64 KB      32 KB
L2 cache           512 KB     512 KB     512 KB     2 KB
L3 cache (shared)  6 MB       12 MB      6 MB       4 MB
Network topology   3D torus   3D torus   2D torus   3D torus
# of links/node    6          6          4          6
Bandwidth/link     9.6 GB/s   26.6 GB/s  9.6 GB/s   0.18 GB/s

Table 1. Description of compute platforms used for this study.

Figure 2. Execution time in seconds for 1 day of the barotropic component of the POP 0.1° benchmark and the 2-sided MPI version of the CGPOP miniapp on three different compute platforms. (Log-log plot of execution time in seconds versus number of cores; curves shown: POP solver and CGPOP with PGI on Hopper, CGPOP with Cray Fortran on Hopper, POP solver and CGPOP with XLF on Frost, and POP solver and CGPOP with PGI on Kraken.)

2.1 CGPOP as a performance proxy

To be considered a performance proxy, a miniapp should accurately model the performance bottleneck of the full application at the range of core counts that the full application targets. Since the POP application typically runs on thousands to tens of thousands of processors, we compare the scalability of CGPOP and POP over this range.

Scalability can be affected by a number of factors, including the machine and compiler used. To ensure that the performance behavior of the CGPOP miniapp matches that of POP, we examine scalability across several different platforms: Hopper, a Cray XE6 at NCAR; and Kraken, a Cray XT5 located at the National Institute for Computational Science (NICS). We list technical information about these compute platforms in Table 1. The compilers we used in our examination were PGI Fortran, Cray Fortran, and XL Fortran. We present our results in Figure 2; as can be seen by comparing the similarly colored lines for POP and CGPOP, the scalability behavior of the two is comparable when the same compiler and machine are used.

2.2 CGPOP miniapp specification

The CGPOP miniapp is defined in terms of its input/output behavior and the algorithm it conducts. We illustrate this behavior in Figure 1 and list pseudocode for the CG algorithm in Figure 6.

As shown in Figure 1, the CGPOP miniapp executable is passed an intermediate state file, which is generated by the cginit domain decomposition generator.
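One of the design questions raised in the introduction, whether pulling data via coarray get operations differs in performance from pushing data via put operations, can be sketched in a few lines of Fortran 2008. The array name, periodic neighbor indexing, and halo width of one below are illustrative assumptions, not CGPOP's actual data structures; CGPOP's real exchange operates on an irregular blocked decomposition driven by the intermediate state file's metadata.

```fortran
! Sketch of a 1-D halo exchange in Coarray Fortran (Fortran 2008).
! "field", "left"/"right", and the halo width are hypothetical.
program halo_sketch
  implicit none
  real :: field(0:101)[*]        ! interior cells 1..100 plus two halo cells
  integer :: me, np, left, right

  me = this_image()
  np = num_images()
  left  = merge(np, me - 1, me == 1)   ! periodic neighbors
  right = merge(1,  me + 1, me == np)

  ! ... compute on field(1:100) ...

  sync all                       ! neighbors' interiors are now ready

  ! Pull (get): each image reads remote boundary data into its own halo.
  field(0)   = field(100)[left]
  field(101) = field(1)[right]

  ! Push (put) alternative: each image instead writes its own boundary
  ! into the neighbors' halo cells (one-sided in the other direction):
  !   field(101)[left] = field(1)
  !   field(0)[right]  = field(100)
  sync all                       ! halos consistent before the next step
end program halo_sketch
```

On an interconnect with direct PGAS support, a get maps onto a one-sided remote read and a put onto a one-sided remote write; whether one outperforms the other in practice is exactly what the CAF variants of CGPOP are designed to measure.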
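For readers unfamiliar with the kernel being modeled, a generic serial conjugate gradient iteration for A x = b (A symmetric positive definite) looks roughly like the following sketch. The names and the "matvec" callback are illustrative assumptions; CGPOP's actual solver (Figure 6) works on the blocked, distributed data described above, where the matrix-vector product requires a halo update and each dot product becomes a global reduction.

```fortran
! Generic, serial CG sketch; "matvec" stands in for CGPOP's
! stencil-based sparse matrix-vector product.
subroutine cg_sketch(n, b, x, tol, maxiter, matvec)
  implicit none
  integer, intent(in)    :: n, maxiter
  real(8), intent(in)    :: b(n), tol
  real(8), intent(inout) :: x(n)
  interface
    subroutine matvec(v, av)
      real(8), intent(in)  :: v(:)
      real(8), intent(out) :: av(:)
    end subroutine
  end interface
  real(8) :: r(n), p(n), q(n), rho, rho_old, alpha, beta
  integer :: iter

  call matvec(x, q)
  r = b - q                           ! initial residual
  p = r
  rho = dot_product(r, r)
  do iter = 1, maxiter
    if (sqrt(rho) < tol) exit
    call matvec(p, q)                 ! needs a halo update in CGPOP
    alpha = rho / dot_product(p, q)   ! dot products -> global reductions
    x = x + alpha * p
    r = r - alpha * q
    rho_old = rho
    rho = dot_product(r, r)
    beta = rho / rho_old
    p = r + beta * p
  end do
end subroutine cg_sketch
```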