User-Level Power Monitoring and Application Performance on Cray XC30 Supercomputers

Alistair Hart and Harvey Richardson (Cray Exascale Research Initiative Europe, King's Buildings, Edinburgh, UK)
Jens Doleschal, Thomas Ilsche and Mario Bielert (Technische Universität Dresden, ZIH, Dresden, Germany)
Matthew Kappel (Cray Inc., St. Paul MN, USA)

Abstract—In this paper we show how users can access and display new power measurement hardware counters on Cray XC30 systems (with and without accelerators), either directly or through extended prototypes of the Score-P performance measurement infrastructure and the Vampir application performance monitoring visualiser. This work leverages new power measurement and control features introduced in the Cray XC supercomputer range and targeted at both system administrators and users. We discuss how to use these counters to monitor energy consumption, both for complete jobs and also for application phases. We then use this information to investigate energy-efficient application placement options on Cray XC30 architectures, including mixed use of both CPU and GPU on accelerated nodes and interleaving processes from multiple applications on the same node.

Keywords-energy efficiency; power measurement; application analysis

I. INTRODUCTION

Energy and power consumption are increasingly important topics in High Performance Computing. Wholesale electricity prices have recently risen sharply in many regions of the world, including in the European states [1], prompting an interest in lowering the energy consumption of HPC systems. Environmental (and political) concerns also motivate HPC data centres to reduce their "carbon footprints". This has driven an interest in energy-efficient supercomputing, as shown by the rise in popularity of the "Green 500" list of the most efficient HPC systems since its introduction in 2007 [2]. In many regions, the need to balance demand across the electricity supply grid has also led to a requirement to be able to limit the maximum power drawn by a supercomputing system.

Looking forward, energy efficiency is one of the paramount design constraints on a realistic exascale supercomputer. The United States Department of Energy has set a goal of 20MW [3] for such a machine. Compared to a typical petaflop system, this requires a thousand-fold leap in computational speed in only three times the power budget.

Energy efficiency goes beyond hardware design, however. To deliver sustained, but energy-efficient, performance of real applications will require software engineering decisions, both at the systemware level and in the applications themselves. Such application decisions might be made when the software is designed, or at runtime via an autotuning framework.

For these to be possible, fine-grained instrumentation is needed to measure energy and power usage not just of overall HPC systems, but of individual components within the architecture. This information also needs to be accessible not just to privileged system administrators but also to individual users of the system, and in a way that is easily correlated with the execution of their applications.

In this paper, we describe some ways that users can monitor the energy and power consumption of their applications when running on the Cray XC supercomputer range. We exploit some of the new power measurement and control features that were introduced in the Cray Cascade-class architectures, aimed both at system administrators and non-privileged users.

We demonstrate how users can access and display new power measurement hardware counters on Cray XC30 systems (with and without accelerators) to measure energy consumption by node-specific hardware components. Of particular interest to users is charting power consumption as an application runs, and correlating this with other information in full-featured performance analysis tools. We are therefore extending the Score-P scalable performance measurement infrastructure for parallel codes [4] to gather power measurement counter values on Cray XC30 systems (with and without accelerators). The framework is compatible with a number of performance analysis tools, and we show how one example, Vampir [5], displays this data in combination with application monitoring information. We describe how this was achieved and give some usage examples of these tools on Cray XC30 systems, based on benchmarks and large-scale applications.

We will use this information to illustrate some of the design decisions that users may then take when tuning their applications, using a set of benchmark codes. We will do this for two Cray XC30 node architecture designs, one based solely on CPUs and another using GPU accelerators. For CPU systems, we will show that reducing the clockspeed of applications at runtime can reduce the overall energy consumption of applications. This shows that modern CPU hardware already deviates from the "received wisdom" that the most energy-efficient way to run an application is always the fastest, although, as we discuss, the energy savings are perhaps not currently large enough to justify the increased runtime.

For hybrid systems, the GPU does indeed offer higher performance for lower energy consumption, but both of these are further improved by harnessing both CPU and GPU in the calculation, and we show a simple way of doing this.

The structure of this paper is as follows. In Sec. II we describe the Cray XC30 hardware used for the results in this paper, some relevant features of the system software and the procedure for reading the user-accessible power counters. We use these counters to measure the power draw of the nodes when idle. In Sec. III we present performance results (both runtime- and energy-based) when running some benchmark codes in various configurations on CPUs and/or GPUs. The energy results here were obtained using simple API calls; in Sec. IV we describe how the Score-P/Vampir framework has been modified to provide detailed tracing of application energy use, and present results both for synthetic benchmarks and a large-scale parallel application (Gromacs). Finally, we draw our conclusions in Sec. V.

This work was carried out in part under the EU FP7 project "Collaborative Research in Exascale Systemware, Tools and Applications (CRESTA)" [6]. Some early results from this work were presented in Ref. [7]. Related work reported at this conference can be found in Refs. [8], [9], [10].

II. THE HARDWARE CONSIDERED

For this work, we consider two configurations of the "Cascade-class" Cray XC30 supercomputer. The first uses nodes containing two twelve-core Intel Xeon E5-2697 "Ivybridge" CPUs with a clockspeed of 2.7GHz (as used in, for instance, the UK "Archer" and ECMWF systems). The second uses hybrid nodes containing a single eight-core Intel Xeon E5-2670 "Sandybridge" CPU with a clockspeed of 2.6GHz and an Nvidia Tesla K20x "Kepler" GPU (as used in the Swiss "Piz Daint" system installed at CSCS). For convenience, we refer to these configurations as "Marble" and "Graphite" respectively.

A. Varying the CPU clockspeed at runtime

Linux OSes, including the Cray Linux Environment that runs on the Cray XC30 compute nodes, offer a CPUFreq governor that can be used to change the clock speed of the CPUs [11]. Of most interest here is the "userspace" governor, which allows the user to set the CPU to a specific frequency ("p-state").

The p-state is labelled by an integer that corresponds to the CPU clock frequency in kHz, so a p-state label of 2600000 corresponds to a 2.6GHz clock frequency. The only exception is that the Intel CPUs studied here have the option (depending on power and environmental constraints) of entering a temporary boost state above the nominal top frequency of the CPU; this option is available to the hardware if the p-state is set to 2601 (for a 2.6GHz processor). For the Sandybridge CPUs, the p-states ranged from 1200 to 2600, with a boost-able 2601 state above that (where the "turbo frequency" can be as high as 3.3GHz). For the Ivybridge CPUs, the range was from 1200 to 2700, with a boost-able 2701 state (where the frequency can reach 3.5GHz).

On Cray systems, the p-state of the node can be varied at application launch through the --p-state flag for the ALPS aprun job launch command (which also sets the governor to "userspace"). The range of available p-states on a given node is listed in the file:

    /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
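As a simple illustration of how these controls are used in practice, a jobscript might first list the available p-states and then pin the clock for a run. The following is a minimal sketch assuming a Marble node; the application name, rank counts and the 2.1GHz target are placeholder values, not settings taken from this paper.

    # List the p-states (CPU frequencies in kHz) available on a compute node;
    # running the query under aprun ensures it reports a compute node rather
    # than the login node.
    aprun -n 1 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies

    # Launch a 48-rank job on two Marble nodes (24 ranks per node) with the CPU
    # clock fixed at 2.1GHz; --p-state also selects the "userspace" governor.
    aprun --p-state=2100000 -n 48 -N 24 ./my_app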
B. Enhanced MPMD with Cray ALPS on the Cray XC30

The aprun command provides the facility to run more than one binary at a time as part of the same MPI application (sharing the MPI_COMM_WORLD communicator). Historically, this MPMD mode only supported running a single binary on all cores of a given node. In this work, we take advantage of a new capability to combine different binaries on the same node. This is done by using aprun to execute a script multiple times in parallel (once for each MPI rank in our job). Within this script, a new environment variable ALPS_APP_PE gives the MPI rank of that instance of the script. We can then use modular arithmetic (based on the desired number of ranks per node) to decide which MPI binary should be executed (and set appropriate control variables). The environment variable PMI_NO_FORK must be set in the main batch jobscript to ensure the binaries launch correctly. Fig. 1 gives an example of mixing (multiple) OpenACC and OpenMP executables on the same nodes, with appropriate -cc binding.
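Fig. 1 is not reproduced here, but the wrapper-script technique it illustrates can be sketched as follows. This is a minimal example only: it assumes four ranks per node, with the first rank on each node running an OpenACC (GPU) binary and the remaining ranks running an OpenMP (CPU) binary; the binary names, rank counts and thread counts are placeholders.

    #!/bin/bash
    # wrapper.sh -- run once per MPI rank, for example:
    #   export PMI_NO_FORK=1        # required so the binaries launch correctly
    #   aprun -n 16 -N 4 ./wrapper.sh
    # ALPS_APP_PE holds the MPI rank of this instance of the script.

    RANKS_PER_NODE=4                                # illustrative value
    LOCAL_RANK=$(( ALPS_APP_PE % RANKS_PER_NODE ))

    if [ ${LOCAL_RANK} -eq 0 ]; then
        # First rank on each node drives the GPU through the OpenACC binary.
        exec ./app_openacc
    else
        # Remaining ranks run the multithreaded CPU binary.
        export OMP_NUM_THREADS=2                    # illustrative value
        exec ./app_openmp
    fi

In a real job an appropriate -cc binding option would also be passed to aprun so that the GPU-driving rank and the OpenMP threads are pinned to sensible cores.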
C. User-accessible power counters

The Cray Linux Environment OS provides access to a set of power counters via files in the directory /sys/cray/pm_counters. Further details of how these counters are implemented can be found in Ref. [8].

The counter power records the instantaneous power consumption of the node in units of Watts. This node power includes CPU, memory and associated controllers and other node-level hardware.
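Because these counters are exposed as ordinary files, an unprivileged job can sample them directly. The sketch below polls the node-level power reading once per second while an application runs; the application name is a placeholder, and the script must itself run on a compute node (for example, launched via aprun) for the path to exist.

    #!/bin/bash
    # poll_power.sh -- record the node's instantaneous power draw (in Watts)
    # once per second while an application runs in the background.
    PM=/sys/cray/pm_counters

    ./my_app &                                      # placeholder application
    APP_PID=$!

    while kill -0 ${APP_PID} 2>/dev/null; do
        echo "$(date +%s) $(cat ${PM}/power)"
        sleep 1
    done > power_trace.txt
    wait ${APP_PID}

Where the system also exposes a cumulative energy counter in the same directory, reading it once before and once after a run (or an application phase) gives the energy consumed over that interval in the same way.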
