Introduction to the HPC Challenge Benchmark Suite∗
Jack J. Dongarra† and Piotr Luszczek‡

December 13, 2004

∗ This work was supported in part by DARPA, NSF, and DOE through the DARPA HPCS program under grant FA8750-04-1-0219.
† University of Tennessee Knoxville and Oak Ridge National Laboratory
‡ University of Tennessee Knoxville

Abstract

The HPC Challenge suite of benchmarks will examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. The HPC Challenge suite is being designed to augment the Top500 list, provide benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and provide a framework for including additional benchmarks. The HPC Challenge benchmarks are scalable, with the size of the data sets being a function of the largest HPL matrix for a system. The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. The suite is composed of several well-known computational kernels (STREAM, High Performance Linpack, matrix multiply – DGEMM, matrix transpose, FFT, RandomAccess, and bandwidth/latency tests) that attempt to span high and low spatial and temporal locality space.

1 High Productivity Computing Systems

The DARPA High Productivity Computing Systems (HPCS) program [1] is focused on providing a new generation of economically viable, high-productivity computing systems for national security and for the industrial user community. HPCS program researchers have initiated a fundamental reassessment of how we define and measure performance, programmability, portability, robustness and, ultimately, productivity in the HPC domain.

The HPCS program seeks to create trans-Petaflop systems of significant value to the Government HPC community. Such value will be determined by assessing many additional factors beyond just theoretical peak flops (floating-point operations). Ultimately, the goal is to decrease the time-to-solution, which means decreasing both the execution time and the development time of an application on a particular system. Evaluating the capabilities of a system with respect to these goals requires a different assessment process. The goal of the HPCS assessment activity is to prototype and baseline a process that can be transitioned to the acquisition community for 2010 procurements.

The most novel part of the assessment activity will be the effort to measure/predict the ease or difficulty of developing HPC applications. Currently, there is no quantitative methodology for comparing the development-time impact of various HPC programming technologies. To achieve this goal, the HPCS program is using a variety of tools, including:

• Application of code metrics to existing HPC codes,
• Several prototype analytic models of development time,
• Interface characterization (e.g., programming language, parallel model, memory model, communication model),
• Scalable benchmarks designed for testing both performance and programmability,
• Classroom software engineering experiments, and
• Human-validated demonstrations.

These tools will provide the baseline data necessary for modeling development time and allow the new technologies developed under HPCS to be assessed quantitatively. As part of this effort we are developing a scalable benchmark for the HPCS systems.

The basic goal of performance modeling is to measure, predict, and understand the performance of a computer program or set of programs on a computer system. The applications of performance modeling are numerous, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems.

2 Motivation

The DARPA High Productivity Computing Systems (HPCS) program has initiated a fundamental reassessment of how we define and measure performance, programmability, portability, robustness and, ultimately, productivity in the HPC domain. With this in mind, a set of kernels was needed to test and rate a system. The HPC Challenge suite of benchmarks consists of four local (matrix-matrix multiply, STREAM, RandomAccess, and FFT) and four global (High Performance Linpack – HPL, parallel matrix transpose – PTRANS, RandomAccess, and FFT) kernel benchmarks. HPC Challenge is designed to approximately bound computations of high and low spatial and temporal locality (see Figure 1). In addition, because the HPC Challenge kernels consist of simple mathematical operations, they provide a unique opportunity to look at language and parallel programming model issues. In the end, the benchmark is to serve both the system user and designer communities [2].

3 The Benchmark Tests

This first phase of the project has developed, hardened, and reported on a number of benchmarks. The collection includes tests on a single processor (local) and tests over the complete system (global). In particular, to characterize the architecture of the system we consider three testing scenarios:

1. Local – only a single processor is performing computations.
2. Embarrassingly Parallel – each processor in the entire system is performing computations, but they do not communicate with each other explicitly.
3. Global – all processors in the system are performing computations and they explicitly communicate with each other.

The HPC Challenge benchmark consists at this time of 7 performance tests: HPL [3], STREAM [4], RandomAccess, PTRANS, FFT (implemented using FFTE [5]), DGEMM [6, 7], and b_eff Latency/Bandwidth [8, 9, 10]. HPL is the Linpack TPP (Toward Peak Performance) benchmark; the test stresses the floating-point performance of a system. STREAM is a benchmark that measures sustainable memory bandwidth (in GB/s). RandomAccess measures the rate of random updates of memory. PTRANS measures the rate of transfer for large arrays of data from a multiprocessor's memory. Latency/Bandwidth measures (as the name suggests) the latency and bandwidth of communication patterns of increasing complexity between as many nodes as is time-wise feasible.

Many of the aforementioned tests were widely used before HPC Challenge was created. At first, this may make our benchmark seem to be merely a packaging effort. However, almost all components of HPC Challenge were augmented from their original form to provide a consistent verification and reporting scheme. We should also stress the importance of running these very tests on a single machine and having the results available at once. The tests were useful to the HPC community separately before, and within the unified HPC Challenge framework they create an unprecedented view of the performance characterization of a system – a comprehensive view that captures the data under the same conditions and allows for a variety of analyses depending on end-user needs.

[Figure 1: Targeted application areas in the memory access locality space. Axes: temporal locality vs. spatial locality; the kernels (PTRANS, HPL, STREAM, DGEMM, RandomAccess, FFT) and representative application areas (CFD, radar cross-section, TSP, DSP) are placed in this space.]

Each of the included tests examines system performance at various points of the conceptual spatial and temporal locality space shown in Figure 1. The rationale for this selection of tests is to measure performance bounds on metrics important to HPC applications. The expected behavior of an application is to go through various locality space points during runtime. Consequently, an application may be represented as a point in the locality space that is an average (possibly time-weighted) of its various locality behaviors. Alternatively, a decomposition can be made into time-disjoint periods in which the application exhibits a single locality characteristic; the application's performance is then obtained by combining the partial results from each period.

Another aspect of performance assessment addressed by HPC Challenge is the ability to optimize benchmark code. For that, we allow two different runs to be reported:

• Base run done with the provided reference implementation.
• Optimized run that uses architecture-specific optimizations.

The base run, in a sense, represents the behavior of legacy code because it is conservatively written using only widely available programming languages and libraries. It reflects a commonly used approach to parallel processing sometimes referred

To stress the productivity aspect of the HPC Challenge benchmark, we require that information about the changes made to the original code be submitted together with the benchmark results. While we understand that full disclosure of optimization techniques may sometimes be impossible to obtain (due to, for example, trade secrets), we ask for at least some guidance for users who would like to apply similar optimizations in their applications.

4 Benchmark Details

Almost all tests included in our suite operate on either matrices or vectors. The size of the former we will denote below as n and that of the latter as m. The following holds throughout the tests:

n^2 ≈ m ≈ Available Memory

In other words, the data for each test is scaled so that the matrices or vectors are large enough to fill almost all available memory.

HPL is the Linpack TPP (Toward Peak Performance) variant of the original Linpack benchmark, which measures the floating-point rate of execution for solving a
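The sustained-bandwidth measurement that STREAM performs (Section 3) can be illustrated with a minimal sketch of its Triad kernel, a[i] = b[i] + alpha*c[i]. The reference implementation is in C with carefully sized arrays and repeated trials; the Python below is only illustrative, and the array length and single-shot timing are simplifications, not the benchmark's actual parameters.

```python
import time

def stream_triad(n, alpha=3.0):
    """Illustrative sketch of STREAM's Triad kernel: a[i] = b[i] + alpha * c[i].

    Bandwidth counts three arrays of 8-byte doubles per element
    (two loads plus one store), as STREAM does for Triad.
    """
    b = [1.0] * n
    c = [2.0] * n
    t0 = time.perf_counter()
    a = [b[i] + alpha * c[i] for i in range(n)]
    t1 = time.perf_counter()
    gb_per_s = 3 * 8 * n / (t1 - t0) / 1e9
    return a, gb_per_s

a, gb_per_s = stream_triad(1_000_000)
print(f"Triad bandwidth (illustrative only): {gb_per_s:.2f} GB/s")
```

A pure-Python loop measures interpreter overhead as much as memory bandwidth, which is precisely why the suite ships compiled reference code; the sketch only shows what is timed and how the GB/s figure is derived.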
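RandomAccess (Section 3) measures the rate of read-modify-write updates at pseudorandom table locations. The official benchmark prescribes a specific 64-bit pseudorandom sequence; the xorshift generator below is a stand-in chosen only for brevity, so this sketch shows the access pattern (an XOR update into a power-of-two table) rather than the official kernel.

```python
def random_access_sketch(log2_size=16, n_updates=100_000):
    """Sketch of the RandomAccess update loop: T[x mod size] ^= x.

    Uses Marsaglia's xorshift64 as a stand-in for the benchmark's
    official pseudorandom sequence.
    """
    size = 1 << log2_size
    table = list(range(size))        # table initialized to T[i] = i
    mask = (1 << 64) - 1
    x = 0x9E3779B97F4A7C15           # arbitrary nonzero seed (an assumption)
    for _ in range(n_updates):
        # xorshift64 step (stand-in generator, not the benchmark's)
        x ^= (x << 13) & mask
        x ^= x >> 7
        x ^= (x << 17) & mask
        table[x & (size - 1)] ^= x   # the characteristic XOR update
    return table

table = random_access_sketch()
```

The near-uniform random indices defeat caches and prefetchers, which is what makes the reported GUPS (giga-updates per second) rate a probe of memory-system latency rather than bandwidth.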
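PTRANS (Section 3) stresses interconnect bandwidth through a global matrix transpose, computing A = A^T + B over block-distributed matrices. A serial sketch of the arithmetic (the distribution and communication, which are the point of the benchmark, are omitted here):

```python
def ptrans_update(A, B):
    """Serial sketch of the PTRANS operation A <- A^T + B on n x n matrices.

    In the benchmark the matrices are block-distributed, so forming the
    transpose forces pairs of processors to exchange large blocks of data.
    """
    n = len(A)
    return [[A[j][i] + B[i][j] for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
C = ptrans_update(A, B)   # [[11, 23], [32, 44]]
```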
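The scaling rule n^2 ≈ m ≈ Available Memory from Section 4 translates directly into problem sizes: with 8-byte elements and M usable bytes, the matrix dimension is n ≈ sqrt(M/8) and the vector length is m ≈ M/8. A sketch of that arithmetic (the 80% usable-memory fraction is an illustrative assumption, not a value prescribed by the suite):

```python
import math

def problem_sizes(mem_bytes, fill_fraction=0.8, elem_bytes=8):
    """Derive matrix dimension n and vector length m so the data nearly
    fills memory: n*n*elem_bytes ≈ m*elem_bytes ≈ usable memory."""
    usable = mem_bytes * fill_fraction
    n = int(math.sqrt(usable / elem_bytes))  # n^2 doubles fill usable memory
    m = int(usable / elem_bytes)             # m doubles fill usable memory
    return n, m

n, m = problem_sizes(16 * 2**30)  # e.g., a node with 16 GiB of memory
```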