
TECHNICAL PAPER

Parallelism, Compute Intensity, and Data Vectorization: The CRAY APP

Bradley R. Carlile
Cray Research Superservers, Inc.
3601 SW Murray Blvd., Beaverton, Oregon 97005
[email protected]
(503) 641-3151 (phone); (503) 641-4497 (fax)

Submitted to: SuperComputing '93, Portland, November 1993.

Abstract

High performance on parallel algorithms requires high delivered memory bandwidth, fast computations, and minimal parallel overheads. These three requirements have far-reaching ramifications for complete system design and performance. To satisfy the high computation rates of parallel programs, memory inefficiencies can be avoided by using knowledge of an application's data access patterns and of the interaction between computations and data movement. Compute intensity (the ratio of compute operations to memory accesses required) is central to understanding parallel performance. Several other characteristics of parallel programs, and techniques to exploit them, are also discussed. One of these techniques is data vectorization, which focuses vectorization techniques on the data movement in a code section. These techniques have been realized in the hardware and software design of the CRAY APP shared-memory system.

1.0 Introduction

High performance on parallel programs depends on the following requirements:

1) High memory bandwidth
2) Fast computations
3) Minimal parallel overheads

These three requirements have far-reaching ramifications for performance, ease of programming, programming model, optimization techniques, and suitable types of applications. An understanding of these requirements and a hardware/software codesign process has led to the development of the CRAY APP shared-memory system. Shared-memory systems do not have the split address spaces of distributed-memory machines, which require careful data distribution for performance. In addition, automatic parallelizing compilers for shared-memory machines are a maturing technology.

The CRAY APP is a general-purpose, 84-processor, multiple instruction multiple data (MIMD) shared-memory system [2] [19]. Its use of commodity processors makes it a very cost-effective machine. It is a multi-user compute server programmed using autoparallelizing FORTRAN or C in a Unix environment [1]. Peak performance is 6.7 Gflops for 32-bit computations and 3.4 Gflops for 64-bit computations. The CRAY APP was designed as a production machine with an emphasis on ease of use.

The CRAY APP uses commercial processors that can issue multiple pipelined instructions to deliver fast computations in parallel programs. Loops are optimized on multiple-instruction-issue processors using software pipelining techniques [12] [23], which allow such processors to be viewed as efficient programmable vector processors.

The key to understanding high-performance system design is understanding the characteristics of the important user applications. Memory usage is one of the most critical and often overlooked characteristics of programs, and it is becoming more critical as the gap between processor speed and memory speed grows [9]. The memory bandwidth of a system is also a major contributor to its price point. At any particular memory bandwidth, efficient use of that bandwidth can provide higher performance than a larger bandwidth used inefficiently. This paper focuses on several aspects of memory usage and on some parallel issues.

2.0 Memory Bandwidth

Memory bandwidth is directly related to performance. The relationship between compute operations and the data required is called compute intensity [10] [11]; others have subsequently defined the reciprocal of compute intensity as R [6].
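As a concrete illustration of this ratio, here is a minimal Python sketch. The AXPY loop, its operation and word counts, and all numbers in it are illustrative assumptions, not figures from the paper:

```python
# Compute intensity = operations performed / data words accessed.
# Illustrative example: an AXPY loop, y[i] = a*x[i] + y[i], performs
# 2 flops per element and touches 3 words (load x, load y, store y).

def compute_intensity(operations, words_accessed):
    """Ratio of compute operations to memory words moved (ops/word)."""
    return operations / words_accessed

n = 1_000_000
axpy = compute_intensity(2 * n, 3 * n)     # 2/3 ops per word
sine = compute_intensity(23 * n, 2 * n)    # 11.5 ops per word
print(axpy, sine)
```

A loop like AXPY, with an intensity below 1, is memory-bound on almost any machine, while an 11.5 ops/word evaluation leaves far more room to keep the processors busy at the same memory bandwidth.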
Compute intensity is defined as follows:

    Compute Intensity = Number of Operations / Number of Data Words Accessed    (1)

For numerical computations, the operation count is usually expressed in floating-point operations; it is equally valid to use an integer operation count for integer-dominated computations. Most applications have high compute intensity. High compute intensity is often found in nested loops that reuse data, and in calculations that perform complicated operations on data.

The compute intensity of an algorithm can be used to determine the performance bound of an application on a given memory system. This estimate is based on delivered memory bandwidth:

    Performance = Compute Intensity x Memory Bandwidth    (2)

or

    Operations/Second = (Operations/Word) x (Words/Second)    (3)

This formula gives the maximum performance that the memory system can sustain for a given application. Even though it is completely independent of the floating-point processing capabilities of a given machine, it can often be a better measure. A different compiler focus or a different algorithm implementation can often greatly increase the realized compute intensity of an application, and increases in compute intensity are reflected in higher execution performance at any memory bandwidth.

Execution time can likewise be estimated from the number of memory accesses required:

    Time = Memory Accesses / Memory Bandwidth    (4)

Either equation (2) or (4) can be used to determine the percentage of memory bandwidth achieved on a given application. The percentage of memory bandwidth delivered is a particularly helpful metric when optimizing the performance of an application. The CRAY APP often delivers 60-90% of total memory bandwidth during the execution of parallel programs.

Most applications have a great deal of compute intensity. Even for small data sizes, many important algorithms exceed the design point of current small- or large-scale architectures; most architectures have a much higher performance potential based on memory bandwidth. For example, one-dimensional FFTs of length 2K have a compute intensity of 13.75 (see Table 1). Using this compute intensity and equation (2), one could support 220 Gflops on the memory bandwidth of a CRAY Y-MP/C90 (16 Gigawords/s to the vector units). However, for this algorithm the performance is limited to less than the peak computational rate of 16 Gflops. A compiler could produce code with a compute intensity of only 1.0 and still achieve maximum performance.

Relative to the problem size, most algorithms have either constant, logarithmic-growth, or linear-growth compute intensity. Current cache-based processors have enough on-chip storage to realize moderate compute intensities of 4 to 30 for a wide variety of applications. Our experience is that half of all applications have loops with constant compute intensity of moderate value. Table 1 contains an example of each of these classes of compute intensity.

    Algorithm        Operation Count   Data Words Used   Compute Intensity (Ops/Word)
    Sine             23N               2N                11.5
    Complex 1D FFT   5N log2 N         4N                (5/4) log2 N
    Real Solver      (2/3) N^3         N^2               (2/3) N

    Table 1. Compute Intensities of Basic Algorithms

The compute intensity in an application will often be different for each basic code block (loop, nested loops, conditional, etc.) within the application. The compute intensity of each basic block depends on the system architecture and the compiler's optimization strategy. If the program consists of a linear sequence of basic blocks with different compute intensities, then the realized compute intensity, IR, for the entire sequence is the weighted average of the compute intensity of each block, Ib, multiplied by the percentage of work in that block, Pb:

          n
    IR = SUM  Ib x Pb    (5)
         b=1
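Equations (2) and (5) combine into a small performance model. The Python sketch below is a hedged illustration: the per-block intensities and work fractions are hypothetical, while the 13.75 ops/word and 16 Gigawords/s figures are the C90 FFT example from the text:

```python
# Equation (5): realized compute intensity IR for a linear sequence of
# basic blocks is the work-weighted average of per-block intensities.
# Block values below are hypothetical, for illustration only.

def realized_intensity(blocks):
    """blocks: list of (Ib, Pb) pairs; work fractions Pb must sum to 1."""
    assert abs(sum(p for _, p in blocks) - 1.0) < 1e-9
    return sum(i * p for i, p in blocks)

def performance_bound(intensity, bandwidth_words_per_s):
    """Equation (2): Performance = Compute Intensity x Memory Bandwidth."""
    return intensity * bandwidth_words_per_s

blocks = [(13.75, 0.6), (2.0, 0.3), (0.5, 0.1)]   # hypothetical (Ib, Pb)
print(realized_intensity(blocks))                 # 8.9 ops/word
print(performance_bound(13.75, 16e9))             # C90 FFT example: 2.2e11 (220 Gflops)
```

The same two functions reproduce the C90 argument above: 13.75 ops/word times 16 Gigawords/s gives a 220 Gflops memory-system bound, far above the 16 Gflops compute peak.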
The compute intensity and the percentage of work in each basic block are often dependent on the problem size of an application. Frequently, the compute intensity will grow with an increase in problem size.

Another way to estimate performance is to base it on the number of memory accesses required, as in equation (4). Sometimes it is easier to estimate the required data accesses than the compute intensity. This estimate is most accurate when the memory bandwidth of an application is saturated.

It is helpful to define another ratio, called leverage, to quantify the data movement in a particular implementation. Leverage is defined as follows:

    Leverage = Compute Time / Data Movement Time    (6)

Leverage is directly related to compute intensity on a given machine: compute time is related to the operation count, and data movement time is related to the number of data points involved in the computation. An algorithm with a high compute intensity will often have a high leverage. However, it is possible for an algorithm with a low compute intensity to have a high leverage. This results either when a calculation takes a long time to perform the floating-point operations or when many non-floating-point operations are performed.

Leverage can be used to explain how several processors can work in parallel to saturate the available memory bandwidth. For example, if a particular loop has a leverage of 11, it will spend only 9% of its execution time moving data. If the computation is in a parallel region of code, eleven processors could be computing while one processor is moving data. In this way, twelve processors can saturate the memory bandwidth and maximize the performance achieved on the memory system.

On cache-based systems, a number of cache behaviors can cause poor memory bandwidth utilization and thereby degrade compute performance. The problems can be grouped into the three basic categories of cache miss handling (MISS), bandwidth shortcomings (BW), and latency issues (LAT). These are shown with their associated causes in Table 2.

    Cache Problem (type)     Line Size   Miss Penalty   Write Policy   Set Associativity
    Non-Stride-1 slow (BW)   yes         yes            no             no
    Over-fetch (BW)          yes         no             yes            yes
    Write BW Waste (BW)      yes         no             no             yes
    Interference (MISS)      yes         no             yes            yes
    Miss Stalls (MISS)       yes         yes            no             no
    Latency variance (LAT)   no          yes            no             yes

    Table 2. Cache Problems and Causes

Losing memory bandwidth is a chief concern in these systems, since the delivered "cache-friendly" stride-1 data-fetching memory bandwidth of current commercial micro-
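The leverage arithmetic above can be sketched in Python. This is a minimal model, assuming perfect overlap of computation and data movement; the leverage of 11 matches the worked example in the text:

```python
# Leverage = compute time / data movement time (equation 6).
# With leverage L, data movement takes 1/L of the compute time, so one
# processor can stream data while L others compute: L + 1 processors
# saturate the memory system (12 processors for the L = 11 example).

def movement_to_compute_ratio(leverage):
    """Data movement time as a fraction of compute time."""
    return 1.0 / leverage

def processors_to_saturate(leverage):
    """One data-moving processor plus `leverage` computing processors."""
    return int(leverage) + 1

L = 11
print(movement_to_compute_ratio(L))   # ~0.09, the "9%" in the text
print(processors_to_saturate(L))      # 12
```

The design point follows directly: the higher a loop's leverage, the more processors a single memory port can keep fed before bandwidth becomes the bottleneck.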