The LINPACK Benchmark on the Fujitsu AP 1000∗

Richard P. Brent†
Computer Sciences Laboratory
Australian National University
Canberra, Australia

∗ Appeared in Proc. Frontiers '92 (McLean, VA, October 1992), IEEE Press, 1992, 128–135. © 1992, IEEE.
† E-mail address: [email protected]

Abstract

We describe an implementation of the LINPACK Benchmark on the Fujitsu AP 1000. Design considerations include communication primitives, data distribution, use of blocking to reduce memory references, and effective use of the cache. Some details of our blocking strategy appear to be new. The results show that the LINPACK Benchmark can be implemented efficiently on the AP 1000, and it is possible to attain about 80 percent of the theoretical peak performance for large problems.

1 Introduction

This paper describes an implementation of the LINPACK Benchmark [6] on the Fujitsu AP 1000. Preliminary results were reported in [2].

The LINPACK Benchmark involves the solution of a nonsingular system of n linear equations in n unknowns, using Gaussian elimination with partial pivoting and double-precision (64-bit) floating-point arithmetic. We are interested in large systems, since there is no point in using a machine such as the AP 1000 to solve single small systems.

The manner in which matrices are stored is very important on a distributed-memory machine such as the AP 1000. In Section 2 we discuss several matrix storage conventions, including the one adopted for the benchmark programs.

Section 3 is concerned with the algorithm used for parallel LU factorization. Various other aspects of the design of the benchmark programs are mentioned in Section 4. For example, we discuss the use of blocking to improve efficiency, effective use of the cache, and communication primitives.

In Section 5 we present results obtained on AP 1000 machines with up to 512 cells. Some conclusions are given in Section 6.

1.1 The Fujitsu AP 1000

The Fujitsu AP 1000 (also known as the "CAP 2") is a MIMD machine with up to 1024 independent 25 MHz SPARC¹ processors, which are called cells. Each cell has 16 MByte of dynamic RAM and 128 KByte of cache. In some respects the AP 1000 is similar to the CM 5, but the AP 1000 does not have vector units, so the floating-point speed of a cell is constrained by the performance of its Weitek floating-point unit (5.56 Mflop/cell for overlapped multiply and add in 64-bit arithmetic). The topology of the AP 1000 is a torus, with hardware support for wormhole routing [5]. There are three communication networks – the B-net, for communication with the host; the S-net, for synchronization; and the T-net, for communication between cells. The T-net is the most significant for us. It provides a theoretical bandwidth of 25 MByte/sec between cells. In practice, due to overheads such as copying buffers, about 6 MByte/sec is achievable by user programs. For details of the AP 1000 architecture and software environment, see [3, 4].

¹ SPARC™ is a trademark of Sun Microsystems, Inc.

1.2 Notation

Apart from the optional use of assembler for some linear algebra kernels, our programs are written in the C language. For this reason, we use C indexing conventions, i.e. rows and columns are indexed from 0. Because we often need to take row/column indices modulo some number related to the machine configuration, indexing from 0 is actually more convenient than indexing from 1. In C a matrix with m ≤ MMAX rows and n ≤ NMAX columns may be declared as a[MMAX][NMAX], where MMAX and NMAX are constants known at compile-time. The matrix elements are a[i][j], 0 ≤ i < m, 0 ≤ j < n. Rows are stored in contiguous locations, i.e. a[i][j] is adjacent to a[i][j+1]. (Fortran conventions are to store columns in contiguous locations, and to start indexing from 1 rather than from 0.)
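As a small hypothetical illustration of these conventions (not taken from the benchmark programs; MMAX, NMAX and the dimensions below are arbitrary), the following C fragment declares such a matrix and checks the row-major layout:

    #include <stdio.h>

    #define MMAX 4
    #define NMAX 5

    int main(void) {
        double a[MMAX][NMAX];  /* rows stored in contiguous locations */
        int m = 3, n = 4;      /* actual sizes: m <= MMAX, n <= NMAX  */

        /* Matrix elements a[i][j], 0 <= i < m, 0 <= j < n, indexed from 0. */
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = 10.0 * i + j;

        /* a[i][j] is adjacent to a[i][j+1] ... */
        printf("&a[1][2] = %p\n&a[1][3] = %p\n",
               (void *)&a[1][2], (void *)&a[1][3]);

        /* ... and consecutive rows are NMAX (not n) doubles apart. */
        printf("row stride = %zu doubles\n", sizeof a[0] / sizeof a[0][0]);
        return 0;
    }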
When referring to the AP 1000 configuration as ncelx by ncely, we mean that the AP 1000 is configured as a torus with ncelx cells in the horizontal (X) direction (corresponding to index j above) and ncely cells in the vertical (Y) direction (corresponding to index i). Here "torus" implies that the cell connections wrap around, i.e. cell (x, y) (for 0 ≤ x < ncelx, 0 ≤ y < ncely) is connected to cells (x ± 1 mod ncelx, y ± 1 mod ncely). Wrap-around is not important for the LINPACK Benchmark – our algorithm would work almost as well on a machine with a rectangular grid without wrap-around.

The total number of cells is P = ncelx · ncely. The aspect ratio is the ratio ncely : ncelx. If the aspect ratio is 1 or 2 we sometimes use s = ncelx. In this case ncely = s or 2s, and P = s^2 or 2s^2.

1.3 Restrictions

Our first implementation was restricted to a square AP 1000 array, i.e. one with aspect ratio 1. Later this was extended in an ad hoc manner to aspect ratio 2. This was sufficient for benchmark purposes, since all AP 1000 machines have 2^k cells for some integer k, and can be configured with aspect ratio 1 (if k is even) or 2 (if k is odd). For simplicity, the description in Sections 3 and 4 assumes aspect ratio 1.

A minor restriction is that n is a multiple of ncely. This is not a serious constraint, since it is easy to add up to ncely − 1 rows and columns (zero except for unit diagonal entries) to get a slightly larger system with essentially the same solution.

2 Matrix storage conventions

On the AP 1000 each cell has a local memory which is accessible to other cells only via explicit message passing. It is customary to partition data such as matrices and vectors across the local memories of several cells. This is essential for problems which are too large to fit in the memory of one cell, and in any case it is usually desirable for load-balancing reasons. Since vectors are a special case of matrices, we consider the partitioning of a matrix A.

Before considering how to map data onto the AP 1000, it is worth considering what forms of data movement are common in Gaussian elimination with partial pivoting. Most other linear algebra algorithms have similar data movement requirements. To perform Gaussian elimination we need –

Row/column broadcast. For example, the pivot row has to be sent to processors responsible for other rows, so that they can be modified by the addition of a multiple of the pivot row. The column which defines the multipliers also has to be broadcast.

Row/column send/receive. For example, if pivoting is implemented by explicitly interchanging rows, then at each pivoting step two rows may have to be interchanged.

Row/column scan. Here we want to apply an associative operator θ to data in one (or more) rows or columns. For example, when selecting the pivot row it is necessary to find the index of the element of maximum absolute value in (part of) a column. This may be computed with a scan if we define an associative operator θ as follows (a C sketch of θ appears after this list):

    (a, i) θ (b, j) = (a, i), if |a| ≥ |b|;
                      (b, j), otherwise.

Other useful associative operators are addition (of scalars or vectors) and concatenation (of vectors).
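To make the scan concrete, here is one way θ might be written in C. This is a sketch with our own hypothetical names (pair, theta, local_pivot), not code from the benchmark, and it shows only the local part of the reduction; the AP 1000 communication calls that combine results across cells are omitted:

    #include <math.h>

    /* A (value, index) pair, the operand type of the operator theta. */
    typedef struct { double a; int i; } pair;

    /* theta((a,i),(b,j)) = (a,i) if |a| >= |b|, else (b,j).
       The operator is associative, so a scan may combine
       partial results from different cells in any grouping. */
    static pair theta(pair x, pair y) {
        return (fabs(x.a) >= fabs(y.a)) ? x : y;
    }

    /* Reduce a local column segment col[0..len-1] (len >= 1) to the
       (value, global row index) pair of the element of maximum
       absolute value; base is the global row index of col[0]. */
    pair local_pivot(const double *col, int len, int base) {
        pair best = { col[0], base };
        for (int k = 1; k < len; k++) {
            pair next = { col[k], base + k };
            best = theta(best, next);
        }
        return best;
    }

Since θ is associative, the partial results produced by the cells holding segments of the same column can then be combined in any order, e.g. by a vertical scan or reduction, to locate the global pivot.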
It is desirable for data to be distributed in a "natural" manner, so that the operations of row/column broadcast/send/receive/scan can be implemented efficiently [1].

A simple mapping of data to cells is the column-wrapped representation. Here column i of a matrix is stored in the memory associated with cell i mod P, assuming that the P cells are numbered 0, 1, ..., P − 1. Although simple, and widely used in parallel implementations of Gaussian elimination [13, 14, 18, 19], the column-wrapped representation has some disadvantages – lack of symmetry, poor load balancing, and poor communication bandwidth for column broadcast – see [1]. Similar comments apply to the analogous row-wrapped representation.

Another conceptually simple mapping is the blocked representation. Here A is partitioned into at most P blocks of contiguous elements, and at most one block is stored in each cell. However, as described in [1], the blocked representation is inconvenient for matrix operations if several matrices are stored with different block dimensions, and it suffers from a load-balancing problem.

Harder to visualize, but often better than the row/column-wrapped or blocked representations, is the scattered representation [8], sometimes called dot mode [17] or cut-and-pile. Assume as above that the cells form an s by s grid. Then the matrix element a_{i,j} is stored in cell (j mod s, i mod s) with local indices (⌊i/s⌋, ⌊j/s⌋). The matrices stored locally on each cell have the same shape (e.g. triangular, ...) as the global matrix A, so the computational load on each cell is approximately the same.

The blocked and scattered representations do not require a configuration with aspect ratio 1, although matrix multiplication and transposition are easier to implement if the aspect ratio is 1 than in the general case [20]. In the general case, the scattered representation of A on an ncelx by ncely configuration stores matrix element a_{i,j} on cell (j mod ncelx, i mod ncely).
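The mapping itself is easy to state in C. The following sketch uses our own hypothetical helper scatter_loc (not part of the benchmark code); the local-index formulas for the general case are our natural extension of the square-grid case quoted above, which is an assumption rather than something stated in the text:

    /* Scattered representation on an ncelx by ncely configuration:
       global element a(i,j) lives on cell (j mod ncelx, i mod ncely).
       Local indices below generalize the square case (ncelx = ncely = s),
       where they reduce to (floor(i/s), floor(j/s)). */

    typedef struct { int cellx, celly; int li, lj; } location;

    location scatter_loc(int i, int j, int ncelx, int ncely) {
        location loc;
        loc.cellx = j % ncelx;   /* X coordinate of owning cell  */
        loc.celly = i % ncely;   /* Y coordinate of owning cell  */
        loc.li    = i / ncely;   /* local row index on that cell */
        loc.lj    = j / ncelx;   /* local column index           */
        return loc;
    }

For example, with ncelx = ncely = 4, element a_{5,7} is stored on cell (3, 1) with local indices (1, 1).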
3 Parallel LU factorization

In the following we shall only consider the transformation of A to U, as the transformation of Ā to Ū is similar.

If A has n rows, the following steps have to be repeated n − 1 times, where the k-th iteration completes computation of the k-th row of U –