Cache-Oblivious Algorithms (Extended Abstract) — Matteo Frigo, Charles E. Leiserson, Harald Prokop, Sridhar Ramachandran

Total Pages: 16

File Type: PDF, Size: 1020 KB

Cache-Oblivious Algorithms (EXTENDED ABSTRACT)

Matteo Frigo, Charles E. Leiserson, Harald Prokop, Sridhar Ramachandran
MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139

(This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) under Grant F30602-97-1-0270. Matteo Frigo was supported in part by a Digital Equipment Corporation fellowship.)

Abstract

This paper presents asymptotically optimal algorithms for rectangular matrix transpose, FFT, and sorting on computers with multiple levels of caching. Unlike previous optimal algorithms, these algorithms are cache oblivious: no variables dependent on hardware parameters, such as cache size and cache-line length, need to be tuned to achieve optimality. Nevertheless, these algorithms use an optimal amount of work and move data optimally among multiple levels of cache. For a cache with size Z and cache-line length L, where Z = Ω(L²), the number of cache misses for an m × n matrix transpose is Θ(1 + mn/L). The number of cache misses for either an n-point FFT or the sorting of n numbers is Θ(1 + (n/L)(1 + log_Z n)). We also give a Θ(mnp)-work algorithm to multiply an m × n matrix by an n × p matrix that incurs Θ(1 + (mn + np + mp)/L + mnp/(L√Z)) cache faults.

We introduce an "ideal-cache" model to analyze our algorithms. We prove that an optimal cache-oblivious algorithm designed for two levels of memory is also optimal for multiple levels, and that the assumption of optimal replacement in the ideal-cache model can be simulated efficiently by LRU replacement. We also provide preliminary empirical results on the effectiveness of cache-oblivious algorithms in practice.

1. Introduction

Resource-oblivious algorithms that nevertheless use resources efficiently offer advantages of simplicity and portability over resource-aware algorithms whose resource usage must be programmed explicitly. In this paper, we study cache resources, specifically, the hierarchy of memories in modern computers. We exhibit several "cache-oblivious" algorithms that use cache as effectively as "cache-aware" algorithms.

Before discussing the notion of cache obliviousness, we first introduce the (Z, L) ideal-cache model to study the cache complexity of algorithms. This model, which is illustrated in Figure 1, consists of a computer with a two-level memory hierarchy consisting of an ideal (data) cache of Z words and an arbitrarily large main memory. Because the actual size of words in a computer is typically a small, fixed size (4 bytes, 8 bytes, etc.), we shall assume that word size is constant; the particular constant does not affect our asymptotic analyses. The cache is partitioned into cache lines, each consisting of L consecutive words which are always moved together between cache and main memory. Cache designers typically use L > 1, banking on spatial locality to amortize the overhead of moving the cache line. We shall generally assume in this paper that the cache is tall:

  Z = Ω(L²),  (1)

which is usually true in practice.

[Figure 1: The ideal-cache model. A CPU performing W work operates on a cache of Z/L lines, each of length L, backed by an arbitrarily large main memory; Q counts the cache misses, and lines are managed by an optimal replacement strategy.]

The processor can only reference words that reside in the cache. If the referenced word belongs to a line already in cache, a cache hit occurs, and the word is delivered to the processor. Otherwise, a cache miss occurs, and the line is fetched into the cache. The ideal cache is fully associative [20, Ch. 5]: cache lines can be stored anywhere in the cache. If the cache is full, a cache line must be evicted. The ideal cache uses the optimal off-line strategy of replacing the cache line whose next access is furthest in the future [7], and thus it exploits temporal locality perfectly.

Unlike various other hierarchical-memory models [1, 2, 5, 8] in which algorithms are analyzed in terms of a single measure, the ideal-cache model uses two measures. An algorithm with an input of size n is measured by its work complexity W(n)—its conventional running time in a RAM model [4]—and its cache complexity Q(n; Z, L)—the number of cache misses it incurs as a function of the size Z and line length L of the ideal cache. When Z and L are clear from context, we denote the cache complexity simply as Q(n) to ease notation.

We define an algorithm to be cache aware if it contains parameters (set at either compile-time or runtime) that can be tuned to optimize the cache complexity for the particular cache size and line length. Otherwise, the algorithm is cache oblivious. Historically, good performance has been obtained using cache-aware algorithms, but we shall exhibit several optimal cache-oblivious algorithms.

To illustrate the notion of cache awareness, consider the problem of multiplying two n × n matrices A and B to produce their n × n product C. We assume that the three matrices are stored in row-major order, as shown in Figure 2(a). We further assume that n is "big," i.e., n > L, in order to simplify the analysis. The conventional way to multiply matrices on a computer with caches is to use a blocked algorithm [19, p. 45]. The idea is to view each matrix M as consisting of (n/s) × (n/s) submatrices M_ij (the blocks), each of which has size s × s, where s is a tuning parameter. The following algorithm implements this strategy:

BLOCK-MULT(A, B, C, n)
1  for i ← 1 to n/s
2    do for j ← 1 to n/s
3      do for k ← 1 to n/s
4        do ORD-MULT(A_ik, B_kj, C_ij, s)

The ORD-MULT(A, B, C, s) subroutine computes C ← C + AB on s × s matrices using the ordinary O(s³) algorithm. (This algorithm assumes for simplicity that s evenly divides n, but in practice s and n need have no special relationship, yielding more complicated code in the same spirit.)

Depending on the cache size of the machine on which BLOCK-MULT is run, the parameter s can be tuned to make the algorithm run fast, and thus BLOCK-MULT is a cache-aware algorithm. To minimize the cache complexity, we choose s to be the largest value such that the three s × s submatrices simultaneously fit in cache. An s × s submatrix is stored on Θ(s + s²/L) cache lines. From the tall-cache assumption (1), we can see that s = Θ(√Z). Thus, each of the calls to ORD-MULT runs with at most Θ(Z/L) = Θ(s²/L) cache misses needed to bring the three matrices into the cache. Consequently, the cache complexity of the entire algorithm is Θ(1 + n²/L + (n/√Z)³(Z/L)) = Θ(1 + n²/L + n³/(L√Z)), since the algorithm has to read n² elements, which reside on ⌈n²/L⌉ cache lines.

Matrix multiplication, however, also admits a cache-oblivious algorithm that requires no tuning parameters such as the s in BLOCK-MULT. We present such an algorithm, which works on general rectangular matrices, in Section 2. The problems of computing a matrix transpose and of performing an FFT also succumb to remarkably simple algorithms, which are described in Section 3. Cache-oblivious sorting poses a more formidable challenge. In Sections 4 and 5, we present two sorting algorithms, one based on mergesort and the other on distribution sort, both of which are optimal in both work and cache misses.

The ideal-cache model makes the perhaps-questionable assumptions that there are only two levels in the memory hierarchy, that memory is managed automatically by an optimal cache-replacement strategy, and that the cache is fully associative. We address these assumptions in Section 6, showing that to a certain extent, these assumptions entail no loss of generality. Section 7 discusses related work, and Section 8 offers some concluding remarks, including some preliminary empirical results.

2. Matrix multiplication

This section describes and analyzes a cache-oblivious algorithm for multiplying an m × n matrix by an n × p matrix using Θ(mnp) work and incurring Θ(m + n + p + (mn + np + mp)/L + mnp/(L√Z)) cache misses. These results require the tall-cache assumption (1) for matrices stored in row-major layout format, but the assumption can be relaxed for certain other layouts. We also show that Strassen's algorithm [31] for multiplying n × n matrices, which uses Θ(n^(lg 7)) work, incurs Θ(1 + n²/L + n^(lg 7)/(L√Z)) cache misses.

In [9] with others, two of the present authors analyzed an optimal divide-and-conquer algorithm for n × n matrix multiplication that contained no tuning parameters, but we did not study cache-obliviousness per se. That algorithm can be extended to multiply rectangular matrices. To multiply an m × n matrix A and an n × p matrix B, the REC-MULT algorithm halves the largest of the three dimensions and recurs according to one of the following three cases (semicolons separate vertically stacked blocks):

  [A1; A2] B = [A1 B; A2 B],         (2)
  [A1 A2] [B1; B2] = A1 B1 + A2 B2,  (3)
  A [B1 B2] = [A B1  A B2].          (4)
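The three cases above can be sketched directly in code. The following is a minimal Python illustration (not the authors' implementation) of the REC-MULT recursion: it halves whichever of m, n, p is largest and recurs, falling back to the ordinary triple loop on small subproblems. The base-case cutoff of 16 is an arbitrary choice for the sketch.

```python
def rec_mult(A, B):
    """Multiply A (m x n) by B (n x p), recursively halving the
    largest dimension, per cases (2)-(4) in the text."""
    m, n, p = len(A), len(B), len(B[0])
    if max(m, n, p) <= 16:
        # Base case: ordinary O(mnp) multiplication.
        return [[sum(A[i][k] * B[k][j] for k in range(n))
                 for j in range(p)] for i in range(m)]
    if m >= n and m >= p:
        # Case (2): split A into top and bottom halves.
        h = m // 2
        return rec_mult(A[:h], B) + rec_mult(A[h:], B)
    if n >= m and n >= p:
        # Case (3): split the inner dimension and add the two products.
        h = n // 2
        A1 = [row[:h] for row in A]
        A2 = [row[h:] for row in A]
        C1 = rec_mult(A1, B[:h])
        C2 = rec_mult(A2, B[h:])
        return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(C1, C2)]
    # Case (4): split B into left and right halves.
    h = p // 2
    B1 = [row[:h] for row in B]
    B2 = [row[h:] for row in B]
    C1 = rec_mult(A, B1)
    C2 = rec_mult(A, B2)
    return [r1 + r2 for r1, r2 in zip(C1, C2)]
```

A cache-efficient implementation would recurse on index ranges in place rather than copying submatrices; the copies here only keep the sketch short.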
Recommended publications
  • Engineering Cache-Oblivious Sorting Algorithms
    Engineering Cache-Oblivious Sorting Algorithms. Master's Thesis by Kristoffer Vinther, June 2003. Abstract: Modern computers are far more sophisticated than simple sequential programs can lead one to believe; instructions are not executed sequentially and in constant time. In particular, the memory of a modern computer is structured in a hierarchy of increasingly slower, cheaper, and larger storage. Accessing words in the lower, faster levels of this hierarchy can be done virtually immediately, but accessing the upper levels may cause delays of millions of processor cycles. Consequently, recent developments in algorithm design have focused on algorithms that seek to minimize accesses to the higher levels of the hierarchy, and much experimental work has shown that such algorithms can yield significantly better performance. However, these algorithms are designed and implemented with one very specific level in mind, making it infeasible to adapt them to multiple levels or to use them efficiently on different architectures. To alleviate this, the notion of cache-oblivious algorithms was developed. The goal of a cache-oblivious algorithm is to be optimal in its use of the memory hierarchy without using specific knowledge of the hierarchy's structure. This automatically makes the algorithm efficient on all levels of the hierarchy and on all implementations of such hierarchies. Experimental work on these types of algorithms remains sparse, however. In this thesis, we present a thorough theoretical and experimental analysis of known optimal cache-oblivious sorting algorithms. We develop our own simpler variants and present the first optimal cache-oblivious sorting algorithm with sub-linear working space.
  • Cache-Oblivious Algorithms by Harald Prokop
    Cache-Oblivious Algorithms by Harald Prokop. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Science at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, June 1999. © Massachusetts Institute of Technology 1999. All rights reserved. Author: Department of Electrical Engineering and Computer Science, May 21, 1999. Certified by: Charles E. Leiserson, Professor of Computer Science and Engineering, Thesis Supervisor. Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students. Abstract: This thesis presents "cache-oblivious" algorithms that use asymptotically optimal amounts of work, and move data asymptotically optimally among multiple levels of cache. An algorithm is cache oblivious if no program variables dependent on hardware configuration parameters, such as cache size and cache-line length, need to be tuned to minimize the number of cache misses. We show that the ordinary algorithms for matrix transposition, matrix multiplication, sorting, and Jacobi-style multipass filtering are not cache optimal. We present algorithms for rectangular matrix transposition, FFT, sorting, and multipass filters, which are asymptotically optimal on computers with multiple levels of caches. For a cache with size Z and cache-line length L, where Z = Ω(L²), the number of cache misses for an m × n matrix transpose is Θ(1 + mn/L). The number of cache misses for either an n-point FFT or the sorting of n numbers is Θ(1 + (n/L)(1 + log_Z n)).
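The Θ(1 + mn/L) transpose bound quoted above is achieved by divide and conquer. As a rough, hypothetical Python sketch (not taken from the thesis): split the larger dimension of the current block until it is small, then transpose it directly. The cutoff of 16 stands in for "small enough to fit in cache."

```python
def co_transpose(A, B, r0, c0, m, n):
    """Transpose the m x n block of A at (r0, c0) into B at (c0, r0),
    recursively splitting the larger dimension."""
    if m + n <= 16:
        # Base case: transpose the small block element by element.
        for i in range(m):
            for j in range(n):
                B[c0 + j][r0 + i] = A[r0 + i][c0 + j]
    elif m >= n:
        # Split the rows into two halves and recurse on each.
        h = m // 2
        co_transpose(A, B, r0, c0, h, n)
        co_transpose(A, B, r0 + h, c0, m - h, n)
    else:
        # Split the columns into two halves and recurse on each.
        h = n // 2
        co_transpose(A, B, r0, c0, m, h)
        co_transpose(A, B, r0, c0 + h, m, n - h)
```

Every base-case block is roughly square and small, so once a block's rows are cached, all of its elements are used before eviction; no cache size appears anywhere in the code.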
  • Design Implement Analyze Experiment
    Felix Putze and Peter Sanders: Algorithmics — design, implement, analyze, experiment. Course Notes, Algorithm Engineering, TU Karlsruhe, October 19, 2009. Preface: These course notes cover a lecture on algorithm engineering for the basic toolbox that Peter Sanders has been giving at Universität Karlsruhe since 2004. The text is compiled from slides, scientific papers and other manuscripts. Most of this material is in English, so that language was adopted as the main language; however, some parts are in German. The primary sources of our material are given at the beginning of each chapter; please refer to the original publications for further references. This document is still work in progress. Please report bugs of any type (content, language, layout, ...) to [email protected]. Thank you! Contents: 1. What is Algorithm Engineering? (Introduction; State of research and open problems). 2. Data Structures (Arrays & Lists; External Lists; Stacks, Queues & Variants). 3. Sorting (Quicksort Basics; Refined Quicksort; Lessons from experiments; Super Scalar Sample Sort; Multiway Merge Sort; Sorting with parallel disks; Internal work of Multiway Mergesort; Experiments). 4. Priority Queues (Introduction; Binary Heaps; External Priority Queues; Addressable Priority Queues). 5. External Memory Algorithms (Introduction; The external memory model and things we already saw ...
  • Chapter 3 Cache-Oblivious Algorithms
    Portable High-Performance Programs by Matteo Frigo. Laurea, Università di Padova (1992); Dottorato di Ricerca, Università di Padova (1996). Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, June 1999. © Matteo Frigo, MCMXCIX. All rights reserved. The author hereby grants to MIT permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part, and to grant others the right to do so. Author: Department of Electrical Engineering and Computer Science, June 22, 1999. Certified by: Charles E. Leiserson, Professor of Computer Science and Engineering, Thesis Supervisor. Accepted by: Arthur C. Smith, Departmental Committee on Graduate Students. Copyright © 1999 Matteo Frigo. Permission is granted to make and distribute verbatim copies of this thesis provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this thesis under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this thesis into another language, under the above conditions for modified versions, except
  • Cache-Oblivious and Data-Oblivious Sorting and Applications
    Cache-Oblivious and Data-Oblivious Sorting and Applications. T-H. Hubert Chan∗, Yue Guo†, Wei-Kai Lin†, Elaine Shi†. Abstract: Although external-memory sorting has been a classical algorithms abstraction and has been heavily studied in the literature, perhaps somewhat surprisingly, when data-obliviousness is a requirement, even very rudimentary questions remain open. Prior to our work, it was not even known how to construct a comparison-based, external-memory oblivious sorting algorithm that is optimal in IO-cost. We make a significant step forward in our understanding of external-memory, oblivious sorting algorithms. Not only do we construct a comparison-based, external-memory oblivious sorting algorithm that is optimal in IO-cost; our algorithm is also cache-agnostic, in that the algorithm need not know the storage hierarchy's internal parameters such as the cache and cache-line sizes. Our result immediately implies a cache-agnostic ORAM construction whose asymptotic IO-cost matches the best known cache-aware scheme. Last but not least, we propose and adopt a new and stronger security notion for external-memory, oblivious algorithms, and argue that this new notion is desirable for resisting possible cache-timing attacks. Thus our work also lays a foundation for the study of oblivious algorithms in the cache-agnostic model. (∗Department of Computer Science, The University of Hong Kong. †Computer Science Department, Cornell University. [email protected]) Introduction: In data-oblivious algorithms, a client (also known as a CPU) performs some computation over sensitive data, such that a possibly computationally bounded adversary, who can observe data access patterns, gains no information about the input or the data.
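To make the access-pattern requirement concrete, here is a small, hypothetical Python illustration of data-obliviousness (not the paper's construction): an odd-even transposition sort whose sequence of compared index pairs is fixed by the input length alone, so an adversary watching which positions are touched learns nothing about the values.

```python
def oblivious_sort(a):
    """Odd-even transposition sort: the sequence of index pairs visited
    depends only on len(a), never on the data, so the memory access
    pattern reveals nothing about the input values."""
    n = len(a)
    for rnd in range(n):
        # Alternate between even-indexed and odd-indexed pairs.
        for i in range(rnd % 2, n - 1, 2):
            # Compare-exchange at a fixed pair of positions.
            lo, hi = min(a[i], a[i + 1]), max(a[i], a[i + 1])
            a[i], a[i + 1] = lo, hi
    return a
```

Using min/max rather than a branch keeps even the instruction trace data-independent in spirit. Note this sketch does Θ(n²) work, far from the IO-optimal constructions the abstract describes; it only illustrates what "oblivious" means.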