
Global Management of Cache Hierarchies

Mohamed Zahran
City University of New York
New York, NY, USA
[email protected]

Sally A. McKee
Chalmers University of Technology
Gothenburg, Sweden
[email protected]

ABSTRACT
Cache memories currently treat all blocks as if they were equally important, but this assumption of equal importance is not always valid. For instance, not all blocks deserve to be in the L1 cache. We therefore propose globalized block placement, and we present a global placement algorithm that manages blocks in a cache hierarchy by deciding where in the hierarchy an incoming block should be placed. Our technique makes decisions by adapting to the access patterns of different blocks.
The contributions of this paper are fourfold. First, we motivate our solution by demonstrating the importance of a globalized placement scheme. Second, we present a method to categorize cache block behavior into one of four categories. Third, we present one potential design exploiting this categorization. Finally, we demonstrate the performance of the proposed design. For the SPEC CPU benchmark suite, the scheme enhances overall system performance (IPC) by an average of 12% over a traditional LRU scheme and reduces traffic between the L1 and L2 caches by an average of 20%, while using a table as small as 3KB.

Categories and Subject Descriptors
C.1.0 [Computer Systems Organization]: Processor Architectures - General

General Terms
Design, Performance

Keywords
cache memory, memory hierarchy

1. INTRODUCTION
As the gap between processor and memory speeds increases, effective and efficient cache hierarchies become more and more crucial. The currently common method for addressing this memory wall problem exploits several levels of cache (and usually some form of hardware prefetching). Unfortunately, designing an efficient cache hierarchy is anything but trivial, and requires choosing among myriad parameters at each level. One pivotal design decision controls block placement: where to put an incoming block. Placement policies affect overall cache performance, not only in terms of hits and misses, but also in terms of bandwidth utilization and response times. A poor policy can increase the number of misses, trigger higher traffic to lower levels of the hierarchy, and increase miss penalties. Given these problems, much research and development effort has been devoted to finding effective cache placement and replacement policies. Almost all designs resulting from these studies deal with policies within a single cache. Although such local policies can be efficient within a cache, they cannot take into account interactions among the several caches in the (ever deeper) hierarchy. Given this, we advocate a holistic view of the cache hierarchy.
Cache policies usually assume that all blocks are of the same importance, deserving a place in all caches, since inclusive policies are usually enforced. In our observation, this is not true. A block that is referenced only once does not need to be in cache, and the same holds for a block referenced very few times over a long period (especially for the L1 cache). Overall performance depends not only on how much data the hierarchy holds, but also on which data it retains. The working sets of modern applications are much larger than all the caches in most hierarchies (exceptions being very large, off-chip L3 caches, for instance), which makes deciding which blocks to keep where in the hierarchy of crucial importance. We address precisely this problem.
In this paper we segregate block behaviors into four categories. We show how each category must be treated in terms of cache hierarchy placement. Finally, we propose an architectural implementation that dynamically categorizes blocks and inserts them into the hierarchy based on their categories.
2. BACKGROUND AND RELATED WORK
Inclusion in the cache hierarchy has attracted attention from researchers since caches were introduced. Cache hierarchies have largely been inclusive for almost two decades; that is, L1 is a subset of L2, which is a subset of L3, and so on. This organization worked well before the sub-micron era, especially when single-core chips were the primary design choice. Uniprocessor cycle times were often large enough to hide cache access latencies.
With the advent of multiple cores on a chip [1, 2, 3], on-chip caches are increasing in number, size, and design sophistication, and private caches are decreasing in size. For instance, the IBM POWER4 architecture [4] has a 1.5MB L2 cache organized as three slices shared between its two processor cores; the IBM POWER5 has a 1.875MB L2 cache with a 36MB off-chip L3 [5]; the Intel Itanium [6] has a three-level, on-chip cache with a combined capacity of 3MB; and the Intel Core i7 (Nehalem) has an 8MB shared, inclusive L3 cache [7]. As the complexity of on-chip caches increases, the need to reduce miss rates grows in importance, as does the need to reduce access time (even for L1 caches, single-cycle access times are no longer possible).
Designers have traditionally maintained inclusion in the memory hierarchy for several reasons: for instance, in multiprocessor systems, inclusion simplifies memory controller and processor design by limiting the effects of cache coherence messages to higher levels of the memory hierarchy. Unfortunately, cache designs that enforce inclusion are inherently wasteful with respect to both space and bandwidth: every line in a lower level is duplicated in higher levels, and updates in lower levels trigger many more updates in other levels, wasting bandwidth. As the relative bandwidth onto a multiple-core chip decreases with the number of on-chip CPUs, and as the cache real estate per CPU shrinks, this problem has sparked a wave of proposals for non-inclusive cache hierarchies.
We can violate inclusion in two ways. The first is to have a non-inclusive cache, and the second is to have a mutually exclusive cache. For the former, we simply do not enforce inclusion: for instance, when a block is evicted from L2, its corresponding block is not evicted from L1. Most of the proposals in this category apply a replacement algorithm that is local to individual caches, and the motivation for such schemes is to develop innovative local replacement policies. Qureshi et al. [8] propose a replacement algorithm in which an incoming block is inserted in the LRU position instead of the MRU position, without enforcing inclusion, since blocks brought into the cache have been observed to move from MRU to LRU without being referenced again. The authors decouple block placement in the LRU stack from victim selection. Xie and Loh propose to also decouple block promotion on a reference [9] (a brief sketch of this insertion-point idea is given at the end of this section). All these techniques improve efficiency, but only at the level of individual caches: each cache acts individually, with no global view of the hierarchy. We instead propose schemes that are complementary to, and can be combined with, such local schemes, but our approach has a global view of the whole hierarchy.
Other proposals retain some fraction of the working set in cache so that this fraction can contribute to cache hits. Subramanian et al. [12] present another adaptive replacement policy: the cache switches between different replacement policies based on access behavior. Wong and Baer [13] propose techniques to close the gap between LRU and OPT replacement.
Not all cache misses are of equal importance (e.g., some data are required more quickly by the instructions that consume them, whereas others are required by instructions that are more latency tolerant). The amount of exploitable memory-level parallelism (MLP) [14, 15, 16] also affects application performance, and thus Qureshi et al. [17] propose an MLP-aware cache replacement policy. In this paper we propose a scheme that is globalized, adaptive, and complementary to most of the aforementioned techniques.
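To make the insertion-point idea concrete, the following sketch models the recency stack of a single cache set and parameterizes where an incoming block is placed: a conventional policy inserts at the MRU end, while an LRU-insertion policy in the spirit of [8] parks the block at the LRU end, so it must be re-referenced before it is promoted. This is only an illustrative sketch under our own naming, not the authors' implementation.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <deque>

    // Recency stack for one cache set: front = MRU, back = LRU.
    // Illustrative sketch of MRU- versus LRU-insertion; not the paper's code.
    class SetRecencyStack {
    public:
        SetRecencyStack(std::size_t ways, bool insertAtLRU)
            : ways_(ways), insertAtLRU_(insertAtLRU) {}

        // Returns true on a hit; a hit promotes the block to the MRU position.
        bool access(std::uint64_t tag) {
            auto it = std::find(stack_.begin(), stack_.end(), tag);
            if (it != stack_.end()) {
                stack_.erase(it);
                stack_.push_front(tag);      // promote on re-reference
                return true;
            }
            if (stack_.size() == ways_)      // miss: evict the LRU block if the set is full
                stack_.pop_back();
            if (insertAtLRU_)
                stack_.push_back(tag);       // LRU-insertion in the spirit of [8]
            else
                stack_.push_front(tag);      // conventional MRU insertion
            return false;
        }

    private:
        std::size_t ways_;
        bool insertAtLRU_;
        std::deque<std::uint64_t> stack_;
    };

Under LRU insertion, a block that streams through without being touched again stays at the LRU end and becomes the next victim, instead of displacing blocks with better reuse.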
3. BLOCK BEHAVIOR: A CASE STUDY
The performance of a cache hierarchy, and its effect on overall system performance, inherently depends on cache block behavior. For example, a rarely accessed block may evict a very heavily accessed block, resulting in higher miss rates. Sometimes, if the evicted block is dirty, higher bandwidth requirements result.
The behavior of a cache block can be summarized by two main characteristics: the number of times it is accessed, and the number of times it has been evicted and re-fetched. The first is an indication of the importance of the block, and the second shows how the block's accesses are distributed in time. As an example, Figure 1 shows two benchmarks from SPEC2000: twolf from SPECINT and art from SPECFP [18]. These two benchmarks are known to be memory bound [19]. The figure shows four histograms. Those on the left show the distribution of the total number of accesses to different blocks. For twolf the majority of blocks are accessed between 1,000 and 10,000 times, but for art the majority are accessed between 100 and 1,000 times. Some blocks are accessed very few times: more than 8,000 blocks are accessed fewer than 100 times. The histograms on the right show the number of block reuses, that is, the number of times a block is evicted and reloaded. Over 15,000 unique blocks in twolf and over 25,000 in art are loaded more than 1,000 times. Based on these observations, a block may be loaded very few times, and may be accessed very lightly in each epoch (the time between evictions).
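These two characteristics can be gathered with a pair of per-block counters, as in the sketch below: every reference bumps an access count, and every fill of a block that has been resident before bumps a reload count. Histograms like those of Figure 1 are then simply the distributions of these two counters over all blocks. The structure and names here are ours, shown only to make the bookkeeping explicit; the paper's hardware mechanism is described later.

    #include <cstdint>
    #include <unordered_map>

    // Per-block statistics in the spirit of Figure 1: how often a block is
    // referenced, and how often it returns to the cache after an eviction.
    // Illustrative bookkeeping only; field and class names are ours.
    struct BlockStats {
        std::uint64_t accesses = 0;   // total references to the block
        std::uint64_t reloads  = 0;   // fills after the block was evicted at least once
        bool everFilled        = false;
    };

    class BlockBehaviorTracker {
    public:
        void onAccess(std::uint64_t blockAddr) { stats_[blockAddr].accesses++; }

        void onFill(std::uint64_t blockAddr) {   // block is brought into the cache
            BlockStats& s = stats_[blockAddr];
            if (s.everFilled)                    // it was resident before and was evicted
                s.reloads++;
            s.everFilled = true;
        }

        const std::unordered_map<std::uint64_t, BlockStats>& stats() const {
            return stats_;
        }

    private:
        std::unordered_map<std::uint64_t, BlockStats> stats_;
    };

Crossing the two measures (heavily versus lightly accessed, rarely versus frequently reloaded) is one natural way to arrive at four block categories of the kind the paper goes on to use.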