Introducing Hierarchy-Awareness in Replacement and Bypass Algorithms for Last-Level Caches

Mainak Chaudhuri, Indian Institute of Technology, Kanpur 208016, INDIA ([email protected])
Jayesh Gaur, Intel Architecture Group, Bangalore 560103, INDIA ([email protected])
Nithiyanandan Bashyam, Intel Architecture Group, Bangalore 560103, INDIA ([email protected])
Sreenivas Subramoney, Intel Architecture Group, Bangalore 560103, INDIA ([email protected])
Joseph Nuzman, Intel Architecture Group, Haifa 31015, ISRAEL ([email protected])

ABSTRACT

The replacement policies for the last-level caches (LLCs) are usually designed based on the access information available locally at the LLC. These policies are inherently sub-optimal due to lack of information about the activities in the inner levels of the hierarchy. This paper introduces cache hierarchy-aware replacement (CHAR) algorithms for inclusive LLCs (or L3 caches) and applies the same algorithms to implement efficient bypass techniques for exclusive LLCs in a three-level hierarchy. In a hierarchy with an inclusive LLC, these algorithms mine the L2 cache eviction stream and decide if a block evicted from the L2 cache should be made a victim candidate in the LLC based on the access pattern of the evicted block. Ours is the first proposal that explores the possibility of using a subset of L2 cache eviction hints to improve the replacement algorithms of an inclusive LLC. The CHAR algorithm classifies the blocks residing in the L2 cache based on their reuse patterns and dynamically estimates the reuse probability of each class of blocks to generate selective replacement hints to the LLC. Compared to the static re-reference interval prediction (SRRIP) policy, our proposal offers an average reduction of 10.9% in LLC misses and an average improvement of 3.8% in instructions retired per cycle (IPC) for twelve single-threaded applications. The corresponding reduction in LLC misses for one hundred 4-way multi-programmed workloads is 6.8%, leading to an average improvement of 3.9% in throughput. Finally, our proposal achieves an 11.1% reduction in LLC misses and a 4.2% reduction in parallel execution cycles for six 8-way threaded shared memory applications compared to the SRRIP policy.

In a cache hierarchy with an exclusive LLC, our CHAR proposal offers an effective algorithm for selecting the subset of blocks (clean or dirty) evicted from the L2 cache that need not be written to the LLC and can be bypassed. Compared to the TC-AGE policy (analogue of SRRIP for exclusive LLCs), our best exclusive LLC proposal improves average throughput by 3.2% while saving an average of 66.6% of data transactions from the L2 cache to the on-die interconnect for one hundred 4-way multi-programmed workloads. Compared to an inclusive LLC design with an identical hierarchy, this corresponds to an average throughput improvement of 8.2% with only 17% more data write transactions originating from the L2 cache.

Categories and Subject Descriptors

B.3 [Memory Structures]: Design Styles

General Terms

Algorithms, design, measurement, performance

Keywords

Last-level caches, replacement policy, bypass algorithm

PACT'12, September 19–23, 2012, Minneapolis, Minnesota, USA. Copyright 2012 ACM 978-1-4503-1182-3/12/09 ...$15.00.

1. INTRODUCTION

The replacement policy for a particular level of a cache hierarchy is usually designed based on the access information (frequency, recency, etc.) available only at that level of the hierarchy. Such a level-centric design methodology for the last-level cache (LLC) fails to incorporate two important pieces of information. First, the reuses taking place in the inner levels of the hierarchy are not propagated to the LLC. Past research showed that propagating even a small fraction of these reuses to the LLC can significantly increase the traffic in the on-die interconnect [7]. Second, the clean evictions from the inner levels are usually not propagated to the LLC in an inclusive or a non-inclusive/non-exclusive hierarchy.

This paper, for the first time, studies the possibility of using an appropriately chosen subset of L2 cache evictions as hints for improving the replacement algorithms of an inclusive LLC (or L3 cache) in a three-level hierarchy. The central idea is that when the L2 cache residency of a block comes to an end, one can estimate its future liveness based on its reuse pattern observed during its residency in the L2 cache. Particularly, if we can deduce that the next reuse distance of such a block is significantly beyond the LLC reach, we can notify the LLC that this block should be marked a potential victim candidate in the LLC. An early eviction of such a block can help retain more blocks in the LLC with relatively shorter reuse distances. In a hierarchy with an exclusive LLC, this liveness information can be used to decide the subset of the L2 cache evictions that need not be allocated in the LLC.

To estimate the merit of making an inclusive LLC aware of the L2 cache evictions, we conduct an oracle-assisted experiment where the LLC runs the two-bit SRRIP policy [8] (this is our baseline in this paper). The two-bit SRRIP policy fills a block into the LLC with a re-reference prediction value (RRPV) of two and promotes it to RRPV of zero on a hit. A block with RRPV three (i.e., large re-reference distance) is chosen as the victim in a set. If none exists, the RRPVs of all the blocks in the set are increased in steps of one until a block with RRPV three is found. Ties are broken by victimizing the block with the least physical way id. In our oracle-assisted experiment, on an L2 cache eviction of a data block, a next forward use distance oracle determines the relative order between the next forward use distances of the evicted block and the current SRRIP victim in the LLC set to which the L2 cache victim belongs. If the next forward use distance of the L2 cache victim is bigger, it is marked a potential victim by changing its RRPV to three in the LLC.

The left bar in each group of Figure 1 shows the number of LLC misses of this oracle-assisted policy normalized to the baseline SRRIP. The bar on the right in each group shows the number of LLC misses in Belady's optimal algorithm [1, 20] normalized to the baseline. These experiments are carried out on an offline cache hierarchy simulator that takes as input the entire L2 cache access trace of twelve single-threaded applications drawn from the SPEC 2000 and SPEC 2006 suites. The L2 cache access trace of each application is collected for a representative set of one billion dynamic instructions chosen using the SimPoint toolset [22].

[Figure 1: Number of LLC misses with oracle-assisted LLC replacement policies normalized to baseline SRRIP in an inclusive LLC. Bars show SRRIP-evict-oracle and Belady for 172.mgrid, 173.applu, 179.art, 183.equake, 254.gap, 401.bzip2, 403.gcc, 429.mcf, 456.hmmer, 462.libq, 464.h264ref, 482.sphinx3, and AVG.]

The fact that 60% of the L2 cache evictions are dead corresponds well with the already known fact that the blocks brought into the LLC have low use counts [21]. For the data in Figure 2, the average use count per LLC block is about 1.67 (reciprocal of dead percentage). In summary, the L2 cache eviction stream is rich in information regarding the liveness of the cache blocks. Accurate separation of the live blocks from the dead ones in the L2 cache eviction stream can bridge a significant portion of the gap between the baseline and the optimal replacement policy for the LLC.

In Section 2, we present our cache hierarchy-aware replacement (CHAR) algorithms for inclusive LLCs. Figure 3 shows a high-level implementation of our CHAR algorithm for inclusive LLCs. The dead hint detector hardware is part of the L2 cache controller. It consumes the L2 cache eviction stream and identifies the eviction addresses that should be sent to the LLC as dead hints. As usual, it sends all dirty evictions to the LLC, some of which may be marked as dead hints. We note that the general framework of CHAR algorithms is not tied to any specific LLC replacement policy.

Section 2 also discusses how our CHAR proposal seamlessly applies to exclusive LLCs as well. In such designs, we use the dead hints to decide which blocks evicted from the L2 cache can be bypassed and need not be allocated in the exclusive LLC (blocks are allocated and written to an exclusive LLC when they are evicted from the L2 cache). This leads to bandwidth saving in the on-die interconnect and effective capacity allocation in the LLC. Further, we show that simple variants of our CHAR proposal can be used to dynamically decide if a block should be cached in exclusive mode or non-inclusive/non-exclusive mode in the LLC. The former mode optimizes the effective on-die cache capacity, while the latter
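The two-bit SRRIP mechanics described above (fill at RRPV two, promote to RRPV zero on a hit, victimize the RRPV-three block at the least way id, aging the whole set until one exists) can be sketched as a small software model. This is an illustrative sketch of the policy as described in the text, not the simulator used in the paper; the class name, way count, and method names are our own.

```python
class SRRIPSet:
    """Software model of one LLC set under two-bit SRRIP."""

    MAX_RRPV = 3   # two-bit saturating counter: RRPV 3 means "distant reuse"
    FILL_RRPV = 2  # long re-reference interval prediction on fill

    def __init__(self, num_ways=16):
        self.ways = [None] * num_ways          # block tag held in each way
        self.rrpv = [self.MAX_RRPV] * num_ways

    def access(self, addr):
        """Return True on a hit, False on a miss (with fill and eviction)."""
        if addr in self.ways:
            self.rrpv[self.ways.index(addr)] = 0   # promote on hit
            return True
        victim = self._select_victim()
        self.ways[victim] = addr
        self.rrpv[victim] = self.FILL_RRPV
        return False

    def _select_victim(self):
        # Use an empty way first; otherwise pick the least way id holding
        # RRPV 3, aging every block by one step until such a block exists.
        if None in self.ways:
            return self.ways.index(None)
        while self.MAX_RRPV not in self.rrpv:
            self.rrpv = [r + 1 for r in self.rrpv]
        return self.rrpv.index(self.MAX_RRPV)
```

For example, in a 2-way set, after a hit on block A (RRPV zero) and a fill of block B (RRPV two), a miss on block C ages the set once and victimizes B, leaving A resident.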

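The oracle-assisted demotion rule reduces to one comparison per L2 eviction: demote the evicted block to RRPV three when its next forward use distance exceeds that of the current SRRIP victim in its LLC set. A minimal sketch, assuming the next-forward-use distances have already been precomputed from the L2 access trace; the function name and data shapes here are hypothetical, not the paper's interface.

```python
import math

def mark_dead_hints(evictions, next_use):
    """Oracle-assisted dead-hint filter (illustrative sketch).

    evictions: list of (evicted_addr, srrip_victim_addr) pairs, one per
               L2 cache eviction, where srrip_victim_addr is the block
               SRRIP would currently evict from the same LLC set.
    next_use:  mapping from address to its next forward use distance in
               the trace (math.inf if the block is never used again).

    Returns the evicted addresses whose RRPV the LLC should set to three,
    i.e. those reused later than the current SRRIP victim.
    """
    return [evicted for evicted, victim in evictions
            if next_use[evicted] > next_use[victim]]
```

A block that is never referenced again (distance infinity) is always demoted, which matches the intuition that dead blocks should be evicted early to retain blocks with shorter reuse distances.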