
R-Cache: A Highly Set-Associative In-Package Cache using Memristive Arrays

Payman Behnam, Arjun Pal Chowdhury, and Mahdi Nazm Bojnordi
School of Computing, University of Utah, Salt Lake City, UT, USA
{behnam, arjunmax, bojnordi}@cs.utah.edu

This work was supported in part by the National Science Foundation (NSF) under Grant CCF-1755874 and in part by the University of Utah Seed.

Abstract—Over the past decade, three-dimensional die stacking technology has been considered for building large-scale in-package memory systems. In particular, in-package DRAM cache has been considered as a promising solution for high-bandwidth and large-scale cache architectures. There are, however, significant challenges such as limited energy efficiency, costly tag management, and physical limitations on scalability that need to be effectively addressed before in-package caches can be adopted in real-world applications. This paper proposes R-Cache, an in-package cache made by 3D die stacking of memristive memory arrays to alleviate the above-mentioned challenges. Our simulation results on a set of memory-intensive parallel applications indicate that R-Cache outperforms the state-of-the-art proposals for in-package caches. R-Cache improves performance by 38% and 27% over the state-of-the-art direct mapped and set associative cache architectures, respectively. Moreover, R-Cache results in average energy reductions of 40% and 27% as compared to the direct mapped and set associative cache systems.

I. INTRODUCTION

The ever growing use of data-intensive applications across various fields of science and engineering has made large memories and cache architectures inevitable components of future exascale computing platforms. While on-chip memories are limited by power and area consumption, off-chip memory systems suffer from the so-called bandwidth wall due to pin count limitations [1] and expensive off-chip data movement. Recently, three-dimensional (3D) die stacking has enabled integrating multiple DRAM dice within the same processor package that can reach memory bandwidths in excess of terabytes per second (TBps) using through-silicon via (TSV) technologies. High bandwidth memory (HBM), hybrid memory cube (HMC), and wide IO (WIO) are examples of in-package memory systems mainly designed for DRAM [2]–[4]. These recent achievements in memory technologies have motivated numerous researchers to design giga-scale DRAM-based in-package cache architectures.

Despite the high bandwidth interfacing technology, 3D die-stacked DRAM access is almost identical to that of conventional DDRx memories [5]–[12]. Moreover, current proposals for in-package DRAM provide limited capacity. This limitation may become an even more serious concern for future data-intensive applications, as DRAM designers cannot provide a clear road map on how the technology will continue to scale [1]. Furthermore, the DRAM power consumption (due to periodic refresh, precharge, static power in large row buffers, and dynamic power for row activation and data movement over large wires) has resulted in serious thermal issues in 3D die-stacked memories [13]. Apart from these problems, DRAM caches need to manage a large amount of metadata for dirty, valid, tag, and error correction code (ECC) bits, which has become a major problem for bandwidth and energy efficiency [5]–[12].

As a result, DRAM-alternative technologies have been considered for in-package memories, such as recent proposals that examine 3D memory systems with resistive RAM (RRAM) [14]–[16]. Among all DRAM alternatives, RRAM is a non-volatile memory technology based on memristive devices that exhibit FLASH-like density and fast read speeds comparable to DRAM [17]. Similar to 3D die-stacked DRAM, building an in-package cache with RRAM remains a significant challenge, mainly due to the excessive capacity and bandwidth consumption for managing metadata. This paper examines R-Cache, an in-package die-stacked memory architecture specifically designed for highly set-associative caches using memristive arrays. The proposed architecture enables energy-efficient and fast in-package caching through the following contributions.

• R-Cache provides a 3D memory structure specifically designed for dense and highly set-associative caches. The proposed architecture benefits from the search capabilities in the resistive memory arrays to realize parallel searches for highly set-associative cache lookup. To the best of our knowledge, this is the first proposal that specifically designs the memory layers and re-purposes the interface for building an RRAM-based HBM cache.

• Through in-memory tag checking and block management, the proposed R-Cache architecture provides superior bandwidth efficiency and performance compared to the state-of-the-art direct mapped and set associative cache architectures. Moreover, RRAM eliminates the need for activation and refresh energy, which results in reducing the execution time, energy consumption, and bandwidth utilization of the in-package cache and off-chip memory.

Our simulation results on a set of 12 memory-intensive applications indicate that R-Cache improves performance by 38% and 27% over the state-of-the-art direct mapped and set associative cache architectures, respectively. Moreover, R-Cache results in 40% and 27% reductions in the average energy consumption compared to the direct mapped and set associative cache systems.

II. BACKGROUND AND MOTIVATION

This section provides the necessary background knowledge on high bandwidth memory, existing DRAM-based cache architectures, and resistive memory technologies.

A. High Bandwidth Memory

Figure 1 illustrates an example high bandwidth memory (HBM) system including eight 3D-stacked DRAM dice and a processor die placed in the same package. The bottom DRAM layer and the processor die are connected to the silicon interposer through micro bumps [18]. Through-silicon vias (TSVs) are used to connect the DRAM layers, which are typically divided into 4-8 vertically independent vaults (channels) [19]. Each vault is horizontally divided into 8-16 banks, which results in up to 128 banks to exploit data-level parallelism in user applications. The banks comprise a hierarchy of sub-banks and sub-arrays with a global row buffer. The DRAM technology necessitates a sequence of precharge, activate, read, and write operations prior to transferring data for every memory request. Typically, one HBM controller per vault is needed to orchestrate data movement between the processor and DRAM layers, which involves (1) issuing the right set of commands for each memory request and (2) enforcing the necessary timing constraints among all the generated commands. This paper proposes a high bandwidth RRAM interface realized through the conventional HBM commands with a new set of timing parameters specifically computed for R-Cache operations.

Fig. 1. Illustrative example of a DRAM-based cache architecture using the high bandwidth memory interface.
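To make the controller's two duties concrete, the sketch below models how a per-vault controller might expand a single request into a legal command sequence while honoring per-bank timing. It is a minimal illustration under assumed names and numbers: VaultController, Bank, and the tRP/tRCD/tCL cycle counts are our own illustrative choices, not values from the paper or the HBM specification.

```python
# Minimal sketch of a per-vault controller translating memory requests into
# DRAM-style command sequences under timing constraints. Class names and the
# timing values (in controller cycles) are illustrative assumptions.

from dataclasses import dataclass, field

TIMING = {"tRP": 15, "tRCD": 15, "tCL": 15}  # precharge, activate, access delays

@dataclass
class Bank:
    open_row: int | None = None   # currently activated row, if any
    ready_at: int = 0             # cycle when the bank can accept a command

@dataclass
class VaultController:
    banks: list = field(default_factory=lambda: [Bank() for _ in range(16)])

    def issue(self, now, bank_id, row, is_write):
        """Return (commands, finish_cycle) for one request to this vault."""
        bank = self.banks[bank_id]
        cmds, t = [], max(now, bank.ready_at)      # duty (2): timing constraints
        if bank.open_row != row:                   # row-buffer miss
            if bank.open_row is not None:          # close the old row first
                cmds.append(("precharge", bank_id))
                t += TIMING["tRP"]
            cmds.append(("activate", bank_id, row))
            t += TIMING["tRCD"]
            bank.open_row = row
        cmds.append(("write" if is_write else "read", bank_id, row))
        t += TIMING["tCL"]                         # duty (1): right command set
        bank.ready_at = t                          # serialize this bank
        return cmds, t

ctrl = VaultController()
print(ctrl.issue(now=0, bank_id=3, row=42, is_write=False))
# -> ([('activate', 3, 42), ('read', 3, 42)], 30)
```

The same skeleton fits the proposed RRAM interface: since R-Cache keeps the conventional HBM commands and only changes the timing parameters, it would amount to swapping in a different TIMING table.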
B. HBM Cache Architectures

Most existing proposals for gigascale cache architectures focus on developing control mechanisms to alleviate the bandwidth and storage overheads of tag checking. Block-granularity DRAM caches have millions of cache blocks, each of which is associated with a tag to make cache lookup possible. Numerous architectures, such as Unison [7], TDC [9], FTDC [10], and Banshee [11], address this problem by increasing the access granularity from less than a hundred bytes to multi-kilobyte pages at the cost of more energy and bandwidth management [5], [6], [8], [12]. This paper leverages the higher density and in-memory processing capabilities of the RRAM technology to build a highly set-associative, fine-grained, gigascale cache architecture for data-intensive applications. Figure 2 illustrates the fundamental differences between R-Cache and the existing proposals for direct mapped and set-associative cache architectures.

[Figure 2: block diagrams of (a) a direct mapped HBM cache with a bandwidth optimizer and an on-chip tag checker, (b) a set associative HBM cache with a way coordinator and an on-chip tag checker, and (c) the proposed R-Cache with 128-way sets and an in-memory tag checker.]

Fig. 2. Fundamental differences between the existing direct mapped (a), set associative (b), and the proposed R-Cache (c) architectures.

Figure 2(a) shows the existing direct mapped HBM-based cache architectures that store the tag and data bits in HBM, while the tag checking is carried out on the processor die. Loh and Hill propose a DRAM-based architecture that accesses multiple tags or a data block during every HBM access [5]. Instead, Alloy [6] proposes to read both the tag and data (TAD) bits of every cache block within an HBM access. Therefore, the Alloy cache ameliorates latency and bandwidth at the cost of reducing hit rate. BEAR [8] proposes to further optimize the HBM bandwidth by reducing the amount of unnecessary traffic between the processor, cache, and main memory.
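The trade-off between serialized tag reads and Alloy-style TAD reads can be seen in a small functional model. The sketch below is illustrative only: the dict-backed HBM, NUM_SETS, and the 8-byte tag burst are our assumptions, not parameters from [5] or [6]; it merely counts bytes moved and round trips per lookup.

```python
# Minimal sketch contrasting a tag-then-data lookup (two dependent HBM
# accesses) with an Alloy-style TAD read (one access streaming the tag and
# data together). Sizes and the dict-backed "HBM" model are assumptions.

NUM_SETS = 1 << 20           # one block per set (direct mapped)
BLOCK, TAG = 64, 8           # bytes of data and tag metadata per block

hbm = {}                     # set index -> (stored_tag, data)

def split(addr):
    block = addr // BLOCK
    return block % NUM_SETS, block // NUM_SETS   # (set index, tag)

def lookup_tag_then_data(addr):
    """Two serialized accesses: the tag read decides whether data is fetched."""
    idx, tag = split(addr)
    entry = hbm.get(idx)                  # access 1: TAG bytes on the bus
    if entry is None or entry[0] != tag:
        return None, TAG                  # miss after burning tag bandwidth
    return entry[1], TAG + BLOCK          # access 2: BLOCK bytes

def lookup_tad(addr):
    """One access always moves TAG + BLOCK bytes, hit or miss."""
    idx, tag = split(addr)
    entry = hbm.get(idx)                  # single access: TAG + BLOCK bytes
    if entry is None or entry[0] != tag:
        return None, TAG + BLOCK
    return entry[1], TAG + BLOCK

idx, tag = split(0xABCD00)
hbm[idx] = (tag, b"x" * BLOCK)            # install the block
print(lookup_tag_then_data(0xABCD00)[1])  # 72 bytes over two round trips
print(lookup_tad(0xABCD00)[1])            # 72 bytes in one round trip
```

On a hit both paths move the same bytes, but tag-then-data needs two dependent round trips, whereas TAD resolves in one access at the cost of moving the full TAD unit even on a miss and of the lower hit rate of a direct mapped organization.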
To improve the limited hit rate of direct mapped caches, Accord [12] has been recently proposed to develop a set-associative HBM cache. Figure 2(b) shows the key components of such a DRAM-based cache architecture. Similarly to the direct mapped cache, tag checking is carried out on the processor die; however, the block address mapping is modified to form cache sets in HBM. Such a block organization, on the one hand, increases the cache hit rate; on the other hand, it exerts significant bandwidth overheads for checking all of the tags within every cache set. The Accord cache alleviates this problem by (1) limiting the number of cache ways and (2) employing a way coordinator [20] that reduces the number of unnecessary tag checks per set.

Figure 2(c) shows the proposed R-Cache architecture that addresses both the bandwidth and latency problems by architecting the memory layers with RRAM arrays. R-Cache introduces an energy-efficient in-memory tag checking mechanism that eliminates unnecessary HBM accesses for transferring tag bits between HBM and the processor.
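A functional view of the in-memory tag checker clarifies what changes relative to Figures 2(a) and 2(b): each set behaves like a small content-addressable memory, so the tag comparison never leaves the memory layer. The sketch below is a behavioral model under assumed names (RCacheSet, WAYS = 128); in the actual design the match is performed electrically inside the memristive arrays rather than by a software loop.

```python
# Functional sketch of in-memory tag checking across a highly associative
# set. All ways are compared against the requested tag inside the memory
# layer, and only the matching data block (or a miss flag) ever crosses
# the HBM interface. The 128-way layout and names are illustrative.

WAYS = 128

class RCacheSet:
    def __init__(self):
        # parallel arrays: one tag and one data block per way
        self.tags = [None] * WAYS
        self.data = [None] * WAYS

    def search(self, tag):
        """In-array match: conceptually all WAYS comparisons happen at once,
        so no tag bits are shipped to the processor die."""
        for way, stored in enumerate(self.tags):   # models a parallel compare
            if stored == tag:
                return self.data[way]              # hit: one block moves
        return None                                # miss: only a flag moves

    def insert(self, tag, block, victim_way):
        self.tags[victim_way] = tag
        self.data[victim_way] = block

s = RCacheSet()
s.insert(tag=0x1F, block=b"cache line", victim_way=0)
print(s.search(0x1F))   # b'cache line'  (hit resolved inside the array)
print(s.search(0x2A))   # None           (miss, no tag traffic to the CPU)
```

Because only the matching block or a miss indication crosses the interface, the per-lookup tag traffic of the on-die checkers in Figures 2(a) and 2(b) disappears, which is the source of the bandwidth and energy savings summarized in the introduction.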