Stash: Have Your Scratchpad and Cache It Too*

Rakesh Komuravelli†  Matthew D. Sinclair†  Johnathan Alsop†  Muhammad Huzaifa†  Maria Kotsifakou†  Prakalp Srivastava†  Sarita V. Adve†‡  Vikram S. Adve†‡
† University of Illinois at Urbana-Champaign   ‡ École Polytechnique Fédérale de Lausanne
[email protected]

* This work was supported in part by Intel Corporation through the Illinois/Intel Parallelism Center at Illinois, the National Science Foundation under grants CCF-1018796 and CCF-1302641, a Qualcomm Innovation Fellowship for Komuravelli and Sinclair, and by the Center for Future Architectures Research (C-FAR), one of the six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ISCA '15, June 13-17, 2015, Portland, OR, USA. © 2015 ACM. ISBN 978-1-4503-3402-0/15/06...$15.00. DOI: http://dx.doi.org/10.1145/2749469.2750374

Abstract

Heterogeneous systems employ specialization for energy efficiency. Since data movement is expected to be a dominant consumer of energy, these systems employ specialized memories (e.g., scratchpads and FIFOs) for better efficiency for targeted data. These memory structures, however, tend to exist in local address spaces, incurring significant performance and energy penalties due to inefficient data movement between the global and private spaces. We propose an efficient heterogeneous memory system where specialized memory components are tightly coupled in a unified and coherent address space. This paper applies these ideas to a system with CPUs and GPUs with scratchpads and caches.

We introduce a new memory organization, stash, that combines the benefits of caches and scratchpads without incurring their downsides. Like a scratchpad, the stash is directly addressed (without tags and TLB accesses) and provides compact storage. Like a cache, the stash is globally addressable and visible, providing implicit data movement and increased data reuse. We show that the stash provides better performance and energy than a cache and a scratchpad, while enabling new use cases for heterogeneous systems. For 4 microbenchmarks, which exploit new use cases (e.g., reuse across GPU compute kernels), compared to scratchpads and caches, the stash reduces execution cycles by an average of 27% and 13% respectively and energy by an average of 53% and 35%. For 7 current GPU applications, which are not designed to exploit the new features of the stash, compared to scratchpads and caches, the stash reduces cycles by 10% and 12% on average (max 22% and 31%) respectively, and energy by 16% and 32% on average (max 30% and 51%).

1. Introduction

Specialization is a natural path to energy efficiency. There has been a significant amount of work on compute specialization and it seems clear that future systems will support some collection of heterogeneous compute units (CUs). However, the memory hierarchy is expected to become a dominant consumer of energy as technology continues to scale [18, 22]. Thus, efficient data movement is essential for energy-efficient heterogeneous systems.

To provide efficient data movement, modern heterogeneous systems use specialized memories; e.g., scratchpads and FIFOs. These memories often provide better efficiency for specific access patterns, but they also tend to exist in disjoint, private address spaces. Transferring data to/from these private address spaces incurs inefficiencies that can negate the benefits of specialization. We propose an efficient heterogeneous memory system where specialized memory components are tightly coupled in a unified and coherent address space.

Although industry has recently transitioned to more tightly coupled heterogeneous systems with a single unified address space and coherent caches [1, 3, 20], specialized components such as scratchpads are still within private, incoherent address spaces.

In this paper, we focus on heterogeneous systems with CPUs and GPUs where the GPUs access both coherent caches and private scratchpads. We propose a new memory organization, stash, that combines the performance and energy benefits of caches and scratchpads. Like a scratchpad, the stash provides compact storage and does not have conflict misses or overheads from tags and TLB accesses. Like a cache, the stash is globally addressable and visible, enabling implicit, on-demand, and coherent data movement and increasing data reuse. Table 1 compares caches and scratchpads (Section 2 discusses the stash).

1.1. Caches

Caches are a common memory organization in modern systems. Their software transparency makes them easy to program, but caches have several inefficiencies:

Indirect, hardware-managed addressing: Cache loads and stores specify addresses that hardware must translate to determine the physical location of the accessed data. This indirect addressing implies that each cache access (a hit or a miss) incurs (energy) overhead for TLB lookups and tag comparisons. Virtually tagged caches do not require TLB lookups on hits, but they incur additional overhead, including dealing with synonyms, page mapping and protection changes, and cache coherence [8]. Furthermore, the indirect, hardware-managed addressing also results in unpredictable hit rates due to cache conflicts, causing pathological performance (and energy) anomalies, a notorious problem for real-time systems.

Inefficient, cache line based storage: Caches store data at fixed cache line granularities, which wastes SRAM space when a program does not access the entire cache line (e.g., when a program phase traverses an array of large objects but accesses only one field in each object).

Feature              Benefit                                                Cache   Scratchpad   Stash
Directly addressed   No address translation hardware access (on hits)       ✗*      ✓            ✓
                     No tag access                                          ✗       ✓            ✓
                     No conflict misses                                     ✗       ✓            ✓
Compact storage      Efficient use of SRAM storage                          ✗       ✓            ✓
Global addressing    Implicit data movement from/to structure               ✓       ✗            ✓
                     No pollution of other memories                         ✓       ✗            ✓
                     On-demand loads into structures                        ✓       ✗            ✓
Global visibility    Lazy writebacks to global address space (AS)           ✓       ✗            ✓
                     Reuse across compute kernels and application phases    ✓       ✗            ✓
* If physically tagged.
Table 1: Comparison of cache, scratchpad, and stash.

1.2. Scratchpads

Scratchpads (referred to as shared memory in CUDA) are local memories that are managed in software, either by the programmer or through compiler support. Unlike caches, scratchpads are directly addressed so they do not have overhead from TLB lookups or tag comparisons (this provides significant savings: 34% area and 40% power [7] or more [26]). Direct addressing also eliminates the pathologies of conflict misses and provides a fixed access latency (100% hit rate). Scratchpads also provide compact storage since the software brings only useful data into the scratchpad. These features make scratchpads appealing, especially for real-time systems [5, 37, 38]. However, scratchpads also have some inefficiencies:

Not Globally Addressable: Scratchpads use a separate address space disjoint from the global address space, with no hardware mapping between the two. Thus, extra instructions must explicitly move data between the two spaces, incurring performance and energy overhead. Furthermore, in current systems the additional loads and stores typically move data via the core's L1 cache and its registers, polluting these resources and potentially evicting (spilling) useful data. Scratchpads also do not perform well for applications with on-demand loads because today's scratchpads preload all elements. In applications with control/data dependent accesses, only a few of the preloaded elements will be accessed.

Figure 1a illustrates these overheads for code that reads and updates one field, fieldX, of an array of structs, aosA: the application copies each fieldX into a scratchpad copy and then writes back the result to aosA. The bottom of Figure 1a shows some of the corresponding steps in hardware. First, the hardware must explicitly copy fieldX into the scratchpad. To achieve this, the application issues an explicit load of the corresponding global address to the L1 cache (event 1). On an L1 miss, the hardware brings fieldX's cache line into the L1, polluting it as a result (events 2 and 3). Next, the hardware sends the data value from the L1 to the core's register (event 4). Finally, the application issues an explicit store instruction to write the value from the register into the corresponding scratchpad address (event 5). At this point, the scratchpad has a copy of fieldX and the application can finally access the data (events 6 and 7). Once the application is done modifying the data, the dirty scratchpad data is explicitly written back to the global address space, requiring loads from the scratchpad and stores into the cache (not shown in the figure).

We refer to this scratchpad usage mode, where data is moved explicitly from/to the global space, as the global-unmapped mode. Scratchpads can also be used in temporary mode for private, temporary values. Temporary values do not require global address loads or writebacks because they are discarded after their use (they trigger only events 6 and 7).

Not Globally Visible: A scratchpad is visible only to its local CU. Thus, dirty data in the scratchpad must be explicitly written back to the corresponding global address before it can be used by other CUs. Further, all global copies in the scratchpad must be freshly loaded if they could have been written by other CUs.

1.3. Stash

To address the inefficiencies of caches and scratchpads we introduce a new memory organization, stash, which combines the best properties of the cache and scratchpad organizations. Similar to a scratchpad, the stash is software managed, directly addressable, and can compactly map non-contiguous global
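The explicit data movement of the global-unmapped mode can be sketched in plain C. This is an illustrative host-side sketch, not code from the paper: the `Elem` layout, array size, and field names merely follow the spirit of the Figure 1a example (one useful field per large object, copied in and written back by hand):

```c
#include <stddef.h>

#define N 1024

/* Array-of-structs element, as in the Figure 1a example: only fieldX
 * is read and updated, but a cache would move whole 64-byte lines. */
typedef struct {
    int  fieldX;
    char otherFields[60]; /* untouched by this program phase */
} Elem;

static Elem aosA[N];    /* lives in the global address space          */
static int  scratch[N]; /* scratchpad copy: dense, fieldX values only */

void kernel(void) {
    /* Global-unmapped mode, copy-in: each iteration is a global load
     * (through the L1, polluting it) followed by an explicit store
     * into the scratchpad (events 1-5 in Figure 1a). */
    for (size_t i = 0; i < N; i++)
        scratch[i] = aosA[i].fieldX;

    /* Compute on the compact scratchpad copy (events 6 and 7). */
    for (size_t i = 0; i < N; i++)
        scratch[i] += 1;

    /* Explicit write-back of the dirty scratchpad data to the global
     * address space (scratchpad loads plus cache stores). */
    for (size_t i = 0; i < N; i++)
        aosA[i].fieldX = scratch[i];
}
```

The two copy loops are pure overhead relative to a cache, which would access `aosA[i].fieldX` in place, and they are exactly what a globally addressable, globally visible structure can eliminate; a temporary-mode buffer avoids them too, but only for values that never need to be globally visible.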
