RDIP: Return-Address-Stack Directed Instruction Prefetching

Aasheesh Kolli (University of Michigan), Ali Saidi (ARM), Thomas F. Wenisch (University of Michigan)
[email protected], [email protected], [email protected]

ABSTRACT

L1 instruction fetch misses remain a critical performance bottleneck, accounting for up to 40% slowdowns in server applications. Whereas instruction footprints typically fit within last-level caches, they overwhelm L1 caches, whose capacity is limited by latency constraints. Past work has shown that server application instruction miss sequences are highly repetitive. By recording, indexing, and prefetching according to these sequences, nearly all L1 instruction misses can be eliminated. However, existing schemes require impractical storage and considerable complexity to correct for minor control-flow variations that disrupt sequences. In this work, we simplify and reduce the energy requirements of accurate instruction prefetching via two observations: (1) program context as captured in the call stack correlates strongly with L1 instruction misses, and (2) the return address stack (RAS), already present in all high performance processors, succinctly summarizes program context. We propose RAS-Directed Instruction Prefetching (RDIP), which associates prefetch operations with signatures formed from the contents of the RAS. RDIP achieves 70% of the potential speedup of an ideal L1 cache, outperforms a prefetcher-less baseline by 11.5%, and reduces energy and complexity relative to sequence-based prefetching. RDIP's performance is within 2% of the state-of-the-art Proactive Instruction Fetch, with nearly 3X reduction in storage and 1.9X reduction in energy overheads.

Categories and Subject Descriptors: B.3.2 [Memory Structures]: Design Styles - cache memories

General Terms: Design, Performance

Keywords: RAS, caching, instruction fetch, prefetching, accuracy

MICRO'46, December 7-11, 2013, Davis, CA, USA. Copyright 2013 ACM 978-1-4503-2638-4/13/12 ...$15.00.

[Figure 1: Speedup under next-2-line (N2L) and an idealized instruction cache (Ideal) normalized to a cache without prefetching (NoP). Bar chart over the workloads SSJ, gem5, HD-wdcnt, HD-teraread, MC-friendfeed, MC-microblog, and their geometric mean (GMEAN); y-axis: performance relative to NoP, 0.8 to 1.5.]

1. INTRODUCTION

Recent research shows that L1 instruction fetch misses remain a critical performance bottleneck in traditional server workloads [9, 8, 11, 12, 32], cloud computing workloads [7, 19], and even smartphone applications [10], accounting for up to 40% slowdowns. Whereas instruction footprints typically fit comfortably within last-level caches, they overwhelm L1 caches, whose capacity is tightly limited by latency constraints. Intermediate instruction caches may reduce miss penalties, but are either too small to capture the entire instruction footprint or have high access latencies, which are then exposed on the execution critical path [7, 11]. Unlike L1 data misses, out-of-order mechanisms are ineffective in hiding instruction miss penalties. Figure 1 shows the performance gap between a 32kB L1 cache without prefetching ("NoP"), the same cache with a conventional next-2-line prefetcher ("N2L"), and an idealized (unbounded size, but single-cycle latency; "Ideal") cache for a range of cloud computing and server applications. Instruction prefetching has the potential to improve performance by an average of 16.2%, but next-line prefetching falls well short of this potential.
(For a brief description of the workloads and corresponding performance metrics, see Section 4.)

The importance of instruction prefetching has long been recognized. Nearly all shipping processors, from embedded- to server-class systems, include some form of next-line or stream-based instruction prefetcher [13, 14, 26, 30, 31, 37]. Such prefetchers are simple and require negligible storage, but fail to prefetch across fetch discontinuities (e.g., function calls and returns), which incur numerous stalls. Early research attempts at more sophisticated instruction prefetch leverage the branch predictor to run ahead of fetch [5, 24, 27, 28, 33, 36]. However, poor branch predictability and insufficient lookahead limit the effectiveness of these designs [9]. To our knowledge, such instruction prefetchers have never been commercially deployed.

More recent hardware prefetching proposals rely on the observation that instruction cache miss or instruction execution sequences are highly repetitive [8, 9]. These designs log the miss/execution streams in a large circular buffer and maintain an index on this buffer to locate and replay past sequences on subsequent triggers. The most recent proposal, Proactive Instruction Fetch (PIF) [8], can eliminate nearly all L1 instruction misses, but requires impractical storage and substantial complexity to correct for minor control-flow variations that disrupt otherwise-repetitive sequences. Storage (and hence, energy) requirements grow rapidly with code footprint because these mechanisms log all instruction-block fetches and must maintain an index by block address.

In this work, we seek to simplify and reduce the storage and energy requirements of accurate hardware instruction prefetching. We exploit the observation that program context, as captured in the call stack, correlates strongly with L1 instruction misses. Moreover, we note that modern high-performance processors already contain a structure that succinctly summarizes program context: the return address stack (RAS). Upon each call or return operation, RAS-Directed Instruction Prefetching (RDIP) encodes the RAS state into a compact signature comprising the active call stack and the direction/destination of the current call or return. We find a strong correlation between these signatures and L1 instruction misses, and associate each signature with a sequence of misses recently seen for that signature. By generating signatures on both calls and returns, we construct signatures with fine-grained context information that can prefetch not only on traversals down the call graph, but can also prefetch accurately within large functions and when deep call hierarchies unwind, a key advantage over past hardware and software prefetching schemes that exploit caller-callee relationships [2, 20]. Furthermore, we find that RAS signatures are highly predictable; current

2. RELATED WORK

Due to the performance criticality of instruction cache misses, there is a rich body of prior work on instruction prefetching. Even early computer systems included next-line instruction prefetchers to exploit the common case of sequential instruction fetch [1]. This early concept evolved into next-N-line and instruction stream prefetchers [14, 26, 30, 37], which make use of a variety of trigger events and control mechanisms to modulate the aggressiveness and lookahead of the prefetcher. However, next-line prefetchers provide poor accuracy and lack prefetch lookahead for codes with frequent branching and function calls. Nevertheless, because of their simplicity, next-line and stream prefetchers are widely deployed in industrial designs (e.g., [13]). To our knowledge, more sophisticated hardware prefetchers proposed in the literature have never been deployed.

To improve prefetch accuracy and lookahead in branch- and call-heavy code, researchers have proposed several branch predictor based prefetchers [5, 27, 28, 33]. Run-ahead execution [22], wrong path instruction prefetching [24], and speculative threading mechanisms [34, 39] also prefetch instructions by using control flow speculation to explore ahead of the instruction fetch unit. As shown by Ferdman et al. [8], these prefetchers suffer from interference caused by wrong path execution and insufficient lookahead when the branch predictor traverses loop branches. RDIP instead bases its predictions on non-speculative RAS state, which is more stable. It is important to note that execution-path based correlation has been shown to be effective for branch prediction [23], dead-block prediction [17], and last touch prediction [16], but none of these leverage the RAS to make their respective predictions.

The discontinuity prefetcher [32] handles fetch discontinuities, but its lookahead is limited to one fetch discontinuity at a time to prevent run-away growth in the number of prefetch candidates. The branch history guided prefetcher (BHGP) [33] keeps track of the branch instructions executed and near-future instruction cache misses, and prefetches them upon the next occurrence of the branch. BHGP cannot differentiate among invocations of the same branch, which results in either unnecessary prefetches or reduced coverage. In RDIP, by using the RAS (instead of a single entry) to make prefetching decisions, different invocations of a function call are uniquely identified, leading to more accurate prefetching.

TIFS [9] and PIF [8] address the limitations of branch-predictor-directed prefetching by directly recording the instruction fetch miss and instruction commit sequences,
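To make the signature idea from the introduction concrete, the following Python sketch models a RAS-signature prefetcher in the spirit of RDIP: on each call or return it folds the top few RAS entries together with the direction and destination of the triggering instruction into a compact signature, and a table maps each signature to the miss blocks last observed under it. The class name, structure sizes, and hash function are illustrative assumptions, not the paper's actual hardware design.

```python
# Simplified software model of a RAS-signature prefetcher (illustrative
# sketch; sizes, hash, and names are assumptions, not RDIP's hardware).

class RASSignaturePrefetcher:
    def __init__(self, ras_depth=16, sig_entries=4):
        self.ras = []                   # modeled return address stack
        self.ras_depth = ras_depth
        self.sig_entries = sig_entries  # top-of-stack entries folded per signature
        self.table = {}                 # signature -> ordered list of miss blocks
        self.current_sig = None

    def _signature(self, target, is_call):
        # Fold the top few RAS entries with the direction (call vs. return)
        # and destination of the triggering instruction into one value.
        sig = 1 if is_call else 0
        for addr in self.ras[-self.sig_entries:]:
            sig = (sig * 31 + addr) & 0xFFFFFFFF
        return (sig ^ target) & 0xFFFFFFFF

    def on_call(self, return_addr, call_target):
        """Push the return address, form a new signature, and return the
        block addresses previously associated with that signature."""
        self.ras.append(return_addr)
        self.ras = self.ras[-self.ras_depth:]   # bounded, like a hardware RAS
        self.current_sig = self._signature(call_target, is_call=True)
        return self.table.get(self.current_sig, [])

    def on_return(self):
        """Pop the RAS, form a signature for the unwound context, and
        return that signature's predicted miss blocks."""
        target = self.ras.pop() if self.ras else 0
        self.current_sig = self._signature(target, is_call=False)
        return self.table.get(self.current_sig, [])

    def on_icache_miss(self, block_addr):
        # Training: attribute each observed miss to the most recent signature.
        if self.current_sig is None:
            return
        blocks = self.table.setdefault(self.current_sig, [])
        if block_addr not in blocks:
            blocks.append(block_addr)
```

On a second traversal of the same call path, the call site produces the same signature as before, so the misses logged on the first traversal become prefetch candidates; because signatures are also formed on returns, the model can likewise issue predictions as a call hierarchy unwinds.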
