The Performance of Runtime Data Cache Prefetching in a Dynamic Optimization System ∗


Jiwei Lu, Howard Chen, Rao Fu, Wei-Chung Hsu, Bobbie Othmer, Pen-Chung Yew
Department of Computer Science and Engineering
University of Minnesota, Twin Cities
{jiwei, chenh, rfu, hsu, bothmer, yew}@cs.umn.edu

Dong-Yuan Chen
Microprocessor Research Lab
Intel Corporation
[email protected]

∗ This work is supported in part by the U.S. National Science Foundation under grants CCR-0105574 and EIA-0220021, and grants from Intel, Hewlett Packard and Unisys.

Abstract

Traditional software controlled data cache prefetching is often ineffective due to the lack of runtime cache miss and miss address information. To overcome this limitation, we implement runtime data cache prefetching in the dynamic optimization system ADORE (ADaptive Object code REoptimization). Its performance has been compared with static software prefetching on the SPEC2000 benchmark suite. Runtime cache prefetching shows better performance. On an Itanium 2 based Linux workstation, it can increase performance by more than 20% over static prefetching on some benchmarks. For benchmarks that do not benefit from prefetching, the runtime optimization system adds only 1%-2% overhead. We have also collected cache miss profiles to guide static data cache prefetching in the ORC compiler. With that information the compiler can effectively avoid generating prefetches for loops that hit well in the data cache.

1. Introduction

Software controlled data cache prefetching is an efficient way to hide cache miss latency. It has been very successful for dense matrix oriented numerical applications. However, for other applications that include indirect memory references, complicated control structures and recursive data structures, the performance of software cache prefetching is often limited due to the lack of cache miss and miss address information. In this paper, we try to keep the applicability of software data prefetching by using cache miss profiles and a runtime optimization system.

1.1. Impact of Program Structures

We first give a simple example to illustrate several issues involved in software controlled data cache prefetching. Fig. 1 shows a typical loop nest used to perform a matrix multiplication. Experienced programmers know how to use cache blocking, loop interchange, unroll and jam, or library routines to increase performance. However, typical programmers rely on the compiler to conduct such optimizations.

    void matrix_multiply(long A[N][N], long B[N][N], long C[N][N])
    {
        int i, j, k;
        for ( i = 0; i < N; ++i )
            for ( j = 0; j < N; ++j )
                for ( k = 0; k < N; ++k )
                    A[i][j] += B[i][k] * C[k][j];
    }

    Figure 1. Matrix Multiplication

In this simple example, the innermost loop is a candidate for static cache prefetching. Note that the three arrays in the example are passed as parameters to the function. This introduces the possibility of aliasing and results in less precise data dependence analysis. We used two compilers for Itanium systems in this study: the Intel® C/C++ Itanium® Compiler (i.e., ECC V7.0) [15] and the ORC open research compiler 2.0 [26]. Both compilers have implemented static cache prefetch optimizations that are turned on at O3. For the example function above, the ECC compiler generates cache prefetches for the innermost loop, but the ORC compiler does not. Moreover, if the example is re-coded to treat the three arrays as global variables, the ECC compiler generates much more efficient code (over 5x faster on an Itanium 2 machine) by using loop unrolling. This simple example shows that program structure has a significant impact on the performance of static cache prefetching. Some program structures require more complicated analysis (such as interprocedural analysis) to conduct efficient and effective cache prefetching.
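For concreteness, the transformations named above can be sketched on Figure 1 itself. The following hand-tiled variant is an illustration added here, not code from the paper; the block size BS is an invented tuning parameter, and N is assumed to be a compile-time constant as in Figure 1.

    /* Hand-tiled variant of Figure 1 (illustrative sketch only; the paper's
     * point is that typical programmers leave such rewrites to the compiler).
     * Each BS x BS tile of B and C is reused while it is still cache-resident. */
    #define BS 64                              /* block size: tune to the target cache */

    void matrix_multiply_blocked(long A[N][N], long B[N][N], long C[N][N])
    {
        int ii, jj, kk, i, j, k;
        for (ii = 0; ii < N; ii += BS)
            for (kk = 0; kk < N; kk += BS)     /* k moved outside j: unit-stride   */
                for (jj = 0; jj < N; jj += BS) /* access to A and C innermost      */
                    for (i = ii; i < ii + BS && i < N; ++i)
                        for (k = kk; k < kk + BS && k < N; ++k)
                            for (j = jj; j < jj + BS && j < N; ++j)
                                A[i][j] += B[i][k] * C[k][j];
    }

Interchanging the k and j loops gives unit-stride access to A[i][j] and C[k][j] in the innermost loop, and tiling keeps each tile's working set within the data cache; both effects reduce the misses that prefetching would otherwise have to hide.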
1.2. Impact of Runtime Memory Behavior

It is in general difficult to predict memory working set size and reference behavior at compile time. Consider Gaussian Elimination, for example. The execution of the loop nest usually generates frequent cache misses at the beginning of the execution and few cache misses near the end of the loop nest. Initially, the sub-matrix to be processed is usually too large to fit in the data caches; hence frequent cache misses will be generated. As the sub-matrices to be processed get smaller, they may fit in the cache and produce fewer cache misses. It is hard for the compiler to generate one binary that meets the cache prefetch requirements at both ends.

The memcpy library routine is another example. In some applications, a call to memcpy may involve a large amount of data movement and intensive cache misses. In other applications, the calls to the memcpy routine have few cache misses. Once again, it is not easy¹ to provide one memcpy routine that meets all the requirements.

¹ A compiler may generate multiple versions of memcpy to handle different cases.

    void daxpy( double *x, double *y, double a, int n )
    {
        int i;
        for ( i = 0; i < n; ++i )
            y[i] += a * x[i];
    }

    Figure 2. DAXPY
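To pin down the Gaussian Elimination example, the behavior described in Section 1.2 arises from a loop nest shaped roughly like the following generic, no-pivoting sketch (added here for illustration; it is not the paper's benchmark code):

    /* Forward elimination without pivoting.  For small k the i/j loops sweep
     * a nearly N x N sub-matrix that overflows the data caches; as k grows,
     * the active sub-matrix shrinks until it fits entirely in cache.  One
     * static prefetch decision is therefore wrong at one end or the other. */
    void gaussian_eliminate(double a[N][N], double b[N])
    {
        int i, j, k;
        for (k = 0; k < N - 1; ++k)           /* pivot row */
            for (i = k + 1; i < N; ++i) {     /* rows below the pivot */
                double m = a[i][k] / a[k][k];
                for (j = k + 1; j < N; ++j)   /* shrinking working set */
                    a[i][j] -= m * a[k][j];
                b[i] -= m * b[k];
                a[i][k] = 0.0;
            }
    }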
1.3. Impact of Micro-architectural Constraints

Microarchitecture can also limit the effectiveness of software controlled cache prefetching. For example, the issue bandwidth of memory operations, the memory bus and bank bandwidth [14][34], the miss latency, the non-blocking degree of caches, and the memory request buffer size all affect the effectiveness of software cache prefetching. Consider the DAXPY loop in Fig. 2, for example. On the latest Itanium 2 processor, two iterations of this loop can be computed in one cycle (2 ldfpds, 2 stfds and 2 fmas, which fit in two MMF bundles). If prefetches must be generated for both the x and y arrays, the requirement of two extra memory operations per iteration would exceed the "two bundles per cycle" constraint. Since the array references in this example exhibit unit stride, the compiler could unroll the loop to reduce the number of prefetch instructions. For non-unit stride loops, prefetch instructions are more difficult to reduce. Stride-based prefetching is easy to perform efficiently. Prefetching for pointer-chasing references and indirect memory references [23][25][29][35][22] is relatively challenging, since such prefetches incur a higher overhead and must be used more carefully. A typical compiler would not attempt high-overhead prefetching unless there is sufficient evidence that a code region has frequent data cache misses.

Due to the lack of cache miss information, static cache prefetches are usually less aggressive in order to avoid undesirable runtime overhead. Therefore we attempt to conduct data cache prefetching at runtime through a dynamic optimization system. This dynamic optimization system automatically monitors the performance of the executing binary, identifies the performance-critical loops/traces with frequent data cache misses, re-optimizes the selected traces by inserting cache prefetch instructions, and patches the binary to redirect subsequent execution to the optimized traces. Using cache miss and cache miss address information collected from the hardware performance monitors, our runtime system conducts more efficient and effective software cache prefetching, and significantly increases the performance of several benchmark programs over static cache prefetching. Also in this paper, as a secondary experiment for comparison, we feed cache miss profiles back to the ORC compiler so that the compiler performs cache prefetch optimizations only for the code regions/loops that have frequent cache misses.

The remainder of the paper is organized as follows. Section 2 introduces the performance monitoring features on Itanium processors and the framework of our runtime optimization system. Section 3 discusses the implementation of the runtime data cache prefetching. In Section 4, we present the performance evaluation of runtime prefetching and a profile-guided static prefetching approach. Section 5 highlights related work and Section 6 offers the conclusion and future work.

2. Runtime Optimization and ADORE

The ADORE (ADaptive Object code REoptimization) system is a trace-based, user-mode dynamic optimization system. Unlike other dynamic optimization systems [10][3][7], its profiling and trace selection are based on sampling of hardware performance monitors. This section explains the performance monitoring, trace selection and optimization mechanisms in ADORE. Runtime prefetching is part of the ADORE system and will be discussed in detail in Section 3.

2.1. Performance Monitoring and Runtime Sampling on Itanium Processors

Intel's Itanium architecture offers extensive hardware support for performance monitoring [19]. The Itanium 2 processor provides more than one hundred counters to measure performance events such as memory latency, branch prediction rate, CPU cycles, retired-instruction counts, and pipeline stalls. Two usage models are supported by these performance counters: workload characterization, which gives the overall runtime cycle breakdown, and profiling, which provides information for identifying program bottlenecks. Both models are programmable and fully supported in the latest IA64 Linux kernel. Based on these two models, Stéphane Eranian developed a generic kernel interface called perfmon [28] to help programmers access the IA64 PMU (Performance Monitoring Unit) and collect performance profiles on Itanium processors. The sampling of ADORE is built on top of this kernel interface.

[Figure 3. ADORE Framework — the main thread (main program, patcher, trace cache) and the DynOpt thread (trace selector, optimizer, phase detector), communicating through the User Event Buffer (UEB) and the kernel-space System Sample Buffer (SSB).]

In ADORE, sample collection is achieved through a signal handler communicating with the perfmon interface.

    int __libc_start_main (…) {
        …
        dyn_open( );
        on_exit (dyn_close);
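The startup fragment above is truncated in this excerpt. As a hedged, self-contained sketch of the same idea — dyn_open() spawning the optimizer thread at program start and registering dyn_close() to run at exit — one could write the following; all function bodies are placeholders rather than ADORE code, and a GCC constructor attribute stands in for the paper's modified __libc_start_main():

    /* Sketch of ADORE-style startup hooking (illustration, not the actual
     * ADORE sources; assumes GCC/glibc for on_exit and the constructor
     * attribute, and -lpthread at link time). */
    #include <pthread.h>
    #include <stdlib.h>

    static pthread_t dynopt_thread;

    static void *dynopt_main(void *arg)
    {
        /* Placeholder: drain the kernel sample buffer, select hot traces,
         * optimize them, and patch the binary (Sections 2 and 3). */
        (void)arg;
        return NULL;
    }

    static void dyn_close(int status, void *arg)
    {
        /* Placeholder: stop sampling and release the trace cache. */
        (void)status; (void)arg;
    }

    static void dyn_open(void)
    {
        pthread_create(&dynopt_thread, NULL, dynopt_main, NULL);
        on_exit(dyn_close, NULL);   /* mirrors on_exit(dyn_close) above */
    }

    /* Run dyn_open() before main(), much as the paper's modified
     * __libc_start_main() does. */
    static void __attribute__((constructor)) dyn_init(void)
    {
        dyn_open();
    }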