Cache Memories

ALAN JAY SMITH
University of California, Berkeley, California 94720

Cache memories are used in modern, medium and high-speed CPUs to hold temporarily those portions of the contents of main memory which are (believed to be) currently in use. Since instructions and data in cache memories can usually be referenced in 10 to 25 percent of the time required to access main memory, cache memories permit the execution rate of the machine to be substantially increased. In order to function effectively, cache memories must be carefully designed and implemented. In this paper, we explain the various aspects of cache memories and discuss in some detail the design features and trade-offs. A large number of original, trace-driven simulation results are presented. Consideration is given to practical implementation questions as well as to more abstract design issues. Specific aspects of cache memories that are investigated include: the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, line size, store-through versus copy-back updating of main memory, cold-start versus warm-start miss ratios, multicache consistency, the effect of input/output through the cache, the behavior of split data/instruction caches, and cache size. Our discussion includes other aspects of memory system architecture, including translation lookaside buffers. Throughout the paper, we use as examples the implementation of the cache in the Amdahl 470V/6 and 470V/7, the IBM 3081, 3033, and 370/168, and the DEC VAX 11/780. An extensive bibliography is provided.

Categories and Subject Descriptors: B.3.2 [Memory Structures]: Design Styles--cache memories; B.3.3 [Memory Structures]: Performance Analysis and Design Aids; C.0 [Computer Systems Organization]: General; C.4 [Computer Systems Organization]: Performance of Systems

General Terms: Design, Experimentation, Measurement, Performance

Additional Key Words and Phrases: Buffer memory, paging, prefetching, TLB, store-through, Amdahl 470, IBM 3033, BIAS

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
© 1982 ACM 0010-4892/82/0900-0473 $00.75

Computing Surveys, Vol. 14, No. 3, September 1982

CONTENTS

INTRODUCTION
  Definition and Rationale
  Overview of Cache Design
  Cache Aspects
1. DATA AND MEASUREMENTS
  1.1 Rationale
  1.2 Trace-Driven Simulation
  1.3 Simulation Evaluation
  1.4 The Traces
  1.5 Simulation Methods
2. ASPECTS OF CACHE DESIGN AND OPERATION
  2.1 Cache Fetch Algorithm
  2.2 Placement Algorithm
  2.3 Line Size
  2.4 Replacement Algorithm
  2.5 Write-Through versus Copy-Back
  2.6 Effect of Multiprogramming: Cold-Start and Warm-Start
  2.7 Multicache Consistency
  2.8 Data/Instruction Cache
  2.9 Virtual Address Cache
  2.10 User/Supervisor Cache
  2.11 Input/Output through the Cache
  2.12 Cache Size
  2.13 Cache Bandwidth, Data Path Width, and Access Resolution
  2.14 Multilevel Cache
  2.15 Pipelining
  2.16 Translation Lookaside Buffer
  2.17 Translator
  2.18 Memory-Based Cache
  2.19 Specialized Caches and Cache Components
3. DIRECTIONS FOR RESEARCH AND DEVELOPMENT
  3.1 On-Chip Cache and Other Technology Advances
  3.2 Multicache Consistency
  3.3 Implementation Evaluation
  3.4 Hit Ratio versus Size
  3.5 TLB Design
  3.6 Cache Parameters versus Architecture and Workload
APPENDIX: EXPLANATION OF TRACE NAMES
ACKNOWLEDGMENTS
REFERENCES

INTRODUCTION

Definition and Rationale

Cache memories are small, high-speed buffer memories used in modern computer systems to hold temporarily those portions of the contents of main memory which are (believed to be) currently in use. Information located in cache memory may be accessed in much less time than that located in main memory (for reasons discussed throughout this paper). Thus, a central processing unit (CPU) with a cache memory needs to spend far less time waiting for instructions and operands to be fetched and/or stored. For example, in typical large, high-speed computers (e.g., Amdahl 470V/7, IBM 3033), main memory can be accessed in 300 to 600 nanoseconds; information can be obtained from a cache, on the other hand, in 50 to 100 nanoseconds. Since the performance of such machines is already limited in instruction execution rate by cache memory access time, the absence of any cache memory at all would produce a very substantial decrease in execution speed.

Virtually all modern large computer systems have cache memories; for example, the Amdahl 470, the IBM 3081 [IBM82, REIL82, GUST82], 3033, 370/168, 360/195, the Univac 1100/80, and the Honeywell 66/80. Also, many medium and small size machines have cache memories; for example, the DEC VAX 11/780, 11/750 [ARMS81], and PDP-11/70 [SIRE76, SNOW78], and the Apollo, which uses a Motorola 68000 microprocessor. We believe that within two to four years, circuit speed and density will progress sufficiently to permit cache memories in one-chip microcomputers. (On-chip addressable memory is planned for the Texas Instruments 99000 [LAFF81, ELEC81].) Even microcomputers could benefit substantially from an on-chip cache, since on-chip access times are much smaller than off-chip access times. Thus, the material presented in this paper should be relevant to almost the full range of computer architecture implementations.

The success of cache memories has been explained by reference to the "property of locality" [DENN72]. The property of locality has two aspects, temporal and spatial. Over short periods of time, a program distributes its memory references nonuniformly over its address space, and which portions of the address space are favored remain largely the same for long periods of time. This first property, called temporal locality, or locality by time, means that the information which will be in use in the near future is likely to be in use already. This type of behavior can be expected from program loops in which both data and instructions are reused. The second property, locality by space, means that portions of the address space which are in use generally consist of a fairly small number of individually contiguous segments of that address space. Locality by space, then, means that the loci of reference of the program in the near future are likely to be near the current loci of reference. This type of behavior can be expected from common knowledge of programs: related data items (variables, arrays) are usually stored together, and instructions are mostly executed sequentially. Since the cache memory buffers segments of information that have been recently used, the property of locality implies that needed information is also likely to be found in the cache.

Optimizing the design of a cache memory generally has four aspects:

(1) Maximizing the probability of finding a memory reference's target in the cache (the hit ratio),
(2) minimizing the time to access information that is indeed in the cache (access time),
(3) minimizing the delay due to a miss, and
(4) minimizing the overheads of updating main memory, maintaining multicache consistency, etc.

(All of these have to be accomplished under suitable cost constraints, of course.) There is also a trade-off between hit ratio and access time. This trade-off has not been sufficiently stressed in the literature and it is one of our major concerns in this paper.

In this paper, each aspect of cache memories is discussed at length and, where available, measurement results are presented. In order for these detailed discussions to be meaningful, a familiarity with many of the aspects of cache design is required. In the remainder of this section, we explain the operation of a typical cache memory, and then we briefly discuss several aspects of cache memory design. These discussions are expanded upon in Section 2. At the end of this paper, there is an extensive bibliography in which we have attempted to cite all relevant literature. Not all of the items in the bibliography are referenced in the paper, although we have referred to items there as appropriate. The reader may wish in particular to refer to BADE79, BARS72, GIBS67, and KAPL73 for other surveys of some aspects of cache design. CLAR81 is particularly interesting as it discusses the design details of a real cache.

[Figure 1. A typical CPU design and the S-unit. The figure shows main memory, the S-unit (containing the cache, translator, TLB, ASIT, and write-through buffers), and the I-unit and E-unit.]

There is usually a translator, which translates virtual to real memory addresses, and a TLB (translation lookaside buffer) which buffers (caches) recently generated (virtual address, real address) pairs. Depending on the machine design, there can be an ASIT (address space identifier table), a BIAS (buffer invalidation address stack), and some write-through buffers. Each of these is discussed in later sections of this paper.

Figure 2 is a diagram of portions of a typical S-unit, showing only the more important parts and data paths, in particular the cache and the TLB. This design is typical of that used by IBM (in the 370/168 and 3033) and by Amdahl (in the 470 series). Figure 3 is a flowchart that corresponds to the operation of the design in Figure 2. A discussion of this flowchart follows. The operation of the cache commences with the arrival of a virtual address, generally […]
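The trade-off between hit ratio and access time can be made concrete with the standard effective-access-time calculation, using the latencies quoted in the introduction (50 to 100 ns for a cache reference, 300 to 600 ns for main memory). The sketch below is ours, not the paper's; it assumes the simplest possible miss model, in which a miss costs one full main-memory access.

```python
def effective_access_time(hit_ratio, cache_ns, memory_ns):
    """Mean time per reference, assuming a miss pays the full main-memory latency."""
    return hit_ratio * cache_ns + (1.0 - hit_ratio) * memory_ns

# Midpoints of the ranges cited in the text: 75 ns cache, 450 ns main memory.
for h in (0.90, 0.95, 0.99):
    print(f"hit ratio {h:.2f}: {effective_access_time(h, 75, 450):.1f} ns")
```

Even at a 90 percent hit ratio, the mean reference time (112.5 ns under these assumptions) is dominated by the cache rather than by main memory, which is why a small increase in cache access time can cost more than a small decrease in hit ratio.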

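The TLB described above is itself a small cache, holding recently generated (virtual address, real address) pairs so that the slow translator need not be consulted on every reference. The sketch below illustrates that idea only; the direct-mapped organization, the 64-entry size, and the 4096-byte page size are illustrative assumptions of ours (real TLBs of the era, such as those in the 370/168 and the 470 series, were set-associative and carried additional state).

```python
PAGE_SIZE = 4096   # assumed page size, for illustration only
TLB_SLOTS = 64     # assumed number of entries; real designs vary

class DirectMappedTLB:
    """Minimal direct-mapped TLB: caches recent (virtual page, real page) pairs."""

    def __init__(self):
        self.slots = [None] * TLB_SLOTS  # each slot holds (virtual_page, real_page)

    def translate(self, virtual_addr, translator):
        vpage, offset = divmod(virtual_addr, PAGE_SIZE)
        slot = vpage % TLB_SLOTS
        entry = self.slots[slot]
        if entry is not None and entry[0] == vpage:  # TLB hit: reuse the cached pair
            rpage = entry[1]
        else:                                        # TLB miss: invoke the (slow)
            rpage = translator(vpage)                # translator and cache the pair
            self.slots[slot] = (vpage, rpage)
        return rpage * PAGE_SIZE + offset

# Toy stand-in for the translator (page tables): maps virtual page vp to vp + 100.
tlb = DirectMappedTLB()
real = tlb.translate(2 * PAGE_SIZE + 12, lambda vp: vp + 100)
print(real)  # virtual page 2 maps to real page 102, so 102 * 4096 + 12
```

A second reference to the same page returns the cached pair without calling the translator at all, which is precisely the benefit the TLB provides.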