The Case for Compressed Caching in Virtual Memory Systems

Originally published in the Proceedings of the USENIX Annual Technical Conference, Monterey, California, USA, June 6-11, 1999. Copyright 1999 by The USENIX Association. All rights reserved. Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes.

Paul R. Wilson, Scott F. Kaplan, and Yannis Smaragdakis
Dept. of Computer Sciences
University of Texas at Austin
Austin, Texas 78751-1182
{wilson|sfkaplan|smaragd}@cs.utexas.edu
http://www.cs.utexas.edu/users/oops/

Abstract

Compressed caching uses part of the available RAM to hold pages in compressed form, effectively adding a new level to the virtual memory hierarchy. This level attempts to bridge the huge performance gap between normal (uncompressed) RAM and disk.

Unfortunately, previous studies did not show a consistent benefit from the use of compressed virtual memory. In this study, we show that technology trends favor compressed virtual memory: it is attractive now, offering a reduction in paging costs of several tens of percent, and it will become increasingly attractive as CPU speeds increase faster than disk speeds.

Two elements of our approach are innovative. First, we introduce novel compression algorithms suited to compressing in-memory data representations. These algorithms are competitive with more mature Ziv-Lempel compressors, and they complement them. Second, we adaptively determine how much memory (if any) should be compressed by keeping track of recent program behavior. This solves the problem of different programs, or phases within the same program, performing best with different amounts of compressed memory.

1 Introduction

For decades, CPU speeds have continued to double every 18 months to two years, but disk latencies have improved only very slowly. Disk latencies are five to six orders of magnitude greater than main memory access latencies, while other adjacent levels in the memory hierarchy typically differ by less than one order of magnitude. Programs that run entirely in RAM benefit from improvements in CPU speeds, but the runtime of programs that page is likely to be dominated by disk seeks, and such programs may run many times more slowly than CPU-bound programs.

In [Wil90, Wil91b] we proposed compressed caching for virtual memory: storing pages in compressed form in a main memory compression cache to reduce disk paging. Appel also promoted this idea [AL91], and it was evaluated empirically by Douglis [Dou93] and by Russinovich and Cogswell [RC96]. Unfortunately, Douglis's experiments with Sprite showed speedups for some programs but no speedup, or even a slowdown, for others, and Russinovich and Cogswell's data for a mixed PC workload showed only a slight potential benefit. There is a widespread belief that compressed virtual memory is attractive only for machines without a fast local disk, such as diskless handheld computers, network computers, and laptops with slow disks. As we and Douglis pointed out, however, compressed virtual memory becomes more attractive as CPUs continue to get faster. This crucial point seems to have been generally overlooked, and no operating system designers have adopted compressed caching.
In this paper, we make a case for the value of compressed caching in modern systems. We aim to show that the discouraging results of earlier studies were primarily due to the use of machines that were quite slow by current standards. For current, fast, disk-based machines, compressed virtual memory offers substantial performance improvements, and its advantages only increase as processors get faster. We also study future trends in memory and disk bandwidths. As we show, compressed caching will be increasingly attractive regardless of other OS improvements (such as sophisticated prefetching policies, which reduce the average cost of disk seeks, and log-structured file systems, which reduce the cost of writes to disk).

We will also show that the use of better compression algorithms can provide a significant further improvement in the performance of compressed caching. Better Ziv-Lempel variants are now available, and we introduce here a new family of compression algorithms designed for in-memory data representations rather than file data.

As we explain below, the results for these algorithms are quite encouraging. A straightforward implementation in C is competitive with the best assembly-coded Ziv-Lempel compressor we could find, and superior to the LZRW1 algorithm (written in C by Ross Williams) [Wil91a] used in previous studies of compressed virtual memory and compressed file caching. As we will explain, we believe that our results are significant not only because our algorithms are competitive with, and often superior to, advanced Ziv-Lempel algorithms, but because they are different. Despite their immaturity, they work well, and they complement other techniques. They also suggest areas for research into significantly more effective algorithms for in-memory data. (Our algorithms are also interesting in that they could be implemented in a very small amount of hardware, including only a tiny amount of space for dictionaries, providing extraordinarily fast and cheap compression with a small amount of hardware support.)

The concrete points in our analysis come from simulations of programs covering a variety of memory requirements and locality characteristics. At this stage of our experiments, simulation was our chosen method of evaluation because it allowed us to easily try many ideas in a controlled environment. It should be noted that all our simulation parameters are either relatively conservative or perfectly realistic. For instance, we assume a quite fast disk in our experiments. At the same time, the costs of compression and decompression used in our simulations are the actual runtime costs for the exact pages whose compression or decompression is being simulated at any given time.

The main value of our simulation results, however, is not in estimating the exact benefit of compressed caching (even though it is clearly substantial). Instead, we demonstrate that it is possible to detect reliably how much memory should be compressed during a phase of program execution. The result is a compressed virtual memory policy that adapts to program behavior.

The exact amount of compressed memory crucially affects program performance: compressing too much memory when it is not needed can be detrimental, as is compressing too little when slightly more would prevent many memory faults. Unlike any fixed fraction of compressed memory, our adaptive compressed caching scheme yields uniformly high benefits for all test programs across a wide range of memory sizes.
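To make the underlying trade-off concrete, the following back-of-the-envelope model compares the average cost of handling a fault with and without a compression cache. It is only an illustrative sketch: the disk fault time and the per-page compression and decompression costs below are assumed round numbers rather than measured values, and the fraction of faults satisfied by the compression cache is treated as a free parameter instead of being derived from a reference trace.

    /*
     * Back-of-the-envelope cost model for compressed caching.
     * All constants are illustrative assumptions, not measurements;
     * the actual evaluation in this paper is trace-driven and charges
     * each page its real compression and decompression cost.
     */
    #include <stdio.h>

    int main(void)
    {
        double t_disk       = 8000.0;  /* assumed average disk fault, microseconds */
        double t_compress   = 100.0;   /* assumed cost to compress one page        */
        double t_decompress = 80.0;    /* assumed cost to decompress one page      */

        /* h is the fraction of faults intercepted by the compression
         * cache; it depends on how much RAM is devoted to compressed
         * pages and on the program's locality. */
        for (double h = 0.0; h <= 1.0001; h += 0.25) {
            double fault_plain = t_disk;
            double fault_cc    = t_compress            /* evicted page is compressed */
                               + h * t_decompress      /* hit: decompress only       */
                               + (1.0 - h) * t_disk;   /* miss: still a disk fault   */
            printf("hit fraction %.2f: %7.0f us plain vs %7.0f us compressed (%.2fx)\n",
                   h, fault_plain, fault_cc, fault_plain / fault_cc);
        }
        return 0;
    }

Even this crude model makes the key point: the compression cache wins whenever the faults it intercepts save more disk time than the compression and decompression work they add, and because those CPU-bound costs shrink with every processor generation while disk fault times do not, the break-even point keeps shifting in favor of compression. It also shows why the amount of compressed memory matters: a larger compression cache raises the hit fraction, but it shrinks uncompressed RAM and can therefore increase the total number of faults.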
2 Compression Algorithms

In [WLM91] we explained how a compressor with knowledge of a programming language implementation could exploit that knowledge to achieve high compression ratios for data used by programs. In particular, we explained that pointer data contain very little information on average, and that pointers can often be compressed down to a single bit.

Here we describe algorithms that make much weaker assumptions, primarily exploiting data regularities imposed by hardware architectures and by common programming and language-implementation strategies. These algorithms are fast and fairly symmetrical: compression is not much slower than decompression. This makes them especially suitable for compressed virtual memory applications, where pages are compressed about as often as they are decompressed.[1]

[1] A variant of one of our algorithms has been used successfully for several years in the virtual memory system of the Apple Newton, a personal digital assistant with no disk [SW91] (Walter Smith, personal communication, 1994, 1997). While we have not previously published this algorithm, we sketched it for Smith and he used it in the Newton, with good results: it achieved slightly less compression than a Ziv-Lempel algorithm Apple had used previously, but was much faster. Unfortunately, we do not have any detailed performance comparisons.

2.1 Background: Compression

To understand our algorithms and their relationship to other algorithms, it is necessary to understand a few basic ideas about data compression. (We focus on lossless compression, which allows exact reconstruction of the original data, because lossy compression would generally be a disaster for compressed VM.)

All data compression algorithms are in a deep sense ad hoc: they must exploit expected regularities in data to achieve any compression at all. Every compression algorithm embodies expectations about the kinds of regularities that will be encountered in the data being compressed. Depending on the kind of data, these expectations may be appropriate or inappropriate, and compression may work better or worse. The main key to good compression is having the right kinds of expectations for the data at hand.

Compression can be thought of as consisting of two phases, which are typically interleaved in practice: modeling and encoding [BCW90, Nel95]. Modeling is the process of detecting regularities that allow a more concise representation of the information; encoding is the construction of that more concise representation.

Ziv-Lempel compression. Most compression algorithms, including the overwhelmingly popular Ziv-Lempel family, are based on the detection of exact repetitions of strings of atomic tokens. The token size is usually one byte, for speed reasons and because much data [...]
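To make the notion of detecting exact repetitions of byte strings concrete, the sketch below shows the modeling loop of a simple LZ77-style compressor in the general spirit of LZRW1: it hashes the next three bytes to remember one previous position per hash bucket, and reports a copy item (offset, length) when the remembered position still matches, or a literal byte otherwise. This is a hypothetical illustration, not the LZRW1 code used in the earlier studies cited above, and it performs only the modeling phase, printing the items it finds rather than encoding them into a compact output format.

    /*
     * Modeling phase of a minimal LZ77/LZRW1-flavored compressor:
     * detect exact repetitions of byte strings and report them as
     * (offset, length) copy items; everything else is a literal.
     * Illustrative sketch only; the encoding phase that would pack
     * these items into a compact representation is omitted.
     */
    #include <stdio.h>
    #include <string.h>

    #define HASH_SIZE 4096
    #define MIN_MATCH 3
    #define MAX_MATCH 18

    /* Hash the next three bytes into a small table index. */
    static unsigned hash3(const unsigned char *p)
    {
        unsigned v = (unsigned)(p[0] << 16 | p[1] << 8 | p[2]);
        return (v * 2654435761u >> 20) & (HASH_SIZE - 1);
    }

    static void lz_model(const unsigned char *in, size_t n)
    {
        static size_t table[HASH_SIZE];  /* last position seen for each hash  */
        static int    seen[HASH_SIZE];   /* whether the slot holds a position */
        size_t i = 0, literals = 0, copies = 0;

        while (i + MIN_MATCH <= n) {
            unsigned h   = hash3(in + i);
            size_t   len = 0;

            if (seen[h]) {                      /* candidate earlier occurrence */
                size_t prev = table[h];
                size_t max  = (n - i < MAX_MATCH) ? n - i : MAX_MATCH;
                while (len < max && in[prev + len] == in[i + len])
                    len++;
                if (len >= MIN_MATCH) {
                    printf("copy    offset=%zu len=%zu\n", i - prev, len);
                    copies++;
                }
            }
            table[h] = i;                       /* remember this position */
            seen[h]  = 1;

            if (len >= MIN_MATCH) {
                i += len;                       /* skip over the repetition */
            } else {
                printf("literal '%c'\n", in[i]);
                literals++;
                i++;
            }
        }
        while (i < n) {                          /* trailing bytes are literals */
            printf("literal '%c'\n", in[i]);
            literals++;
            i++;
        }
        printf("%zu literals, %zu copy items\n", literals, copies);
    }

    int main(void)
    {
        const char *s = "the compression cache compresses the compressible";
        lz_model((const unsigned char *)s, strlen(s));
        return 0;
    }

A real compressor would follow this modeling loop with the encoding step described above, packing the stream of literals and copy items into a concise bit-level representation. Schemes of this kind are fast precisely because they remember only one previous position per hash bucket, trading some compression for speed.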
