The Compression Cache: Virtual Memory Compression for Handheld Computers

Michael J. Freedman
Recitation: Rivest TR1
March 16, 2000

Abstract

Power consumption and speed are the largest costs for a virtual memory system in handheld computers. This paper describes a method of trading off computation and usable physical memory to reduce disk I/O. The design uses a compression cache, keeping some virtual memory pages in compressed form rather than sending them to the backing store. Efficiency is managed by a log-structured circular buffer, supporting dynamic memory partitioning, diskless operation, and disk spin-down.

Contents

1 Introduction
2 Design Criteria and Considerations
  2.1 Compression to Segmented Memory
  2.2 Reversed Compression in Fixed-Size Pages
  2.3 Non-Reversed Compression in Fixed-Size Pages
  2.4 Compression to Log-Structured Buffers
3 Design
  3.1 Virtual Memory
    3.1.1 Basic Hierarchical Paging
    3.1.2 Modified Page Tables
    3.1.3 Cache Descriptor Table
    3.1.4 Circular Compression Cache
  3.2 Paging the Compressed Store
    3.2.1 Storing Compressed Pages
    3.2.2 Recovering Compressed Pages
  3.3 Variable Memory Allocation
  3.4 Optimized Disk Accesses
  3.5 Diskless Operation
4 Results and Discussion
  4.1 Energy Efficiency
    4.1.1 Constant Clock Speed
    4.1.2 Disk Stores versus In-Memory Compression
  4.2 Memory Efficiency
    4.2.1 Initial RAM Partitioning
    4.2.2 Hierarchical Page Table Size
  4.3 Prefetching Pages
  4.4 Disk Spin Policy
  4.5 Technology Trends
5 Conclusion

1 Introduction

The handheld computer industry has witnessed significant growth in the past few years. Users have begun to use personal data assistants (PDAs) and other mobile computers in great numbers. The applications and features provided by these systems have expanded to match this interest.
Newer models of the Palm or Windows CE PDAs provide increased storage capacity; applications such as spreadsheets, databases, and document viewers; and wireless communication to read email and news. Users desire even greater functionality: the ability to surf the Web, to communicate with audio and video, to play music (with the advent of mp3 files), and to use speech-based interfaces. This list is far from all-inclusive, but many of these operations share the nature of being highly memory-intensive.

The greater demands placed on mobile computers are difficult to resolve in the face of technological trends. While the processor power and physical memory size of workstations have increased dramatically in the past decade, handheld computers have significantly less memory. The development of memory-intensive applications and faster processors for handheld systems only compounds this problem. Software designers are forced to create programs with smaller memory footprints, but the available physical memory still might not be sufficient. As with all modern computers, virtual memory and paging are necessary to support a variety of applications.

The difficulty in paging on handheld computers follows similar technological trends. While processor speed has increased greatly, I/O speed and battery life have not witnessed similar growth. Handheld computers may communicate over slower wireless networks, run diskless, or use smaller, slower local disks. An on-board battery generally supplies the system's power. The dominant cost of a virtual memory system is backing stores to disk. VM performance on a mobile computer can be improved by decreasing traffic to and from the backing store. This problem is often handled by implementing page replacement policies that swap out pages in memory that will not be touched for the longest time.
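One common such policy keeps resident pages ordered by recency of use and evicts the least-recently-used page when memory is full. A minimal sketch of that heuristic, assuming an illustrative `LRUPageSet` class and capacity that are not part of this paper's design:

```python
from collections import OrderedDict

class LRUPageSet:
    """Tracks resident pages; evicts the least-recently-used page on overflow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page number -> page data, oldest first

    def touch(self, page, data=None):
        """Reference a page, loading it if absent; returns the evicted page or None."""
        evicted = None
        if page in self.pages:
            self.pages.move_to_end(page)  # mark as most recently used
        else:
            if len(self.pages) >= self.capacity:
                # Evict the least-recently-used page (front of the ordering).
                evicted, _ = self.pages.popitem(last=False)
            self.pages[page] = data
        return evicted
```

A sequential scan over more pages than fit in memory is one referencing pattern for which this heuristic degenerates: every page is evicted just before it is reused.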
An optimal replacement policy cannot be performed on-line; therefore, techniques such as the least-recently-used (LRU) heuristic and the working set model only approximate optimal page selection based on previous behavior. These algorithms may perform poorly for specific referencing patterns.

An obvious alternative to reduce dependence on the backing store is to increase the number of pages that can be stored in physical memory. This paper describes a system that uses compression to do exactly that. A section of the physical memory that would normally be used directly by the application is used instead to hold pages in compressed form. This area is known as the compression cache. With more pages in physical memory, fewer page faults are issued by the virtual memory manager, and fewer slow, energy-intensive I/O accesses are necessary. An important consequence of decreased disk use is the associated savings in power consumption.

The remainder of this paper is organized as follows: section 2 details the design requirements and various options for a compressed virtual memory scheme; section 3 presents a detailed description of the selected compressed virtual memory design; and section 4 reports calculations and discusses the design choices made.

2 Design Criteria and Considerations

The virtual memory system for our handheld computer must support several criteria. The design must be able to compress pages in memory. It requires a method by which compressed pages may be referenced and extracted from the compressed store. Furthermore, this process needs to be relatively quick and should save considerable battery power as compared to disk accesses.

Compression algorithms result in variable-sized encodings. In a typical linked data structure, many words point to nearby objects, are nil, or contain small integers or zero.
Thus, algorithms take advantage of this redundancy, or low information-theoretic entropy, to reduce the encoding needed to represent a word. Using Lempel-Ziv coding, a block of memory is compressible to one-quarter of its size on average [1, 11]. Variable-sized compressed pages add complexity to virtual memory systems. Paging has been generally adopted as a simpler, more efficient storage technique, yet it requires fixed-size storage. Several possible means for storing compressed pages are considered.

2.1 Compression to Segmented Memory

Segmentation allows variable-sized storage at any location in memory. As memory operations occur, however, the storage becomes "fragmented" as free chunks of memory appear between sections being used. At some stage, the memory manager needs to shuffle these free memory segments around (known as "burping the memory") to compact the storage. The cost of this process of coalescing free memory into contiguous buffers depends on the memory size. To reduce this complexity and improve speed, one can imagine partitioning memory into two, four, or even more chunks on which burping is performed independently. This partitioning, however, leads to the model of fixed-size paging.

2.2 Reversed Compression in Fixed-Size Pages

Compressed pages are stored within the system's fixed-size 8 KByte pages, and are removed from compressed storage when paged back in. The memory manager could locate available space for a compressed page and copy it to that free space. To fetch the compressed page, the normal hierarchical page table walk is used to locate it. The data is uncompressed back to an 8 KByte page in uncompressed space, and the region is marked as "free." However, a system process is still needed to burp each fixed page to recover the fragmented space.

2.3 Non-Reversed Compression in Fixed-Size Pages

With non-reversed compression, pages are only copied, not removed, when paged from the compressed store.
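The compressibility figures above are easy to check with zlib's deflate coder, which is Lempel-Ziv based (a stand-in for whatever coder an implementation would use; the 8 KByte page size comes from the text, while the pointer-heavy fill pattern is an illustrative assumption):

```python
import os
import zlib

PAGE_SIZE = 8 * 1024  # the 8 KByte fixed page size used in the text

# A low-entropy page: 4-byte words holding small integers, most bytes zero,
# mimicking a linked data structure full of nil words and small values.
typical_page = b"".join(
    (i % 16).to_bytes(4, "little") for i in range(PAGE_SIZE // 4)
)

# A page of random bits: no redundancy, so effectively incompressible.
random_page = os.urandom(PAGE_SIZE)

typical_ratio = len(zlib.compress(typical_page)) / PAGE_SIZE
random_ratio = len(zlib.compress(random_page)) / PAGE_SIZE
```

On the low-entropy page the ratio lands well under the one-quarter average cited above; the random page stays essentially full size, which is exactly the incompressible case that forces a compression scheme to handle full-sized "compressed" pages.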
This scheme can lead to several copies of data in memory. Memory management can prove more difficult, approaching something similar to garbage collection. A compressed page is only evicted if its corresponding uncompressed page is dirty, or if the compressed page is no longer referenced. If an uncompressed page does not change, it can merely reference the existing compressed copy when moved into compressed space.

This design has a major problem associated with compressing new pages. The system would encounter difficulty when attempting to allocate new memory in which to store compressed pages. Any given fixed page in compressed space might be storing a mix of both current and outdated pages. The entire page could not therefore be thrown away, yet this design seeks specifically to avoid memory compacting. Deadlocks during eviction are foreseeable, as both possible evictees could hold current compressed pages that could not be flushed. Processes could quickly run out of space for compressed storage; using a resizable RAM partition only postpones this problem.

Both techniques for compressing data into fixed-size pages have management difficulties. Pages might not be compressible (a stream of completely random bits has no redundant information), and thus compression may yield full 8 KByte pages. To maintain the correctness of our pseudo-LRU algorithm, these pages would still be moved into the compressed store. However, there is no room on such a full fixed page to store any extra information, such as the compressed page's back reference, information bits, or page offset. An external data structure not referenced by the page table would add complexity to the design.

2.4 Compression to Log-Structured Buffers

A log-structured circular buffer [9] maps physical pages into the kernel's virtual address space, one by one, eventually wrapping around to the start of the cache.
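A minimal sketch of the log-structured idea: variable-sized compressed pages are appended at a head pointer inside one contiguous buffer, with a descriptor table mapping each virtual page to its offset and length; writes that wrap around overwrite, and thereby invalidate, the oldest entries. All names and sizes below are illustrative assumptions, not this paper's design:

```python
import zlib

class CompressionCache:
    """Illustrative log-structured circular buffer of compressed pages."""

    def __init__(self, size):
        self.buf = bytearray(size)
        self.head = 0    # next write offset in the log
        self.table = {}  # page number -> (offset, length) descriptor

    def store(self, page_no, data):
        blob = zlib.compress(data)
        if self.head + len(blob) > len(self.buf):
            self.head = 0  # wrap around to the start of the cache
        start, end = self.head, self.head + len(blob)
        # Drop descriptors whose extent this write overwrites.
        self.table = {p: (o, n) for p, (o, n) in self.table.items()
                      if o + n <= start or o >= end}
        self.buf[start:end] = blob
        self.table[page_no] = (start, len(blob))
        self.head = end

    def fetch(self, page_no):
        # KeyError here means the page was overwritten or never stored.
        off, n = self.table[page_no]
        return zlib.decompress(bytes(self.buf[off:off + n]))
```

Because entries are only ever appended at the head, no burping process is needed; reclaiming space is implicit in the wrap-around.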
