Demand Paging

Operating Systems: Internals and Design Principles, 6/E, by William Stallings
Chapter 8: Virtual Memory
Slides by Dave Bremer, Otago Polytechnic, N.Z. ©2008, Prentice Hall

Roadmap
• Hardware and Control Structures
• Operating System Software
• UNIX and Solaris Memory Management
• Linux Memory Management
• Windows Memory Management

Key Points in Memory Management
1) Memory references are logical addresses that are dynamically translated into physical addresses at run time
– A process may be swapped in and out of main memory, occupying different regions at different times during execution
2) A process may be broken up into pieces that do not need to be located contiguously in main memory

Breakthrough in Memory Management
• If both of those characteristics are present, then it is not necessary that all of the pages or all of the segments of a process be in main memory during execution.
• If the next instruction and the next data location are in memory, then execution can proceed, at least for a time.

Execution of a Process
• The operating system brings into main memory a few pieces of the program.
• Resident set: the portion of the process that is in main memory.
• An interrupt is generated when an address is needed that is not in main memory, and the operating system places the process in a blocking state.
• The piece of the process that contains the logical address is then brought into main memory:
– The operating system issues a disk I/O read request.
– Another process is dispatched to run while the disk I/O takes place.
– An interrupt is issued when the disk I/O completes, which causes the operating system to place the affected process in the Ready state.

Implications of This New Strategy
• More processes may be maintained in main memory
– Only some of the pieces of each process need to be loaded
– With so many processes in main memory, it is very likely that a process will be in the Ready state at any particular time
• A process may be larger than all of main memory

Real and Virtual Memory
• Real memory: main memory, the actual RAM
• Virtual memory: memory on disk
– Allows for effective multiprogramming and relieves the user of the tight constraints of main memory

Thrashing
• A state in which the system spends most of its time swapping pieces rather than executing instructions.
• To avoid this, the operating system tries to guess which pieces are least likely to be used in the near future. The guess is based on recent history.

Principle of Locality
• Program and data references within a process tend to cluster.
• Only a few pieces of a process will be needed over a short period of time, so it is possible to make intelligent guesses about which pieces will be needed in the future.
• This suggests that virtual memory may work efficiently.

A Process's Performance in a VM Environment
• Note that during the lifetime of the process, references are confined to a subset of pages.
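The clustering of references is easy to observe mechanically. The short sketch below is an illustration, not part of the Stallings slides: the reference stream and the window size are made up. It slides a fixed-size window over a page-reference stream and reports how many distinct pages fall inside it; with clustered references, that count stays small even as the process touches many pages overall.

```python
from collections import Counter

def working_set_sizes(references, window=4):
    """For each point in a page-reference stream, count the distinct
    pages touched in the last `window` references (the working set)."""
    sizes = []
    counts = Counter()  # page -> occurrences inside the current window
    for i, page in enumerate(references):
        counts[page] += 1
        if i >= window:                   # slide the window forward
            old = references[i - window]
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        sizes.append(len(counts))
    return sizes

# A stream with locality: long runs over a small set of pages.
stream = [0, 1, 0, 1, 2, 1, 0, 7, 8, 7, 8, 9, 8, 7]
# Working-set sizes stay small: references cluster on a few pages at a time.
print(working_set_sizes(stream))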
Support Needed for Virtual Memory
• Hardware must support paging and segmentation.
• The operating system must be able to manage the movement of pages and/or segments between secondary memory and main memory.

Paging
• Each process has its own page table.
• Each page table entry contains the frame number of the corresponding page in main memory.
• Two extra bits are needed to indicate:
– whether the page is in main memory or not
– whether the contents of the page have been altered since it was last loaded

[Figure: Address Translation in a Paging System]

Hierarchical (Two-Level) Page Tables
• Page tables are themselves stored in virtual memory.
• When a process is running, only part of its page table is in main memory.
• A drawback of this type of page table is that its size grows in proportion to that of the virtual address space.
• An alternative is the inverted page table.

Inverted Page Table
• Used on the PowerPC, UltraSPARC, and IA-64 architectures.
• The page number portion of a virtual address is mapped into a hash value, and the hash value points into the inverted page table.
• A fixed proportion of real memory is required for the table, regardless of the number of processes.
• Each entry in the table includes:
– Page number
– Process identifier: the process that owns this page
– Control bits: flags such as valid, referenced, etc.
– Chain pointer: the index value of the next entry in the chain

Translation Lookaside Buffer (TLB)
• Each virtual memory reference can cause two physical memory accesses:
– one to fetch the page table entry
– one to fetch the data
• To overcome this problem, a high-speed cache called a Translation Lookaside Buffer (TLB) is set up for page table entries. It contains the page table entries that have been most recently used.

TLB Operation
• Given a virtual address, the processor first examines the TLB.
• If the page table entry is present (a TLB hit), the frame number is retrieved and the real address is formed.
• If the page table entry is not found in the TLB (a TLB miss), the page number is used to index the process page table:
– If the page is already in main memory, the TLB is updated to include the new page table entry and translation proceeds.
– If the page is not in main memory, a page fault is issued.

[Figure: TLB and Cache Operation]

Associative Mapping
• Because the TLB contains only some of the page table entries, we cannot simply index into the TLB by page number.
– Each TLB entry must therefore include the page number as well as the complete page table entry.
• The processor can query numerous TLB entries simultaneously to determine whether there is a page number match.

Page Size
• A smaller page size means less internal fragmentation.
• But a smaller page size also means more pages are required per process:
– more pages per process means larger page tables
– larger page tables mean that a large portion of the page tables resides in virtual memory
• Secondary memory is designed to transfer large blocks of data efficiently, which favours a larger page size.
• With a small page size, a large number of pages will be found in main memory; as execution proceeds, the pages in memory will all contain portions of the process near recent references, and page faults stay low.
• Increasing the page size causes pages to contain locations further from any recent reference, and page faults rise.
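Before moving on to segmentation, the TLB-first translation path described above can be sketched in a few lines. This is an illustrative model only: the 4 KiB page size, the tiny TLB capacity, and the dictionary-based page table are assumptions, not anything prescribed by the slides.

```python
PAGE_OFFSET_BITS = 12           # assume 4 KiB pages
PAGE_SIZE = 1 << PAGE_OFFSET_BITS

class PageFault(Exception):
    pass

def translate(vaddr, page_table, tlb, tlb_capacity=4):
    """TLB-first translation: virtual address -> physical address."""
    page = vaddr >> PAGE_OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)

    if page in tlb:                       # TLB hit: no page-table access
        frame = tlb[page]
    else:                                 # TLB miss: walk the page table
        entry = page_table.get(page)
        if entry is None or not entry["present"]:
            raise PageFault(page)         # OS must bring the page in
        frame = entry["frame"]
        if len(tlb) >= tlb_capacity:      # simplistic FIFO-ish eviction
            tlb.pop(next(iter(tlb)))
        tlb[page] = frame                 # cache the translation
    return (frame << PAGE_OFFSET_BITS) | offset

page_table = {0: {"present": True, "frame": 5},
              1: {"present": False, "frame": None}}
tlb = {}
print(hex(translate(0x0ABC, page_table, tlb)))   # 0x5abc via the page table
print(hex(translate(0x0DEF, page_table, tlb)))   # 0x5def via a TLB hit
```

Note the ordering: the page table is consulted only on a TLB miss, which is exactly why a well-hit TLB removes the extra memory access per reference.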
[Table: Example Page Sizes]

Segmentation
• Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments.
– Segments may be of unequal, dynamic size.
– Simplifies handling of growing data structures.
– Allows programs to be altered and recompiled independently.
– Lends itself to sharing data among processes.
– Lends itself to protection.

Segment Organization
• Each segment table entry contains the starting address of the corresponding segment in main memory and the length of the segment.
• A bit is needed to determine whether the segment is already in main memory.
• Another bit is needed to determine whether the segment has been modified since it was loaded into main memory.

[Figure: Address Translation in a Segmentation System]

Combined Paging and Segmentation
• Paging is transparent to the programmer.
• Segmentation is visible to the programmer.
• Each segment is broken into fixed-size pages.

Protection and Sharing
• Segmentation lends itself to the implementation of protection and sharing policies.
• Because each entry has a base address and a length, inadvertent memory access can be controlled.
• Sharing can be achieved by having a segment referenced by multiple processes.

[Figure: Protection Relationships Between Segments]

Roadmap
• Hardware and Control Structures
• Operating System Software
• UNIX and Solaris Memory Management
• Linux Memory Management
• Windows Memory Management

Key Design Elements in Memory Management
• Whether or not to use virtual memory techniques
• The use of paging, segmentation, or both
• The algorithms employed for the various aspects of memory management
• Key aim: minimise page faults
– There is no definitive best policy.

Fetch Policy: Demand Paging and Prepaging
• The fetch policy determines when a page should be brought into memory. There are two main types:
• Demand paging
– brings a page into main memory only when a reference is made to a location on that page
– causes many page faults when a process first starts
• Prepaging
– brings in more pages than are currently needed
– it is more efficient to bring in pages that reside contiguously on the disk
– don't confuse prepaging with "swapping"

Placement Policy
• Determines where in real memory a process piece is to reside.
• Important in a pure segmentation system.
• With paging, or combined paging and segmentation, the hardware performs address translation with equal efficiency for any page-frame combination, so placement matters little.

Replacement Policy
• When all of the frames in main memory are occupied and it is necessary to bring in a new page, the replacement policy determines which page currently in memory is to be replaced.
• The page removed should be the page least likely to be referenced in the near future.
– How is that determined? By the principle of locality again: most policies predict future behaviour on the basis of past behaviour.

But... Frame Locking
• If a frame is locked, it may not be replaced. Frames that are typically locked hold:
– the kernel of the operating system
– key control structures
– I/O buffers
• A lock bit is associated with each frame.

Basic Replacement Algorithms
• Certain basic algorithms are used for the selection of a page to replace:
– Optimal
– Least recently used (LRU)
– First-in-first-out (FIFO)
– Clock

Examples
• The examples of these policies use a page address stream formed by executing a program:
– 2 3 2 1 5 2 4 5 3 2 5 2
• That is, the first page referenced is 2, the second page referenced is 3, and so on. (A sketch running FIFO and LRU on this stream follows below.)
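Before turning to the optimal policy, it is useful to run the two simplest policies on this same stream. The sketch below is illustrative rather than part of the slides; it assumes a three-frame allocation, the usual setup for this example.

```python
from collections import OrderedDict, deque

STREAM = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]

def fifo_faults(refs, frames=3):
    """Count page faults under FIFO replacement."""
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()              # evict the oldest-loaded page
            queue.append(page)
    return faults

def lru_faults(refs, frames=3):
    """Count page faults under LRU replacement."""
    recency, faults = OrderedDict(), 0       # least recently used first
    for page in refs:
        if page in recency:
            recency.move_to_end(page)        # refresh on every reference
        else:
            faults += 1
            if len(recency) == frames:
                recency.popitem(last=False)  # evict least recently used
            recency[page] = True
    return faults

print("FIFO faults:", fifo_faults(STREAM))   # -> 9 (6 after the 3 cold-start fills)
print("LRU  faults:", lru_faults(STREAM))    # -> 7 (4 after the 3 cold-start fills)
```

With three frames, FIFO incurs six faults and LRU four once the frames are filled, which makes the optimal policy's three faults (next slide) the benchmark the realizable algorithms are judged against.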
Optimal Policy
• Selects for replacement the page for which the time until the next reference is the longest.
• It is, however, impossible to have perfect knowledge of future events, so the optimal policy serves as a yardstick against which the realizable algorithms are judged.
• On the example stream, the optimal policy produces three page faults after the frame allocation has been filled.
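That three-fault figure can be checked mechanically. Below is a minimal sketch of the optimal (farthest-next-use) policy on the same stream, again assuming three frames; the initial faults that merely fill the empty frames are excluded, matching the counting convention above.

```python
STREAM = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]

def optimal_faults_after_fill(refs, frames=3):
    """Count faults under the optimal (farthest-next-use) policy,
    excluding the initial faults that merely fill the empty frames."""
    resident, faults_after_fill = set(), 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        if len(resident) < frames:          # cold-start fill, not counted
            resident.add(page)
            continue
        faults_after_fill += 1
        future = refs[i + 1:]
        # Evict the resident page whose next use is farthest away
        # (a page never used again counts as infinitely far).
        victim = max(resident,
                     key=lambda p: future.index(p) if p in future
                     else float("inf"))
        resident.remove(victim)
        resident.add(page)
    return faults_after_fill

print(optimal_faults_after_fill(STREAM))    # -> 3, matching the slide
```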