Design Decisions: Page Size, Page Fault Rate, Resident Set


Käyttöjärjestelmät (Operating Systems), Lecture 9 – WEEK 5
Virtual memory algorithms and examples
Stallings, Chapter 8

Design decisions
• Virtual memory or not?
• Paging or segmentation?
• What algorithms?
CONSIDERING ONLY PAGING:
• What page size?

Page size
• Reduce internal fragmentation → small pages
• Reduce page table size → large pages
• Proportional (1x, 2x, …) to the disk block size
• The optimal value is different for different programs
• Increase the TLB hit ratio → large pages
• Different page sizes for different applications?
• How does the MMU know the page size? (Tbl 8.2 [Stal05])

Page fault rate
• More small pages fit into the same memory space
• References from large pages are more likely to go to a page not yet in memory
• References from small pages tend to hit pages already in memory, because the often-used pages get selected into memory
• Page size closer to the process size
• Too small a working set → larger page fault rate
• Large working set → nearly all pages in memory

Algorithms for memory management
• Fetch policy
  • Demand paging vs prepaging
• Placement policy
• Replacement policy
  • Resident set management, selecting the page to be replaced
  • Methods: OPT, LRU, FIFO, random, …
• Cleaning policy
  • Demand cleaning vs precleaning, buffering
• Load control
  • Working set size

Resident set management
• Number of page frames allocated to a process (resident set size)
  • Fixed: how large, identical for all?
  • Variable: based on what? The working set?
• Replacement scope
  • Global: over all pages (all processes)
  • Local: just the pages of this process
• Size of the resident set
  • Too small: allows a higher multiprogramming level, but more page faults and thrashing
  • Too large: lower CPU utilisation, wasted memory

Autumn 2007, Tiina Niklander

Resident set management (e.g. PFF) (Tbl 8.4 [Stal05])

Replacement policy (korvauspolitiikka)
• OPT – the optimal algorithm
  • Nothing is better than this; unrealistic to implement
  • Clairvoyant: knows the future references and chooses based on them
• LRU – least recently used
  • High amount of bookkeeping
  • Modification: NRU (not recently used), easier to implement; only maintains information about recent usage (for example a use bit)
• FIFO – first in, first out
  • Easy to implement, but not very useful
  • Modification: second chance – do not replace a page that was just recently used
• Clock
• Metrics? The number of page faults, or the page fault rate

Clock
• Replace a page whose use bit is zero; while scanning, reset the use bits
• The MMU sets the use bit to 1 every time a memory reference to that page is made

Working set (Fig 8.19 [Stal05])
• The set of pages used during the last Δ references (window size)
• Pages not in the working set are candidates to be released or replaced
• A suitable size for the working set? Estimate based on the page fault rate
• 1 ≤ |W(t, Δ)| ≤ min(Δ, n), where n is the number of pages of the process

Dynamically adjusting the resident set
• PFF – Page Fault Frequency (Tbl 8.5 [Stal05])
• VSWS – Variable-interval Sampled Working Set

UNIX/Solaris: data structures
• Page table: one for each process, with one entry for each logical page
• Where is it located in main memory?
[Figure: pages of processes P and Q; P's page holds X = 7; after Q writes, Q's own page holds X = 5]

UNIX/4BSD: core map (Fig 4-33 [Tane01])
[Figure: a 16 B core map entry per 1 KB frame records the location on disk, the owner, the segment, global state and removal-algorithm fields]

Swap area, VM backup store (Fig 10-16 [Tane01])
• Fixed allocation – for the whole process, in advance
• Dynamic allocation – only for the pages that are swapped out

UNIX / Solaris (+4BSD) MEMORY MANAGEMENT

Two-handed clock (kaksiviisarinen Clock)
• Fronthand: set Reference = 0
• Backhand: if (Reference == 0), page out
• Pages not used during the sweep are assumed not to be in the working sets of the processes
• Suits large memories better
• Speed of the hands (frames/sec)? Scanrate
• Gap between the hands? Handspread
(Fig 8.23 [Stal05])
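The two-handed clock sweep described above can be sketched in a few lines. This is an illustrative simulation, not Solaris code: the `Frame` class, the fixed frame list, and the `rereferenced` set (standing in for MMU activity between the hands) are all assumptions made for the example.

```python
class Frame:
    def __init__(self, page):
        self.page = page
        self.reference = 1  # use bit, set by the MMU on every access

def two_handed_sweep(frames, handspread, rereferenced):
    """One sweep: the fronthand clears reference bits; the backhand,
    trailing `handspread` frames behind, pages out frames whose bit
    is still 0. `rereferenced` simulates pages touched between hands."""
    evicted = []
    n = len(frames)
    for i in range(n + handspread):
        if i < n:
            frames[i].reference = 0          # fronthand: reset use bit
            if frames[i].page in rereferenced:
                frames[i].reference = 1      # MMU: page used again
        j = i - handspread
        if 0 <= j < n and frames[j].reference == 0:
            evicted.append(frames[j].page)   # backhand: page out
    return evicted

frames = [Frame(p) for p in "ABCDE"]
print(two_handed_sweep(frames, 2, {"B", "D"}))  # ['A', 'C', 'E']
```

A larger handspread gives pages more time to be referenced again before the backhand reaches them, which is exactly the scanrate/handspread tuning question raised on the slide.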
UNIX/Solaris: kernel memory allocation
• The kernel allocates and deallocates small areas and buffers all the time, in large numbers
  • For example: proc, vnode and file descriptor structures
• The kernel needs to allocate and deallocate fragments of pages, not only full pages (Fig 8.25 [Stal05])
• Dynamic partitioning for allocations within one page
• ⇒ Lazy Buddy System: delay coalescing until there are 'too many' fragments of a certain size

Linux: paging
• Architecture independent, based on paging
• Pages can be locked into memory
• Alpha: 64-bit addresses, missing full support from 3 levels; page size 8 KB, offset 13 bits
• x86: 32-bit addresses, only 2 levels; page size 4 KB, offset 12 bits
• 3-level page table
  • page directory (PD), 1 page
  • page middle directory (PMD), multiple pages
  • page table (PT), multiple pages
• The page directory is always in memory; its address is the page table address given to the MMU at a process switch
• The other pages can be on disk

Linux MEMORY MANAGEMENT (Ch 8.4 [Stal05])

Linux: address translation
• Split the address into 4 fields
  • index into the page directory (PD-offset)
  • index into the page middle directory (PMD-offset)
  • index into the page table (PT-offset)
  • offset
• PMD_base = PD_base + PD[PD_offset]
  PT_base = PMD_base + PMD[PMD_offset]
  Page_base = PT_base + PT[PT_offset]
  physical address = Page_base + offset
• x86 trick
  • The page middle directory (PMD) is reduced to one entry, which the compiler optimizes away
  • The MMU assumes that the page directory refers directly to the page table (PT)
  • Suits all architectures that support only two-level paging
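The PD/PMD/PT arithmetic above can be made concrete with a toy walk. The bit widths (10 + 5 + 5 + 12 on a hypothetical 32-bit machine) and the nested tables are assumptions chosen for the example, not a real kernel layout; real entries also carry flag bits that are ignored here.

```python
# Field widths for a hypothetical 32-bit split: PD | PMD | PT | offset
PD_BITS, PMD_BITS, PT_BITS, OFFSET_BITS = 10, 5, 5, 12

def split(vaddr):
    """Split a virtual address into the four indices of the slide."""
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    pt_off = (vaddr >> OFFSET_BITS) & ((1 << PT_BITS) - 1)
    pmd_off = (vaddr >> (OFFSET_BITS + PT_BITS)) & ((1 << PMD_BITS) - 1)
    pd_off = vaddr >> (OFFSET_BITS + PT_BITS + PMD_BITS)
    return pd_off, pmd_off, pt_off, offset

def translate(vaddr, pd):
    """Walk PD -> PMD -> PT. Tables are nested dicts standing in for
    the in-memory directories; each entry yields the next level, the
    slide's base+entry arithmetic collapsed into a lookup."""
    pd_off, pmd_off, pt_off, offset = split(vaddr)
    pmd = pd[pd_off]            # PMD_base from PD[PD_offset]
    pt = pmd[pmd_off]           # PT_base from PMD[PMD_offset]
    page_base = pt[pt_off]      # Page_base from PT[PT_offset]
    return page_base + offset   # physical address = Page_base + offset

pd = {1: {2: {3: 0x5000}}}      # one mapped page at frame base 0x5000
vaddr = (1 << 22) | (2 << 17) | (3 << 12) | 0x2A
print(hex(translate(vaddr, pd)))  # 0x502a
```

In this picture the x86 trick corresponds to collapsing every PMD into a single-entry table, so the middle lookup costs nothing — which is what the compiler optimizes away.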
Linux: memory management
• On 32-bit architectures
  • 3 GB reserved for user processes
  • 1 GB for the kernel (including the page tables)
• Kernel mode is allowed to reference the full 4 GB
• The kernel's own memory allocation is based on slabs
  • Only for the kernel: efficient allocation of small memory areas, avoids allocating full pages

Linux: placement (allocation)
• Reserves a consecutive area (region) of physical memory for consecutive pages, at the beginning of the physical memory
  • More efficient fetching and storing with disks
  • Optimizes disk utilisation, at the cost of memory allocation
• Buddy system: 1, 2, 4, 8, 16 or 32 page frames
  • Allocates pages in large groups, which increases internal fragmentation
• Implementation: a table with a list of unallocated page regions for each size

Linux: replacement
• Global Clock using the M-bit, plus a sort of LRU
• An 8-bit age counter for dynamically allocated page frames (cf. the use bit)
  • The MMU increases the counter at each page reference
  • A background process (kswapd) decreases the counters every second
  • If too few page frames are free, unused frames are deallocated
• Uses a page buffer to store deallocated pages for a short duration

Slab allocation (Fig 9.27 – Silberschatz, Galvin & Gagne: Operating System Concepts with Java)

Linux VM outline (http://home.earthlink.net/~jknapka/linux-mm/vmoutline.html)
[Figure: pages flow from demand paging, read-ahead and the disk through the page cache and swap cache (swap_out) between three lists: active_list (age > 0, in use), inactive_dirty_list (age = 0, not used, clean or dirty) and inactive_clean_list (age = 0, not used, clean); pages age up (+3) when used and age down (/2) otherwise, moved by refill_inactive_scan, page_launder, kreclaimd, kswapd and bdflush]
"It does not seem possible to understand the VM system in a modular way.
The interfaces between, say, the zone allocator and the swap policy are many and varied… What a nightmare" (Joe Knapka)

Windows 2000 MEMORY MANAGEMENT (Ch 8.5 [Stal05], Ch 11.5 [Tane01])

W2K: address space
• 32-bit addresses ⇒ a 4 GB virtual address space
• Page size 4 KB–64 KB (Pentium: 4 KB, Alpha: 8 KB)
[Figure: address space layout from 0x0000… to 0xFFFF…, see Fig 11.23 [Tane01]]

Linux: kernel memory allocation
• Kernel memory is locked into memory
• Dynamic allocation based on pages
  • Dynamically loadable modules go entirely into the kernel area
  • Buddy system on the pages, kmalloc()
• Additional need for small, short-term allocations → use slabs
  • One page is 'cut' into smaller areas (slabs) (Fig 8.26 [Stal05])
  • Buddy system within each page
  • x86: 32, 64, 128, 252, 508, 2040 or 4080 bytes (page size 4 KB)

W2K: paging
• No segmentation
• Variable resident set size per process
  • Start with a fixed set of frames
  • If there are many free frames, a process can get more frames
  • A greedy process is not allowed to allocate the last 512 frames
  • If free memory shrinks, the resident sets of processes can be decreased
• Demand fetch (no prefetching)
• Disk I/O uses larger blocks, 1–8 pages at a time
• Parts of the OS and the buffer pool are locked into memory
• Caching of released pages

W2K: replacement
• Balance set manager daemon
  • Checks every second whether there are enough free frames
• Working set manager
  • Runs when needed; dynamic working set size
  • Releases page frames of processes, starting with large idle processes
• Swapper daemon
  • Every 4 seconds: searches for processes all of whose threads have been passive for a long duration

W2K: paging (Pentium; Fig 11.24 [Tane01], continued on the next slide)

W2K: page frames (Fig 11.26 and Fig 11.27 [Tane01])
[Figure: page frame states – frames leave a process's working set on deallocation, or via the swapper after the process has been idle (4 s), onto the Modified or Standby lists depending on whether they were written; the modified page writer and mapped page writer clean dirty frames to disk, the zero page thread turns Free frames into Zeroed ones, and references, copy-on-write and stack growth pull frames back into working sets]
[Fig 11.28 [Tane01]: a page fault on an address not backed by the swap area is served from disk]
"The code is so complex that the designers loathe to touch parts of it for fear of breaking something somewhere." – Andrew Tanenbaum [Tane01, p. 823]
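The lecture's metric for replacement policies — the number of page faults — can be checked with a small simulator. A minimal sketch: the reference string (Belady's classic example) and the three-frame memory are illustrative choices, and OPT cheats by looking at the rest of the string, just as the slides describe the clairvoyant algorithm.

```python
def count_faults(refs, nframes, policy):
    """Count page faults for FIFO, LRU or OPT on one reference string."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            if policy == "LRU":          # keep list ordered LRU -> MRU
                frames.remove(page)
                frames.append(page)
            continue
        faults += 1
        if len(frames) == nframes:       # memory full: pick a victim
            if policy in ("FIFO", "LRU"):
                frames.pop(0)            # oldest / least recently used
            else:                        # OPT: evict page used farthest ahead
                future = refs[i + 1:]
                victim = max(frames, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
                frames.remove(victim)
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, count_faults(refs, 3, policy))
# FIFO 9, LRU 10, OPT 7 -- nothing beats the clairvoyant OPT
```

On this string LRU actually does worse than FIFO, a reminder that "number of page faults" depends heavily on the reference pattern.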
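The working-set bound quoted earlier, 1 ≤ |W(t, Δ)| ≤ min(Δ, n), can also be checked numerically. The reference string below is a made-up example; `working_set` simply collects the distinct pages in the last Δ references, which is the lecture's window definition.

```python
def working_set(refs, t, delta):
    """W(t, Δ): the distinct pages among the Δ references ending at
    time t (t counts references, 1-indexed)."""
    window = refs[max(0, t - delta):t]
    return set(window)

refs = [1, 2, 1, 3, 2, 2, 2, 4]
n = len(set(refs))                 # n = number of pages of the process
for delta in (1, 3, 8):
    ws = working_set(refs, len(refs), delta)
    assert 1 <= len(ws) <= min(delta, n)
    print(delta, sorted(ws))
```

Growing Δ can only grow the working set, but never past min(Δ, n) — which is why the slide proposes estimating a suitable Δ from the observed page fault rate rather than from the bound itself.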