Operating Systems I: Swapping


Swapping

- Active processes use more physical memory than the system has
- Swap a process out to the backing store (swap space), then back in
- Address binding can be fixed or relocatable at runtime
- [Figure: processes P1 and P2 moved between main memory (with resident OS) and the backing store]

Swapping Performance

- Consider a 100 KB process, a 1 MB/s disk, and an 8 ms seek
  - 100 ms transfer + 8 ms seek = 108 ms; x2 for swap out plus swap in = 216 ms
  - if swapping is used for context switches, you want a large quantum!
- Small processes swap faster
- Pending I/O (DMA): don't swap the process, or DMA into OS buffers instead
- Swap space lives on a special disk
- Unix uses a swapping variant

Motivation

- Logical address space larger than physical memory: "virtual memory"
- An abstraction for the programmer
- Is performance OK? Much of an address space goes unreferenced:
  - error-handling code is rarely executed
  - maximum-size arrays are rarely filled
- Each process has a "too large" address space => demand paging

Demand Paging

- Less I/O needed
- Less memory needed
- Faster response
- More users
- No pages in memory initially: pure demand paging
- [Figure: pages A1-A3 and B1-B2 of two processes paged between logical memory and main memory]

Paging Implementation: Validation Bit

- Each page-table entry carries a valid/invalid bit (v = in memory, with a frame number; i = not in memory)
- [Figure: logical memory pages 0-3, a page table holding frame numbers plus v/i bits, and physical memory]

Page Fault

- A reference to a page not in memory traps to the OS => page fault
- The OS looks in the table:
  - invalid reference? => abort
  - valid but not in memory? => bring it in
- Get an empty frame (from the free list)
- Swap the page into the frame
- Reset the tables (valid bit = 1)
- Restart the instruction

Performance of Demand Paging

- Page-fault rate p, with 0 <= p <= 1 (from no page faults to every reference faulting)
- Effective access time (EAT) = (1 - p) x (memory access) + p x (page-fault overhead)
- Page-fault overhead = swap page out + swap page in + restart

Performance Example

- memory access time = 100 ns
- page-fault overhead = 25 ms
- page-fault rate = 1/1000
- EAT = (1 - p) x 100 + p x 25,000,000 = 100 + 24,999,900 x p
  - at p = 1/1000, EAT ≈ 25 microseconds!
- Want less than 10% degradation:
  - 110 > 100 + 24,999,900 x p
  - 10 > 24,999,900 x p
  - p < 0.0000004, i.e., about 1 fault in 2,500,000 accesses!
- (The first C sketch below works through this arithmetic.)

Page Replacement

- Page fault, but what if there are no free frames?
  - terminate the user process (ugh!)
  - swap out a whole process (reduces the degree of multiprogramming)
  - replace some other page with the needed page
- Page replacement:
  - if there is a free frame, use it
  - otherwise use an algorithm to select a victim frame
  - write the victim to disk, updating the tables
  - read in the new page
  - restart the process
- "Dirty" bit: if the victim was never modified, skip the page-out
- [Figure: steps (0)-(4) of a replacement — the victim's page-table entry is invalidated, the new page is loaded into the freed frame, and its entry is marked valid]

Page Replacement Algorithms

- Every system has its own
- Want the lowest page-fault rate
- Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults
- Example: 1,2,3,4,1,2,5,1,2,3,4,5

First-In-First-Out (FIFO)

- Reference string 1,2,3,4,1,2,5,1,2,3,4,5
- 3 frames per process: 9 page faults
- 4 frames per process: 10 page faults! More frames, more faults: Belady's anomaly
- (The second C sketch below simulates this.)

Optimal

- Replace the page that will not be used for the longest period of time
- 4 frames per process: 6 page faults on the same string
- How do we know the future? We can't: use Optimal as a benchmark
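To make the Performance Example concrete, here is a minimal C sketch of the EAT arithmetic — my own illustration, not code from the slides — that evaluates the formula at p = 1/1000 and solves for the fault rate that keeps degradation under 10%:

    #include <stdio.h>

    /* EAT for demand paging, using the Performance Example's numbers:
       100 ns memory access, 25 ms page-fault overhead.  Times in ns. */
    int main(void)
    {
        double mem_access = 100.0;          /* ns */
        double fault_overhead = 25e6;       /* 25 ms = 25,000,000 ns */
        double p = 1.0 / 1000.0;            /* page-fault rate */

        double eat = (1 - p) * mem_access + p * fault_overhead;
        printf("EAT at p=1/1000: %.1f ns (~25 us)\n", eat);

        /* Under 10%% degradation:
           110 > 100 + 24,999,900 * p  =>  p < 10 / 24,999,900 */
        double p_max = (1.10 * mem_access - mem_access)
                     / (fault_overhead - mem_access);
        printf("p for <10%% slowdown: %.10f (1 fault per %.0f accesses)\n",
               p_max, 1.0 / p_max);
        return 0;
    }

Compiled with any C compiler, it prints roughly 25,100 ns and a limit of about one fault per 2.5 million accesses, matching the slide.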
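The FIFO counts (and Belady's anomaly) can be checked the same way. In this sketch, fifo_faults is an illustrative helper name of my own; it replays the slide's reference string against a fixed number of frames:

    #include <stdio.h>
    #include <string.h>

    /* Count page faults for FIFO replacement on a reference string. */
    static int fifo_faults(const int *refs, int n, int nframes)
    {
        int frames[16];
        int next = 0, faults = 0;       /* next = oldest (FIFO) slot */
        memset(frames, -1, sizeof frames);

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < nframes; f++)
                if (frames[f] == refs[i]) { hit = 1; break; }
            if (!hit) {                 /* page fault: evict the oldest */
                frames[next] = refs[i];
                next = (next + 1) % nframes;
                faults++;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); /* 10 */
        return 0;
    }

It reports 9 faults with 3 frames and 10 with 4: adding a frame made FIFO worse on this string, which is exactly Belady's anomaly.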
Least Recently Used (LRU)

- Replace the page that has not been used for the longest period of time
- Reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 4 frames: 8 page faults
- No Belady's anomaly: LRU is a "stack" algorithm — the pages held in N frames are always a subset of the pages held in N+1 frames

LRU Implementation

- Counter implementation
  - every page has a counter; whenever the page is referenced, copy the clock into its counter
  - when a page must be replaced, compare the counters to find the least recently used
- Stack implementation
  - keep a stack of page numbers
  - on reference, move the page to the top
  - no search needed at replacement time

LRU Approximations

- LRU is good, but full hardware support is expensive
- Some hardware support via a reference bit
  - kept with each page, initially 0
  - set to 1 when the page is referenced
  - replace a page whose bit is 0 (in no particular order)
  - enhancement: keep 8 bits per page and shift them periodically, approximating LRU

Second-Chance

- FIFO replacement, but:
  - take the first page in FIFO order
  - look at its reference bit:
    - bit == 0: replace this page
    - bit == 1: set the bit to 0 and move on to the next page in FIFO order
- A page referenced often enough is never replaced
- Implemented with a circular queue (a clock); if all bits are 1, it degenerates to FIFO
- [Figure: circular queue of frames with reference bits, (a) before and (b) after the next-victim hand sweeps, clearing bits as it passes]
- (The first C sketch below outlines the clock sweep.)

Enhanced Second-Chance

- Use 2 bits per page: a reference bit and a modify bit
- (0,0) neither recently used nor modified: the best page to replace
- (0,1) not recently used but modified: needs a write-out
- (1,0) recently used but clean: probably used again soon
- (1,1) recently used and modified: used again soon, and needs a write-out
- Keep a circular queue in each class (used on the Macintosh)

Counting Algorithms

- Keep a counter of the number of references to each page
  - LFU: replace the page with the smallest count
    - a page used heavily only at the start would never be replaced, so decay the counts by shifting
  - MFU: a page with the smallest count was probably just brought in and will be used
- Not too common (expensive) and not too good

Page Buffering

- Keep a pool of free frames
  - start the faulting process immediately, before the victim is written out; write the victim out when the system is idle
  - keep a list of modified pages and write them out when the system is idle
  - remember the contents of free frames: on a page fault, check the pool first

Allocation of Frames

- How many frames does each process get?
- Two allocation schemes:
  - fixed allocation
  - priority allocation

Fixed Allocation

- Equal allocation: treat processes equally
  - ex: 93 frames, 5 processes => 18 frames per process (3 kept in the pool)
- Proportional allocation: frames proportional to process size
  - ex: 64 frames, s1 = 10, s2 = 127
    - f1 = 10/137 x 64 ≈ 5
    - f2 = 127/137 x 64 ≈ 59
  - (the second C sketch below computes this)

Priority Allocation

- Use a proportional scheme based on priority rather than size
- If a process generates a page fault, select a replacement frame from a process with lower priority
- "Global" versus "local" replacement
  - local is consistent (not influenced by other processes)
  - global is more efficient (and more often used)

Thrashing

- If a process does not have "enough" pages, its page-fault rate is very high
  - CPU utilization drops
  - the OS concludes it should increase the degree of multiprogramming
  - and adds another process to the system, making things worse
- Thrashing: processes are busy swapping pages in and out instead of running
- [Figure: CPU utilization versus degree of multiprogramming — utilization climbs, then collapses once thrashing sets in]

Cause of Thrashing

- Why does paging work? The locality model
  - a process migrates from one locality to another
  - localities may overlap
- Why does thrashing occur? The sum of the localities exceeds the total memory size
- How do we fix it? The Working-Set Model, or Page-Fault Frequency
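The second-chance sweep is easy to misread in prose, so here is a schematic C version. The names (frame_t, choose_victim, hand) are illustrative, not any real kernel's code, and in practice the MMU sets the reference bits in hardware:

    #include <stdio.h>

    /* One physical frame as seen by the replacement algorithm. */
    typedef struct {
        int page;           /* page currently held in this frame */
        unsigned char ref;  /* reference bit, set on access */
    } frame_t;

    /* Sweep the clock hand: clear the bit of each referenced frame
       (its second chance) until an unreferenced frame is found. */
    static int choose_victim(frame_t *frames, int nframes, int *hand)
    {
        for (;;) {
            int idx = *hand;
            *hand = (*hand + 1) % nframes;   /* advance the clock hand */
            if (frames[idx].ref == 0)
                return idx;                  /* not recently used: victim */
            frames[idx].ref = 0;             /* referenced: second chance */
        }
    }

    int main(void)
    {
        frame_t frames[4] = {{10,1},{11,0},{12,1},{13,1}};
        int hand = 0;
        int v = choose_victim(frames, 4, &hand);
        printf("victim: frame %d (page %d)\n", v, frames[v].page); /* frame 1 */
        return 0;
    }

If every reference bit is 1, the hand clears them all the way around and comes back to evict the frame it started from — the degeneration to pure FIFO noted above.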
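And a tiny C calculation of the proportional-allocation example. Rounding to the nearest frame is my assumption, chosen so the slide's 5/59 split falls out:

    #include <stdio.h>

    /* Proportional allocation: f_i = (s_i / S) * m, with the slide's
       numbers (m = 64 frames, s1 = 10, s2 = 127, so S = 137). */
    int main(void)
    {
        int m = 64;
        int s[] = {10, 127};
        int S = s[0] + s[1];
        for (int i = 0; i < 2; i++) {
            int fi = (s[i] * m + S / 2) / S;   /* rounded to nearest */
            printf("f%d = %d/%d x %d = %d frames\n", i + 1, s[i], S, m, fi);
        }
        return 0;
    }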
Working-Set Model

- Working-set window T = a fixed number of page references
- The working set W = the set of pages referenced within the most recent window T
- D = the sum of the sizes of all processes' working sets
- If T is too small, W will not encompass a whole locality
- If T is too large, W will encompass several localities
- As T => infinity, W encompasses the entire program
- If D > m (the total number of frames) => thrashing, so suspend a process
- Modify an LRU approximation to track the working set

Working Set Example

- T = 5
- Reference string: 1 2 3 2 3 1 2 4 3 4 7 4 3 3 4 1 1 2 2 2 1
- At successive points the working set is W = {1,2,3}, then W = {3,4,7}, then W = {1,2}
- (The first C sketch at the end of this section computes these.)

Page-Fault Frequency

- Establish an "acceptable" page-fault rate with upper and lower bounds
  - if the rate falls below the lower bound, the process loses a frame
  - if the rate rises above the upper bound, the process gains a frame
- [Figure: page-fault rate versus number of frames, with the upper bound triggering "increase number of frames" and the lower bound "decrease number of frames"]

Prepaging

- Pure demand paging has many page faults initially
  - use the working set to prepage
  - but does the cost of prepaging unused pages outweigh the cost of faulting them in?

Page Size

- Old: the page size was fixed. New: choose a page size
- How do we pick the right page size? Tradeoffs:
  - fragmentation
  - page-table size
  - minimizing I/O
    - transfer time is small (~0.1 ms) while latency + seek time is large (~10 ms)
  - locality
    - smaller pages give finer resolution but more faults
    - ex: a 200 KB process with half its space used — one 200 KB page costs 1 fault but wastes 100 KB; 1-byte pages waste nothing but cost 100K faults
- Historical trend toward larger page sizes
  - CPUs and memory have sped up proportionally more than disks

Program Structure

- Consider:

      int A[1024][1024];
      for (j = 0; j < 1024; j++)
          for (i = 0; i < 1024; i++)
              A[i][j] = 0;

- Suppose the process has 1 frame and each row occupies 1 page
  - => 1024 x 1024 page faults!
- Exchanging the loops:

      int A[1024][1024];
      for (i = 0; i < 1024; i++)
          for (j = 0; j < 1024; j++)
              A[i][j] = 0;

  - => only 1024 page faults
- (The second C sketch at the end of this section simulates both.)
- stack vs. hash table: data structures differ in locality
- Compiler and linker can help
  - separate code from data
  - keep routines that call each other together
- LISP (pointers) vs. Pascal (no pointers)

Priority Processes

- Consider:
  - a low-priority process faults and brings a page in
  - it then waits in the ready queue for a while as a high-priority process runs
  - the high-priority process faults
  - the low-priority page is clean and has not been used in a while => it becomes the victim
- Fix: a lock bit (as used for I/O) protects the page until it has been used once

Real-Time Processes

- Real-time means bounds on delay
  - hard real-time: missed deadlines crash systems or cost lives (air-traffic control, factory automation)
  - soft real-time: the application merely degrades (audio, video)
- Paging adds unexpected delays
  - so don't page real-time processes: use lock bits

Virtual Memory and WinNT

- Page replacement algorithm: FIFO
  - on a fault, bring in the missing page plus adjacent pages
- Working set
  - default is 30 pages
  - take a victim frame periodically
  - if no fault occurs, reduce the working-set size by 1
- Reserve pool
  - distinguishes hard page faults from soft page faults
- Shared pages
  - a level of indirection for easier updates; the same virtual entry is shared
- Page file
  - stores only modified logical pages
  - code and memory-mapped files are already on disk

Virtual Memory and Linux

- Regions of virtual memory backed by:
  - the paging disk (normal pages)
  - a file (text segment, memory-mapped file)
- New virtual memory
  - exec() creates a new page table
  - fork() copies the page table
    - both processes reference the common pages
    - if a page is written, it is then copied (copy-on-write)
- Page replacement algorithm: second chance (with more bits)

Application Performance Studies: "Demand Paging in Windows NT"

- Mikhail Mikhailov, Ganga Kannan, Divya Prakash, Mark Claypool, David Finkel, Saqib Syed, Sujit Kumar — WPI and BMC Software, Inc.

Capacity Planning Then and Now

- Capacity planning in the good old days
  - it used to be just mainframes
  - simple CPU-load-based queuing theory
- Capacity planning today
  - distributed systems
  - networks of workstations
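First C sketch: computing the working sets of the Working Set Example above (T = 5 and the slide's reference string); the variable names and loop structure are my own illustration:

    #include <stdio.h>

    /* Working set at each point: the distinct pages among the last
       T references of the slide's reference string. */
    int main(void)
    {
        int refs[] = {1,2,3,2,3,1,2,4,3,4,7,4,3,3,4,1,1,2,2,2,1};
        int n = sizeof refs / sizeof refs[0], T = 5;

        for (int i = T - 1; i < n; i++) {
            int ws[8], k = 0;                       /* distinct pages */
            for (int j = i - T + 1; j <= i; j++) {  /* scan the window */
                int seen = 0;
                for (int x = 0; x < k; x++)
                    if (ws[x] == refs[j]) seen = 1;
                if (!seen) ws[k++] = refs[j];
            }
            printf("t=%2d  W={", i);
            for (int x = 0; x < k; x++)
                printf("%s%d", x ? "," : "", ws[x]);
            printf("}\n");
        }
        return 0;
    }

The windows covering 1 2 3 2 3, 7 4 3 3 4, and the final 1 2 2 2 1 print W = {1,2,3}, {3,4,7}, and {1,2}, as marked on the slide.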
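Second C sketch: the Program Structure example simulated under the slide's assumptions (one frame, one row per page) — a fault is counted whenever the accessed row changes. A minimal sketch, not a measurement of real hardware:

    #include <stdio.h>

    #define N 1024   /* A[N][N], one row (N ints) per page */

    /* good = 1 models the i-outer loop (row fixed in the inner loop);
       good = 0 models the j-outer loop (row changes on every access). */
    static long count_faults(int good)
    {
        long faults = 0;
        int resident = -1;              /* row (page) now in the one frame */
        for (int outer = 0; outer < N; outer++)
            for (int inner = 0; inner < N; inner++) {
                int row = good ? outer : inner;
                if (row != resident) { resident = row; faults++; }
            }
        return faults;
    }

    int main(void)
    {
        printf("j-outer loop: %ld faults\n", count_faults(0)); /* 1024*1024 */
        printf("i-outer loop: %ld faults\n", count_faults(1)); /* 1024      */
        return 0;
    }

It prints 1,048,576 faults for the column-order loop and 1,024 for the row-order loop — the two counts from the slides.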