Demand Paging
Operating Systems: Internals and Design Principles, 6/E
William Stallings
Chapter 8: Virtual Memory
Dave Bremer, Otago Polytechnic, N.Z. ©2008, Prentice Hall

Roadmap
• Hardware and Control Structures
• Operating System Software
• UNIX and Solaris Memory Management
• Linux Memory Management
• Windows Memory Management

Key Points in Memory Management
1) Memory references are logical addresses that are dynamically translated into physical addresses at run time
– A process may be swapped in and out of main memory, occupying different regions at different times during execution
2) A process may be broken up into pieces that do not need to be located contiguously in main memory

Breakthrough in Memory Management
• If both of those two characteristics are present, then it is not necessary that all of the pages or all of the segments of a process be in main memory during execution.

Execution of a Process
• Operating system brings into main memory a few pieces of the program
• Resident set – the portion of the process that is in main memory
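The dynamic translation of logical into physical addresses mentioned under Key Points can be sketched in a few lines. The 4 KB page size and the page-table contents below are assumptions chosen for illustration; the slides do not fix concrete values.

```python
# Minimal sketch of run-time address translation in a paged system.
# The 4 KB page size and the page table contents are invented values.

PAGE_SIZE = 4096  # 4 KB pages (assumed)

# Hypothetical per-process page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    """Split a logical address into (page, offset) and map page -> frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # a KeyError here would be a page fault
    return frame * PAGE_SIZE + offset

# Page 1, offset 100 maps to frame 2, offset 100
print(translate(1 * PAGE_SIZE + 100))   # 2*4096 + 100 = 8292
```

Because the offset is untouched by translation, only the page-number bits change; this is why a process can occupy different regions of main memory at different times without its code changing.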
Execution of a Process (cont.)
• If the next instruction and the next data location are in memory, then execution can proceed – at least for a time
• An interrupt is generated when an address is needed that is not in main memory
• Operating system places the process in a blocking state

Execution of a Process (cont.)
• The piece of the process that contains the logical address is brought into main memory
– Operating system issues a disk I/O read request
– Another process is dispatched to run while the disk I/O takes place
– An interrupt is issued when the disk I/O completes, which causes the operating system to place the affected process in the Ready state

Implications of This New Strategy
• More processes may be maintained in main memory
– Only load in some of the pieces of each process
– With so many processes in main memory, it is very likely a process will be in the Ready state at any particular time
• A process may be larger than all of main memory

Real and Virtual Memory
• Real memory
– Main memory, the actual RAM
• Virtual memory
– Memory on disk
– Allows for effective multiprogramming and relieves the user of the tight constraints of main memory

Thrashing
• A state in which the system spends most of its time swapping pieces rather than executing instructions
• To avoid this, the operating system tries to guess which pieces are least likely to be used in the near future
• The guess is based on recent history

Principle of Locality
• Program and data references within a process tend to cluster
• Only a few pieces of a process will be needed over a short period of time
• Therefore it is possible to make intelligent guesses about which pieces will be needed in the future

A Process's Performance in a VM Environment
• Note that during the lifetime of the process, references are confined to a subset of pages.
• This suggests that virtual memory may work efficiently

Support Needed for Virtual Memory
• Hardware must support paging and segmentation
• Operating system must be able to manage the movement of pages and/or segments between secondary memory and main memory

Paging
• Each process has its own page table
• Each page table entry contains the frame number of the corresponding page in main memory
• Two extra bits are needed to indicate:
– whether the page is in main memory or not
– whether the contents of the page have been altered since it was last loaded

(Figures: page table address translation; two-level hierarchical page table; address translation for page tables.)

Hierarchical Page Tables
• Page tables are also stored in virtual memory
• When a process is running, part of its page table is in main memory
• A drawback of this type of page table is that its size grows in proportion to that of the virtual address space
• An alternative is the inverted page table

Inverted Page Table
• Used on the PowerPC, UltraSPARC, and IA-64 architectures
• The page number portion of a virtual address is mapped into a hash value
• The hash value points into the inverted page table
• A fixed proportion of real memory is required for the tables, regardless of the number of processes
• Each entry in the page table includes:
– Page number
– Process identifier – the process that owns this page
– Control bits – flags such as valid, referenced, etc.
– Chain pointer – the index value of the next entry in the chain
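A minimal sketch of an inverted page table lookup, using the entry fields above. The hash function, table size, and contents are toy values for illustration; they are not the real PowerPC or IA-64 formats.

```python
# Sketch of an inverted page table: one entry per physical frame,
# located via a hash of (page number, process id) plus chain pointers.
# Hash function, sizes, and contents are invented for illustration.

N_FRAMES = 8

table = [None] * N_FRAMES         # indexed by frame number
hash_anchor = [None] * N_FRAMES   # hash value -> first frame in the chain

def h(page, pid):
    return (page ^ pid) % N_FRAMES        # toy hash function

def insert(page, pid, frame):
    slot = h(page, pid)
    # Entry fields: page number, process identifier, valid bit, chain pointer
    table[frame] = {"page": page, "pid": pid, "valid": True,
                    "chain": hash_anchor[slot]}
    hash_anchor[slot] = frame             # prepend to the hash chain

def lookup(page, pid):
    """Return the frame holding (pid, page), or None (a page fault)."""
    frame = hash_anchor[h(page, pid)]
    while frame is not None:              # follow the chain pointers
        e = table[frame]
        if e["valid"] and e["page"] == page and e["pid"] == pid:
            return frame
        frame = e["chain"]
    return None

insert(page=3, pid=1, frame=5)
insert(page=11, pid=1, frame=2)   # hashes to the same slot: chained
print(lookup(3, 1))               # frame 5, found by walking the chain
```

The table has one entry per frame rather than one per virtual page, which is why its size is fixed regardless of how many processes exist or how large their address spaces are.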
Translation Lookaside Buffer (TLB)
• Each virtual memory reference can cause two physical memory accesses
– One to fetch the page table entry
– One to fetch the data
• To overcome this problem, a high-speed cache is set up for page table entries
– Called a Translation Lookaside Buffer (TLB)
– Contains the page table entries that have been most recently used

TLB Operation
• Given a virtual address, the processor first examines the TLB
• If the page table entry is present (a TLB hit), the frame number is retrieved and the real address is formed
• If the page table entry is not found in the TLB (a TLB miss), the page number is used to index the process page table
– The OS first checks whether the page is already in main memory
– If it is not in main memory, a page fault is issued
• The TLB is updated to include the new page table entry

(Figures: TLB operation; TLB and cache operation.)

Associative Mapping
• As the TLB only contains some of the page table entries, we cannot simply index into the TLB based on the page number
– Each TLB entry must include the page number as well as the complete page table entry
• The processor is able to simultaneously query numerous TLB entries to determine whether there is a page number match

Page Size
• The smaller the page size, the less internal fragmentation
• But the smaller the page size, the more pages are required per process
– More pages per process means larger page tables
– Larger page tables mean that a large portion of the page tables is in virtual memory
• A further complication: secondary memory is designed to transfer large blocks of data efficiently, so a large page size is better

Page Size and Page Faults
• With a small page size, a large number of pages will be found in main memory
• As time goes on during execution, the pages in memory will all contain portions of the process near recent references; page faults are low
• An increased page size causes pages to contain locations further from any recent reference; page faults rise
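The TLB hit/miss path described earlier in this section can be sketched with a small software cache standing in for the associative hardware. The page table, TLB size, and least-recently-used eviction choice are assumptions for illustration.

```python
# Sketch of the TLB lookup path: check the TLB first, fall back to the
# process page table on a miss, then update the TLB. The tiny sizes and
# LRU eviction are illustrative stand-ins for associative hardware.

from collections import OrderedDict

page_table = {0: 9, 1: 4, 2: 7, 3: 1}   # hypothetical process page table
TLB_SIZE = 2
tlb = OrderedDict()                      # page -> frame, LRU order
stats = {"hit": 0, "miss": 0}

def frame_for(page):
    if page in tlb:                      # TLB hit: real address formed at once
        stats["hit"] += 1
        tlb.move_to_end(page)
        return tlb[page]
    stats["miss"] += 1                   # TLB miss: index the page table
    frame = page_table[page]             # (a miss here would be a page fault)
    tlb[page] = frame                    # update TLB with the new entry
    if len(tlb) > TLB_SIZE:
        tlb.popitem(last=False)          # evict the least recently used entry
    return frame

for p in [0, 1, 0, 2, 0]:
    frame_for(p)
print(stats)   # {'hit': 2, 'miss': 3}
```

Even this toy version shows the point of the TLB: repeated references to the same page (locality again) avoid the second memory access for the page table entry.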
(Figure: page size example.)

Segmentation
• Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
– Segments may be unequal and dynamic in size
– Simplifies the handling of growing data structures
– Allows programs to be altered and recompiled independently
– Lends itself to sharing data among processes
– Lends itself to protection

Segment Organization
• Each segment table entry contains the starting address of the corresponding segment in main memory
• Each entry also contains the length of the segment
• A bit is needed to determine whether the segment is already in main memory
• Another bit is needed to determine whether the segment has been modified since it was loaded into main memory

(Figures: segment table entries; address translation in segmentation.)

Combined Paging and Segmentation
• Paging is transparent to the programmer
• Segmentation is visible to the programmer
• Each segment is broken into fixed-size pages

(Figure: address translation in combined paging and segmentation.)

Protection and Sharing
• Segmentation lends itself to the implementation of protection and sharing policies.
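Address translation under the segment organization above (base address, length, present bit) might look like the following sketch; the segment-table contents are invented, and the length check shows how an out-of-bounds access is caught.

```python
# Sketch of segmented address translation. Each entry carries a present
# bit, a base address, and a length; contents are invented values.

segment_table = [
    {"present": True,  "base": 1000, "length": 500},
    {"present": True,  "base": 4000, "length": 200},
    {"present": False, "base": None, "length": 300},
]

def seg_translate(seg, offset):
    e = segment_table[seg]
    if not e["present"]:
        # segment fault: the OS must bring the segment into main memory
        raise LookupError("segment not in main memory")
    if offset >= e["length"]:
        # the length field lets hardware reject inadvertent accesses
        raise MemoryError("protection violation: offset beyond segment")
    return e["base"] + offset

print(seg_translate(0, 100))   # base 1000 + offset 100 = 1100
```

Unlike the paging sketch earlier, the bounds check is explicit here: because segments have meaningful lengths, the hardware can enforce protection per segment rather than per fixed-size page.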
• As each entry has a base address and a length, inadvertent memory access can be controlled
• Sharing can be achieved by a segment being referenced by multiple processes

(Figure: protection relationships.)

Roadmap
• Hardware and Control Structures
• Operating System Software
• UNIX and Solaris Memory Management
• Linux Memory Management
• Windows Memory Management

Key Design Elements
• Whether or not to use virtual memory techniques
• The use of paging, segmentation, or both
• The algorithms employed for various aspects of memory management
• Key aim: minimise page faults
– There is no definitive best policy

Fetch Policy
• Determines when a page should be brought into memory
• Two main types: demand paging and prepaging
• Demand paging
– Only brings pages into main memory when a reference is made to a location on the page
– Many page faults when a process is first started
• Prepaging
– Brings in more pages than needed
– More efficient to bring in pages that reside contiguously on the disk
– Don't confuse prepaging with "swapping"

Placement Policy
• Determines where in real memory a process piece is to reside
• Important in a segmentation system
• In paging, or paging combined with segmentation, the hardware performs the address translation

Replacement Policy
• When all of the frames in main memory are occupied and it is necessary to bring in a new page, the replacement policy determines which page currently in memory is to be replaced
• Which page is replaced? The page removed should be the one least likely to be referenced in the near future
– How is that determined?

Replacement Policy: Frame Locking
• If a frame is locked, it may not be replaced. Locked frames include:
– the kernel of the operating system
– key control structures
– I/O buffers
• A lock bit is associated with each frame

Basic Replacement Algorithms
• How is the page least likely to be referenced determined? The principle of locality again
• Most policies predict future behavior on the basis of past behavior
• Certain basic algorithms are used for the selection of a page to replace; they include:
– Optimal
– Least recently used (LRU)
– First-in-first-out (FIFO)
– Clock

Example Page Address Stream
• The examples of these policies use a page address stream formed by executing a program:
– 2 3 2 1 5 2 4 5 3 2 5 2
• This means that the first page referenced is 2, the second page referenced is 3, and so on.

Optimal Policy
• Selects for replacement the page for which the time until the next reference is the longest
• But it is impossible to have perfect knowledge of future events
• On the example stream, the optimal policy produces three page faults after the frame allocation has been filled.

(Figure: optimal policy example.)
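The optimal policy on the example stream can be simulated directly. The three-frame allocation is an assumption chosen to match the stated result; the simulation counts only faults occurring after the frames have first been filled, as the example does.

```python
# Simulation of the optimal (OPT) replacement policy: on a fault, evict
# the resident page whose next reference lies farthest in the future
# (or that is never referenced again). Three frames are assumed.

def opt_faults(stream, n_frames):
    """Return page faults after the frame allocation has been filled."""
    frames, faults = [], 0
    for i, page in enumerate(stream):
        if page in frames:
            continue                      # hit: no fault
        if len(frames) < n_frames:
            frames.append(page)           # frames still filling: not counted
            continue
        faults += 1
        future = stream[i + 1:]
        # victim: page with the farthest next use, never-used ranks last
        victim = max(frames, key=lambda f: future.index(f)
                     if f in future else len(future) + 1)
        frames[frames.index(victim)] = page
    return faults

stream = [2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2]
print(opt_faults(stream, 3))   # 3 -- matches "three page faults"
```

OPT needs the whole future reference string, so it cannot be implemented; it serves as the yardstick that the realizable policies (LRU, FIFO, Clock) are measured against on this same stream.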