Virtual Memory (Chapter 4, Tanenbaum)


Virtual Memory (Chapter 4, Tanenbaum)
Copyright © 1996–2002 Eskicioglu and Marsland (and Prentice-Hall and Paul Lu)

Introduction

So far, we separated the programmer's view of memory from that of the operating system using a mapping mechanism. This allows the OS to move user programs around and simplifies sharing of memory between them. However, we also assumed that a user program had to be loaded completely into memory before it could run.

Problem: waste of memory, because a program only needs a small amount of memory at any given time.
Solution: virtual memory; a program can run with only some of its virtual address space in main memory.

Principles of operation

The basic idea with virtual memory is to create an illusion of memory that is as large as a disk (in gigabytes) and as fast as main memory (in nanoseconds). The key principle is locality of reference, which recognizes that a significant percentage of memory accesses in a running program are made to a subset of its pages. Simply put, a running program only needs access to a portion of its virtual address space at a given time.

With virtual memory, a logical (virtual) address translates to:
· main memory (small but fast), or
· the paging device (large but slow), or
· none (not allocated, not used, free).

A virtual view

[Figure: a virtual address space (logical) mapped partly onto main memory (physical) and partly onto a paging device (backing storage), with some pages free.]

Virtual memory

The virtual memory (sub)system can be implemented as an extension of paged or segmented memory management, or sometimes as a combination of both. In this scheme, the operating system can execute a program that is only partially loaded in memory.

Note: the idea was originally explored earlier in "overlays"; with virtual memory, however, the fragmentation and its management are handled by the operating system.

Missing pages

What happens when an executing program references an address that is not in main memory? Here, hardware (H/W) and software (S/W) cooperate to solve the problem:
· The page table is extended with an extra bit, present. Initially, all the present bits are cleared (H/W and S/W).
· While doing the address translation, the MMU checks to see if this bit is set. Access to a page whose present bit is not set causes a special hardware trap, called a page fault (H/W).
· When a page fault occurs, the operating system brings the page into memory, sets the corresponding present bit, and restarts the execution of the instruction (S/W).
Most likely, the page carrying the address will be on the paging device, but possibly it does not exist at all!

Multi-level paging, revisited

[Figure: a two-level translation in which the page number is split into P1 and P2; P1 indexes the top-level page table, P2 indexes a second-level page table, and each entry carries a present bit that directs the reference either to a frame or to disk.]

Note: this example merely illustrates the present bit. The technique can be applied to any non-contiguous memory allocation scheme.
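To make the figure concrete, here is a minimal C sketch of a two-level table walk with a present bit. It is illustrative only: the 10/10/12 split of the virtual address, the type names, and the table layout are assumptions, not the scheme of any particular hardware.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define PAGE_SHIFT 12                      /* assume 4 KB pages          */
    #define LEVEL_BITS 10                      /* 10 + 10 + 12 address split */
    #define LEVEL_MASK ((1u << LEVEL_BITS) - 1)

    typedef struct {
        uint32_t frame   : 20;                 /* physical frame number */
        uint32_t present : 1;                  /* the present bit       */
    } pte_t;

    typedef struct { pte_t entry[1 << LEVEL_BITS]; }        pt_level2_t;
    typedef struct { pt_level2_t *table[1 << LEVEL_BITS]; } pt_top_t;

    /* Walk the tables as the MMU would; returning false stands in for
     * the hardware trap (page fault) that the OS must service before
     * the instruction is restarted. */
    bool translate(const pt_top_t *top, uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t p1     = (vaddr >> (PAGE_SHIFT + LEVEL_BITS)) & LEVEL_MASK;
        uint32_t p2     = (vaddr >> PAGE_SHIFT) & LEVEL_MASK;
        uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

        pt_level2_t *l2 = top->table[p1];
        if (l2 == NULL || !l2->entry[p2].present)
            return false;                      /* page fault */

        *paddr = ((uint32_t)l2->entry[p2].frame << PAGE_SHIFT) | offset;
        return true;
    }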
Page fault handling, by picture

[Figure: a memory reference goes through the page table; an invalid address raises a segmentation fault, while a reference to a non-present page (present bit 0) raises a page fault. The OS finds an empty frame, brings the missing page in from the paging device, updates the page table, and restarts execution.]

Page fault handling, by words

When a page fault occurs, the system:
· marks the current process as blocked (waiting for a page),
· finds an empty frame, or makes a frame empty, in main memory,
· determines the location of the requested page on the paging device,
· performs an I/O operation to fetch the page to main memory,
· triggers a "page fetched" event (e.g., a special form of I/O completion interrupt) to wake up the process.
Since the fourth (and occasionally the second) step involves I/O operations, it makes sense to perform this operation with a special system process (e.g., the pager process in UNIX).
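A C-style sketch of this routine follows. Every name in it (process_t, find_free_frame(), start_page_io(), and so on) is a hypothetical stand-in for real kernel machinery, kept just concrete enough to compile.

    #include <stdint.h>

    typedef struct { int id; int blocked; } process_t;   /* hypothetical */
    typedef uint64_t disk_addr_t;

    static void mark_blocked(process_t *p)       { p->blocked = 1; }
    static int  find_free_frame(void)            { return -1; /* none free   */ }
    static int  evict_victim_frame(void)         { return 3;  /* policy pick */ }
    static disk_addr_t locate_on_paging_device(process_t *p, uint32_t va)
    {
        (void)p; return va >> 12;                /* fake device location */
    }
    static void start_page_io(disk_addr_t d, int frame)
    {
        (void)d; (void)frame;                    /* queue request to the pager */
    }

    void service_page_fault(process_t *p, uint32_t vaddr)
    {
        mark_blocked(p);                         /* 1. block the process      */

        int frame = find_free_frame();           /* 2. find an empty frame... */
        if (frame < 0)
            frame = evict_victim_frame();        /*    ...or make one empty   */

        disk_addr_t where =                      /* 3. locate the page        */
            locate_on_paging_device(p, vaddr);

        start_page_io(where, frame);             /* 4. fetch it (async I/O)   */

        /* 5. The "page fetched" interrupt handler later sets the present
         *    bit, updates the page table, and wakes the process up.       */
    }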
Additional hardware support

Despite the similarities between paging or segmentation and virtual memory, there is a small but important problem that requires additional hardware support. Consider the following M68030 instruction:

    DBEQ D0, Next    ; Decrement and Branch if Equal

which can be micro-coded as: fetch the instruction; decrement D0; if D0 is zero, set PC to "Next", else increment PC. What if the instruction itself and the address Next are on two different pages, and the latter page is not in memory? Page fault... From where, and how, do we restart the instruction? (Hint: D0 is already decremented.)

Possible support

The moral of the previous example is that if we want to have complex instructions and virtual memory, we need additional support from the hardware, such as:
· Partial effects of the faulted instruction are undone, and the instruction is restarted after servicing the fault (VAX-11/780).
· The instruction resumes from the point of the fault (IBM 370).
· Before executing an instruction, make sure that all referenced addresses are available in memory (for some instructions, the CPU generates all the page faults first!).
In practice, some or all of the above approaches are combined.

Basic policies

The hardware only provides the basic capabilities for virtual memory. The operating system, on the other hand, must make several decisions:
· Allocation: how much real memory to allocate to each (ready) program?
· Fetching: when to bring pages into main memory?
· Placement: where in memory should a fetched page be loaded?
· Replacement: which page should be removed from main memory?

Allocation policy

In general, the allocation policy deals with conflicting requirements:
· The fewer the frames allocated to a program, the higher the page fault rate.
· The fewer the frames allocated to each program, the more programs can reside in memory, decreasing the need for swapping.
· Allocating additional frames to a program beyond a certain number results in little or only moderate gain in performance.
The number of allocated pages (also known as the resident set size) can be fixed or variable during the execution of a program.

Fetch policy

· Demand paging: start a program with no pages loaded and wait until it references a page; then load the page (this is the most common approach in paging systems).
· Request paging: similar to overlays; let the user identify which pages are needed (not practical: it leads to overestimation, and the user may not know what to ask for).
· Pre-paging: start with one or a few pages pre-loaded; as pages are referenced, bring in other (not yet referenced) pages too.
Opposite to fetching, the cleaning policy deals with determining when a modified (dirty) page should be written back to the paging device.

Placement policy

This policy usually follows the rules about paging and segmentation discussed earlier. Given the matching sizes of a page and a frame, placement with paging is straightforward. Segmentation requires more careful placement, especially when not combined with paging; placement in pure segmentation is an important issue and must consider "free" memory management policies. With the recent developments in non-uniform memory access (NUMA) distributed-memory multiprocessor systems, placement does become a major concern.

Replacement policy

The most studied area of memory management is the replacement policy: selecting a victim to satisfy a page fault.
· FIFO: the frames are treated as a circular list; the oldest (longest resident) page is replaced.
· LRU: the frame whose contents have not been used for the longest time is replaced.
· OPT: the page that will not be referenced again for the longest time is replaced (this requires predicting the future; purely theoretical, but useful for comparison).
· Random: a frame is selected at random.

More on replacement policy

· Replacement scope:
  – Global: select a victim among all processes.
  – Local: select a page from the faulting process.
· Frame locking: frames that belong to the resident kernel, or that are used for critical purposes, may be locked for improved performance.
· Page buffering: victim frames are grouped into two categories, those that hold unmodified (clean) pages and those that hold modified (dirty) pages (VAX/VMS uses this approach).

The clock algorithm

All the frames, along with a used bit, are kept in a circular queue.
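A minimal C sketch of the clock (second-chance) algorithm this slide begins to describe; the frame-table layout and names are assumptions.

    #include <stdbool.h>

    #define NFRAMES 8

    struct frame { int page; bool used; };   /* one used bit per frame */

    static struct frame frames[NFRAMES];     /* the circular queue     */
    static int hand = 0;                     /* the clock hand         */

    /* Sweep the hand around the queue: a set used bit buys the page a
     * second chance (the bit is cleared); the first frame found with a
     * clear bit becomes the victim. */
    int clock_select_victim(void)
    {
        for (;;) {
            if (frames[hand].used) {
                frames[hand].used = false;       /* second chance */
                hand = (hand + 1) % NFRAMES;
            } else {
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
        }
    }

The MMU sets a frame's used bit on every reference to it, so pages touched since the last sweep survive one pass of the hand.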
Recommended publications
  • Overcoming Traditional Problems with OS Huge Page Management
MEGA: Overcoming Traditional Problems with OS Huge Page Management. Theodore Michailidis, Alex Delis, Mema Roussopoulos (University of Athens, Athens, Greece).

Abstract: Modern computer systems now feature memory banks whose aggregate size ranges from tens to hundreds of GBs. In this context, contemporary workloads can and do often consume vast amounts of main memory. This upsurge in memory consumption routinely results in increased virtual-to-physical address translations and, consequently and more importantly, more translation misses. Both of these aspects collectively hamper the performance of workload execution. A solution aimed at dramatically reducing the number of address translation misses has been to provide hardware support for pages with bigger sizes, termed huge pages. In this paper, we empirically demonstrate the benefits and drawbacks of using such huge pages. In particular, we show that it is essential for modern OS to refine their software mechanisms to more effectively manage huge pages.

Keywords: huge pages, address translation, memory compaction.

ACM Reference Format: Theodore Michailidis, Alex Delis, and Mema Roussopoulos. 2019. MEGA: Overcoming Traditional Problems with OS Huge Page Management. In The 12th ACM International Systems and Storage Conference (SYSTOR '19), June 3–5, 2019, Haifa, Israel. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3319647.3325839
  • Memory Protection at Option
Memory Protection at Option: Application-Tailored Memory Safety in Safety-Critical Embedded Systems. Doctoral dissertation (Doktor-Ingenieur) submitted by Michael Stilkerich to the Faculty of Engineering of the University of Erlangen-Nürnberg, Erlangen, 2012. Submitted 09.07.2012; defended 30.11.2012. Dean: Prof. Dr.-Ing. Marion Merklein. Reviewers: Prof. Dr.-Ing. Wolfgang Schröder-Preikschat and Prof. Dr. Michael Philippsen.

Abstract: With the increasing capabilities and resources available on microcontrollers, there is a trend in the embedded industry to integrate multiple software functions on a single system to save cost, size, weight, and power. The integration raises new requirements, among them the need for spatial isolation, which is commonly established by using a memory protection unit (MPU) that can constrain access to the physical address space to a fixed set of address regions. MPU-based protection is limited in terms of available hardware, flexibility, granularity, and ease of use. Software-based memory protection can provide an alternative to, or complement, MPU-based protection, but has found little attention in the embedded domain. In this thesis, I evaluate the qualitative and quantitative advantages and limitations of MPU-based memory protection and of software-based protection based on a multi-JVM. I developed a framework composed of the AUTOSAR-OS-like operating system CiAO and KESO, a Java implementation for deeply embedded systems. The framework allows choosing from no memory protection, MPU-based protection, software-based protection, and a combination of the two.
  • Asynchronous Page Faults: AIX Did It (Gleb Natapov, Red Hat, August 10, 2010)
Asynchronous Page Faults: AIX Did It. Gleb Natapov, Red Hat, August 10, 2010.

Abstract: Host memory overcommit may cause guest memory to be swapped. When a guest vcpu accesses memory swapped out by the host, its execution is suspended until the memory is swapped back. Asynchronous page faults are a way to try to use a guest vcpu more efficiently by allowing it to execute other tasks while a page is brought back into memory.

Part I: How KVM handles guest memory, and what inefficiency it has with regard to host swapping. Guest memory is mapped into host memory, but on demand: a page fault happens on the first guest access. What happens on a page fault?
1. VMEXIT
2. kvm_mmu_page_fault()
3. gfn_to_pfn()
4. get_user_pages_fast(); if no previously mapped page and no swap entry are found, an empty page is allocated
5. the page is added into the shadow/nested page table
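The enumerated path lends itself to a schematic sketch. The function names follow the slides (underscores restored, as these functions exist in the Linux/KVM source), but the _sim bodies and types below are invented simplifications, not the real implementation.

    typedef unsigned long gfn_t;              /* guest frame number */
    typedef unsigned long pfn_t;              /* host frame number  */

    static pfn_t next_free = 100;             /* toy allocator state     */
    static pfn_t shadow_pt[4096];             /* toy shadow/nested table */

    static pfn_t allocate_empty_page(void) { return next_free++; }

    /* 4. get_user_pages_fast(): no previously mapped page and no swap
     *    entry found, so an empty page is allocated. */
    static pfn_t get_user_pages_fast_sim(gfn_t gfn)
    {
        (void)gfn;
        return allocate_empty_page();
    }

    /* 3. gfn_to_pfn(): resolve a guest frame to a host frame. */
    static pfn_t gfn_to_pfn_sim(gfn_t gfn) { return get_user_pages_fast_sim(gfn); }

    /* 2. kvm_mmu_page_fault(), reached after 1. VMEXIT. */
    static void kvm_mmu_page_fault_sim(gfn_t gfn)
    {
        shadow_pt[gfn % 4096] = gfn_to_pfn_sim(gfn);   /* 5. add to table */
    }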
  • What Is an Operating System III: 2.1 Components II
What is an Operating System III: 2.1 Components II

An operating system (OS) is software that manages computer hardware and software resources and provides common services for computer programs. The operating system is an essential component of the system software in a computer system. Application programs usually require an operating system to function.

Memory management: Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already in use by another program. Since programs time-share, each program must have independent access to memory.

Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen any more, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system.

Memory protection enables the kernel to limit a process's access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU), which doesn't exist in all computers.
  • HALO: Post-Link Heap-Layout Optimisation
HALO: Post-Link Heap-Layout Optimisation. Joe Savage, Timothy M. Jones (University of Cambridge, UK).

Abstract: Today, general-purpose memory allocators dominate the landscape of dynamic memory management. While these solutions can provide reasonably good behaviour across a wide range of workloads, it is an unfortunate reality that their behaviour for any particular workload can be highly suboptimal. By catering primarily to average and worst-case usage patterns, these allocators deny programs the advantages of domain-specific optimisations, and thus may inadvertently place data in a manner that hinders performance, generating unnecessary cache misses and load stalls. To help alleviate these issues, we propose HALO: a post-link profile-guided optimisation tool that can improve the

1 Introduction: As the gap between memory and processor speeds continues to widen, efficient cache utilisation is more important than ever. While compilers have long employed techniques like basic-block reordering, loop fission and tiling, and intelligent register allocation to improve the cache behaviour of programs, the layout of dynamically allocated memory remains largely beyond the reach of static tools. Today, when a C++ program calls new, or a C program malloc, its request is satisfied by a general-purpose allocator with no intimate knowledge of what the program does or how its data objects are used. Allocations are made through fixed, lifeless interfaces, and fulfilled by
  • Measuring Software Performance on Linux Technical Report
Measuring Software Performance on Linux. Technical Report, November 21, 2018. Martin Becker, Samarjit Chakraborty (Chair of Real-Time Computer Systems, Technical University of Munich, Munich, Germany). arXiv:1811.01412v2 [cs.PF], 20 Nov 2018.

[Figure 1: an event interaction map for a program running on an Intel Core processor on Linux, relating the OS, program, and CPU through events such as cache misses and pollution, branch mispredictions, scheduler core migrations, interrupts, context and mode switches, TLB misses, flushes, and shootdowns (including KPTI flushes), prefetching and readahead, page-cache misses, page walks, and major/minor page faults via disk I/O. Each event itself may cause processor cycles, and inhibit (−), enable (+), or modulate (⊗) others.]

Abstract: Measuring and analyzing the performance of software has reached a high complexity, caused by more advanced processor designs and the intricate interaction between user programs, the operating system, and the processor's microarchitecture. In this report, we summarize our experience of how performance characteristics of software should be measured when running on a Linux operating system and a modern processor. Our results suggest that our measurement setup has a large impact on the results. More surprisingly, however, they also suggest that the setup can be negligible for certain analysis methods. Furthermore, we found that our setup maintains significantly better performance under background load conditions, which means it can be used to improve high-performance applications.

CCS Concepts: Software and its engineering → Software performance.
  • Guide to OpenVMS Performance Management
Guide to OpenVMS Performance Management. Order Number: AA-PV5XA-TE, May 1993. This manual is a conceptual and tutorial guide for experienced users responsible for optimizing performance on OpenVMS systems. Revision/Update Information: this manual supersedes the Guide to VMS Performance Management, Version 5.2. Software Version: OpenVMS VAX Version 6.0. Digital Equipment Corporation, Maynard, Massachusetts, May 1993.

Digital Equipment Corporation makes no representations that the use of its products in the manner described in this publication will not infringe on existing or future patent rights, nor do the descriptions contained in this publication imply the granting of licenses to make, use, or sell equipment or software in accordance with the description. Possession, use, or copying of the software described in this publication is authorized only pursuant to a valid written license from Digital or an authorized sublicensor. © Digital Equipment Corporation 1993. All rights reserved. The postpaid Reader's Comments forms at the end of this document request your critical evaluation to assist in preparing future documentation. The following are trademarks of Digital Equipment Corporation: ACMS, ALL-IN-1, Bookreader, CI, DBMS, DECnet, DECwindows, Digital, MicroVAX, OpenVMS, PDP-11, VAX, VAXcluster, VAX DOCUMENT, VMS, and the DIGITAL logo. All other trademarks and registered trademarks are the property of their respective holders. This document was prepared using VAX DOCUMENT Version 2.1.

Contents: Preface. 1 Introduction to Performance Management: 1.1 Knowing Your Workload; 1.1.1 Workload Management; 1.1.2 Workload Distribution; 1.1.3 Code Sharing; 1.2 Evaluating User Complaints...
  • Virtual Memory (§5.4): Use Main Memory as a "Cache" for Secondary Storage
Chapter 5, Large and Fast: Exploiting Memory Hierarchy, Part II: Virtual Memory (§5.4).

Virtual Memory: use main memory as a "cache" for secondary (disk) storage, managed jointly by CPU hardware and the operating system (OS). Programs share main memory: each gets a private virtual address space holding its frequently used code and data, protected from other programs. The CPU and OS translate virtual addresses to physical addresses. A VM "block" is called a page; a VM translation "miss" is called a page fault.

Address Translation: fixed-size pages (e.g., 4K).

Page Fault Penalty: on a page fault, the page must be fetched from disk, which takes millions of clock cycles and is handled by OS code. Try to minimize the page fault rate: fully associative placement and smart replacement algorithms.

Page Tables: store placement information in an array of page table entries, indexed by virtual page number; a page table register in the CPU points to the page table in physical memory. If a page is present in memory, the PTE stores the physical page number, plus other status bits (referenced, dirty, ...). If the page is not present, the PTE can refer to a location in swap space on disk.

Translation Using a Page Table. Mapping Pages to Storage. Replacement and Writes.
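A small C sketch of the PTE lookup this excerpt describes; the field widths, status bits, and names are assumptions for illustration.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SHIFT 12                    /* fixed-size 4K pages */

    typedef struct {
        uint32_t present    : 1;             /* page is in physical memory */
        uint32_t referenced : 1;
        uint32_t dirty      : 1;
        uint32_t number     : 29;            /* physical page number, or a
                                                swap-space location when the
                                                page is not present        */
    } pte_t;

    /* Translate a virtual address, or report the "miss" of this cache,
     * i.e. a page fault to be handled by OS code. */
    bool vm_translate(pte_t *page_table, uint32_t vaddr, uint32_t *paddr)
    {
        pte_t *pte = &page_table[vaddr >> PAGE_SHIFT];
        if (!pte->present)
            return false;                    /* page fault: fetch from disk  */
        pte->referenced = true;              /* status bit maintained on use */
        *paddr = ((uint32_t)pte->number << PAGE_SHIFT)
               | (vaddr & ((1u << PAGE_SHIFT) - 1));
        return true;
    }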
  • Chapter 4: Memory Management and Virtual Memory (Asst. Prof. Dr. Supakit Nootyaskool)
Chapter 4: Memory Management and Virtual Memory. Asst. Prof. Dr. Supakit Nootyaskool, IT-KMITL.

Objectives: to discuss the principles of memory management; to understand why memory partitions are important for system organization; to describe the concepts of paging and segmentation.

4.1 Differences in main memory in the system. The uniprogramming system (for example, DOS) allows only one program to be present in memory at a time; all resources are provided to that one program. For example, memory holds 1) the operating system (in kernel space) and 2) a running program (in user space). The multiprogramming system differs from the above by allowing several programs to be present in memory at a time, so all resources must be organized and shared among the programs. For example, memory holds 1) the operating system (in kernel space), 2) running programs (in user space), and 3) programs waiting for signals to execute on the CPU (in user space). The multiprogramming system uses memory management to organize locations for the programs.

4.2 Memory management terms:
Frame: a fixed-length block of main memory.
Page: a fixed-length block of data that resides in secondary memory. A page of data may temporarily be copied into a frame of main memory.
Segment: a variable-length block of data that resides in secondary memory. A segment may temporarily be copied into an available region of main memory, or the segment may be divided into pages which can be individually copied into main memory.
  • Memory Management
Memory Management (CS471). These slides are created by Dr. Huang of George Mason University. Students registered in Dr. Huang's courses at GMU can make a single machine-readable copy and print a single copy of each slide for their own reference, as long as the slide contains the copyright statement and the GMU facilities are not used to produce the paper copies. Permission for any other use, either in machine-readable or printed form, must be obtained from the author in writing.

Memory: a set of data entries indexed by addresses (0000, 0001, ..., 000F). Typically the basic data unit is the byte; in 32-bit machines, 4 bytes are grouped into words. Have you seen those DRAM chips in your PC?

Logical vs. physical address space: The addresses used by the RAM chips are called physical addresses. In primitive computing devices, the address a programmer/processor uses is the actual address: when the process fetches byte 000A, the content of 000A is provided. In advanced computers, the processor operates in a separate address space, called the logical (or virtual) address space. A Memory Management Unit (MMU) is used to map logical addresses to physical addresses. Various mapping technologies exist; the MMU is a hardware component, and modern processors have their MMU on the chip (Pentium, Athlon, ...).

Continuous mapping, dynamic relocation: [Figure: the MMU adds a base register, here 4000, to each logical address.] The processor wants byte 0010; the 4010th byte is fetched.

Segmented mapping: [Figure: each segment of the virtual space maps to its own region of physical memory.] Obviously, a more sophisticated MMU is needed to implement this.

Swapping: A process can be swapped temporarily out of memory to a backing store (a hard drive), and then brought back into memory for continued execution.
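The byte-0010 example in code: a minimal sketch of dynamic relocation, assuming a single 32-bit base register (names invented).

    #include <stdint.h>

    static uint32_t mmu_base = 4000;     /* relocation (base) register */

    /* The MMU adds the base register to every logical address. */
    uint32_t relocate(uint32_t logical)
    {
        return mmu_base + logical;       /* physical address */
    }

    /* relocate(10) == 4010: the processor asks for byte 0010 and the
     * 4010th byte is fetched, as in the slide. */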
  • Operating Systems
UC Santa Barbara: Operating Systems. Christopher Kruegel, Department of Computer Science, UC Santa Barbara. http://www.cs.ucsb.edu/~chris/

Virtual Memory and Paging:
• What if a program is too big to be loaded in memory?
• What if a higher degree of multiprogramming is desirable?
• Physical memory is split into page frames; virtual memory is split into pages. The OS (with help from the hardware) manages the mapping between pages and page frames.

Mapping pages to page frames: virtual memory 64KB, physical memory 32KB, page size 4KB; hence 16 virtual memory pages and 8 physical memory pages.

Memory Management Unit: automatically performs the mapping from virtual addresses to physical addresses. Addresses are split into a page number and an offset. Page numbers are used to look up a table in the MMU with as many entries as the number of virtual pages. Each entry in the table contains a bit that states whether the virtual page is actually mapped to a physical one. If it is, the entry contains the number of the physical page used; if not, a page fault is generated and the OS has to deal with it.

Page Tables: page tables contain an entry for each virtual page. If virtual memory is big (e.g., 32-bit and 64-bit addresses), the table can become of unmanageable size. Solution: instead of keeping page tables in the MMU, move them to main memory. Problem: page tables are then consulted each time an access to memory is performed.
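The 64KB/32KB/4KB configuration above, worked as a C sketch; the mapping table contents are made up for illustration.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SIZE 4096u                  /* 4 KB pages           */
    #define VPAGES    16                     /* 64 KB virtual / 4 KB */

    /* One MMU table entry per virtual page; this particular mapping
     * is invented for the example. */
    static struct { bool mapped; uint8_t frame; } mmu[VPAGES] = {
        [0] = { true, 2 }, [1] = { true, 7 }, [5] = { true, 0 },
    };

    /* Split the address into page number and offset, then look up the
     * table; an unmapped page generates a page fault for the OS. */
    bool mmu_map(uint16_t vaddr, uint16_t *paddr)
    {
        uint16_t page   = vaddr / PAGE_SIZE; /* page number */
        uint16_t offset = vaddr % PAGE_SIZE; /* offset      */
        if (!mmu[page].mapped)
            return false;                    /* page fault  */
        *paddr = (uint16_t)(mmu[page].frame * PAGE_SIZE + offset);
        return true;
    }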
  • A Hybrid Swapping Scheme Based on Per-Process Reclaim for Performance Improvement of Android Smartphones (August 2018)
Received August 19, 2018; accepted September 14, 2018; date of publication October 1, 2018; date of current version October 25, 2018. Digital Object Identifier 10.1109/ACCESS.2018.2872794

A Hybrid Swapping Scheme Based on Per-Process Reclaim for Performance Improvement of Android Smartphones (August 2018). Junyeong Han, Sungeun Kim, Sungyoung Lee (LG Electronics, Seoul, South Korea), Jaehwan Lee, and Sung Jo Kim (School of Software, Chung-Ang University, Seoul, South Korea). Corresponding author: Sung Jo Kim. This work was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education under Grant 2016R1D1A1B03931004, and in part by the Chung-Ang University Research Scholarship Grants in 2015.

Abstract: As a way to increase the actual main memory capacity of Android smartphones, most of them make use of zRAM swapping, but this offers limited capacity gains since it utilizes main memory. Unfortunately, smartphones cannot use secondary storage as a swap space due to its long response time and wear-out problem. In this paper, we propose a hybrid swapping scheme based on per-process reclaim that supports both secondary-storage swapping and zRAM swapping. It attempts to swap out all the pages in the working set of a process to a zRAM swap space rather than killing the process selected by a low-memory killer, and to swap out the least recently used pages into a secondary-storage swap space. The main idea is that frequently swapped-in/out pages use the zRAM swap space while less frequently swapped-in/out pages use the secondary-storage swap space, in order to reduce the page operation cost.
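A toy illustration of the dispatch rule the abstract describes; this sketch is entirely hypothetical (it is not the paper's implementation, and the thresholds are invented).

    /* Hybrid swapping dispatch: frequently swapped-in/out ("hot") pages
     * go to the zRAM swap space; less frequently used pages go to the
     * secondary-storage swap space. */
    enum swap_target { SWAP_ZRAM, SWAP_STORAGE };

    struct page_info {
        unsigned long swap_count;   /* how often the page swapped in/out */
        unsigned long last_use;     /* timestamp of the last reference   */
    };

    enum swap_target choose_swap_space(const struct page_info *pg,
                                       unsigned long now)
    {
        const unsigned long HOT_SWAPS = 4;      /* invented threshold */
        const unsigned long COLD_AGE  = 10000;  /* invented threshold */

        if (pg->swap_count >= HOT_SWAPS && now - pg->last_use < COLD_AGE)
            return SWAP_ZRAM;        /* hot page: compressed RAM swap */
        return SWAP_STORAGE;         /* cold page: flash swap space   */
    }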