Topic 8: Memory Management

Tongping Liu, University of Massachusetts Amherst

Objectives
• To describe various ways of organizing memory hardware
• To discuss various memory management techniques, including partitions, swapping and paging
• To provide a detailed description of paging and virtual memory techniques

Outline
• Background
• Segmentation
• Swapping
• Paging System

Memory Hierarchy
• The CPU can directly access only main memory and registers, so programs and data must be brought from disk into memory
• Memory access is the bottleneck
  – The cache sits between memory and registers
• Memory hierarchy:
  – Cache: small, fast, expensive; SRAM
  – Main memory: medium-speed, not that expensive; DRAM
  – Disk: many gigabytes, slow, cheap, non-volatile storage
[Figure: CPU (processor/ALU) ↔ internal memory (with cache) ↔ I/O devices (disks)]

Main Memory
• The ideal main memory is
  – Very large
  – Very fast
  – Non-volatile (doesn’t go away when power is turned off)
• The real main memory is
  – Not very large
  – Not very fast
  – Affordable ⇒ pick any two…
• Memory management: make the real world look as much like the ideal world as possible

Background about Memory
• Memory is an array of words containing program instructions and data
• How do we execute a program? Fetch an instruction → decode → possibly fetch operands → execute → possibly store results
• Memory hardware sees a stream of ADDRESSES

Simple Memory Management
• Management for a single process
  – Memory can be divided simply
[Figure: three single-process layouts from address 0 to 0xFFFF: (a) OS in RAM at the bottom, user program above; (b) OS in ROM at the top, user program in RAM below; (c) device drivers in ROM at the top, user program in the middle, OS in RAM at the bottom]
• Memory protection may not be an issue (there is only one program)
• Flexibility may still be useful (allow OS changes, etc.)

Real Memory Management
• Multiple processes need the CPU
  – More processes will increase CPU utilization
  – At 20% I/O wait, 3–4 processes fully utilize the CPU
  – At 80% I/O wait, even 10 processes aren’t enough
[Figure: CPU utilization vs. degree of multiprogramming (0–10), plotted for 20%, 50% and 80% I/O wait]
• More processes ⇒ memory management
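These utilization figures follow the standard analytic model, which is implicit in the slide’s graph rather than stated on it: if each process spends a fraction p of its time waiting for I/O, then with n independent processes the CPU is idle only when all n are waiting, giving utilization 1 − pⁿ. A minimal sketch that reproduces the three curves:

```c
#include <math.h>
#include <stdio.h>

/* CPU utilization under multiprogramming: with n processes, each
 * waiting on I/O a fraction p of the time, the CPU is idle only
 * when all n wait at once, so utilization = 1 - p^n. */
int main(void) {
    const double io_wait[] = {0.20, 0.50, 0.80};
    for (int i = 0; i < 3; i++) {
        double p = io_wait[i];
        printf("I/O wait %.0f%%:", p * 100);
        for (int n = 1; n <= 10; n++)
            printf(" %.2f", 1.0 - pow(p, n));
        printf("\n");
    }
    return 0;
}
```

With p = 0.2 this gives about 0.99 at n = 3 and 0.998 at n = 4, matching the slide’s “3–4 processes”; with p = 0.8, even n = 10 reaches only about 0.89.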
Purpose of Memory Management
• Keeping multiple processes in memory is essential to improving CPU utilization
• Manage and protect main memory while sharing it among multiple processes, with these requirements:
  – Relocation
  – Protection
  – Sharing

Memory Management
• Relocation:
  – Assign load addresses to the position-dependent code and data of a program
• Protection:
  – A program cannot access other programs’ memory; it accesses only locations that have been allocated to it
• Sharing:
  – Allow several processes to access the same memory

Partitioning
• Partition: divide memory into partitions, and dynamically assign them to different processes
  – Fixed Partitioning
  – Dynamic Partitioning
  – Segmentation
  – Paging

Fixed Partitioning
• Equal-size partitions
  – Any process whose size is less than or equal to the partition size can be loaded into an available partition
• The operating system can swap a process out of a partition

Fixed Partitioning Problems
• A program may not fit in a partition
  – The programmer must design the program with overlays
• Main memory use is inefficient
  – Any program, no matter how small, occupies an entire partition
  – This results in internal fragmentation

Dynamic Partitioning
• Unequal-size partitioning
• Reduces both problems, but doesn’t solve them completely
• In the accompanying figure (not reproduced here):
  – Smaller programs can be placed in smaller partitions, reducing internal fragmentation
  – Programs larger than 16M cannot be accommodated without overlays

Dynamic Partitioning
• Assign each process to the smallest partition it fits in
  – Minimizes internal fragmentation
• Separate input queues for each partition, or a single input queue
[Figure: two layouts with partitions at 100K, 500K, 600K, 700K and 900K above the OS: one with a separate input queue per partition, one with a single shared queue]

Memory Management Policies
• Best-fit algorithm
  – Chooses the block that is closest in size to the request
  – Worst performer overall
  – Since the smallest adequate block is found for a process, the smallest amount of fragmentation is left over
  – Memory compaction must be done more often
• First-fit algorithm
  – Scans memory from the beginning and chooses the first available block that is large enough
  – Fastest
  – May leave many processes loaded in the front end of memory that must be searched over when trying to find a free block
• Next-fit
  – Scans memory from the location of the last placement
  – More often allocates a block at the end of memory, where the largest block is found
  – The largest block of memory is broken up into smaller blocks
  – Compaction is required to obtain a large block at the end of memory
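The three placement policies above are easy to state in code. Below is a hedged sketch (not from the slides; the array-based free list and all names are invented for illustration) of first-fit, best-fit and next-fit over a list of free holes:

```c
#include <stddef.h>
#include <stdio.h>

/* A free hole in memory: starting address and size, both in bytes. */
struct hole { size_t start, size; };

/* First-fit: scan from the beginning, take the first hole big enough. */
int first_fit(const struct hole *h, int n, size_t req) {
    for (int i = 0; i < n; i++)
        if (h[i].size >= req) return i;
    return -1;                                 /* no hole large enough */
}

/* Best-fit: scan everything, take the smallest hole that still fits. */
int best_fit(const struct hole *h, int n, size_t req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= req && (best < 0 || h[i].size < h[best].size))
            best = i;
    return best;
}

/* Next-fit: like first-fit, but resume from where the last search
 * stopped (the roving index *last), wrapping around at most once. */
int next_fit(const struct hole *h, int n, size_t req, int *last) {
    for (int k = 0; k < n; k++) {
        int i = (*last + k) % n;
        if (h[i].size >= req) { *last = i; return i; }
    }
    return -1;
}

int main(void) {
    struct hole h[] = { {0, 100}, {200, 500}, {800, 120} };
    int last = 0;
    printf("first-fit(110) -> hole %d\n", first_fit(h, 3, 110));       /* 1 */
    printf("best-fit(110)  -> hole %d\n", best_fit(h, 3, 110));        /* 2 */
    printf("next-fit(110)  -> hole %d\n", next_fit(h, 3, 110, &last)); /* 1 */
    return 0;
}
```

On a real allocation the chosen hole would be split and the remainder kept on the free list; best-fit tends to leave the smallest, often unusable, remainders, which is why the slides call it the worst performer overall.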
Protection via Base and Limit Registers
• A pair of base and limit registers defines the logical address range of a process
• Every memory access is checked by hardware to ensure it falls within that range

Dynamic Partitioning
• Typically, a whole process is placed into a single partition
  – Which is not very convenient
• Segmentation aims to solve this issue

Outline
• Background
• Segmentation
• Swapping
• Paging System

Segmentation
• Segmentation is a technique for breaking memory up into logical pieces
• Each “piece” is a group of related information
  – data segments for each process
  – code segments for each process
  – data segments for the OS…
• One segment is contiguous both virtually and physically
• Segmentation uses logical addresses

Segmentation View
[Figure: a logical address space containing P1 code, P1 data, P2 code, P2 data, a shared print function, and OS code, OS data and OS stack segments]

Difference from Dynamic Partitioning
• Segmentation is similar to dynamic partitioning
  – Both suffer from external fragmentation
• Difference:
  – With segmentation, a program may occupy more than one partition, and these partitions need not be contiguous
  – Eliminates internal fragmentation

Logical vs. Physical Address
• Logical address
  – Generated by the CPU, and used by the CPU as a reference to access a physical memory location
  – A logical address is called a virtual address on a system with virtual memory
• Logical Address Space
  – All logical addresses generated from a program’s perspective
• Physical address
  – The address sent to the RAM (or ROM, or I/O) for a read or write operation

Address Binding
• Mapping from one address space to another:
• Compile time: If the memory location of a process is known at compile time, absolute addresses are generated; must recompile if the starting location changes (e.g., DOS programs)
• Load time: The compiler generates relocatable code, and the loader translates it to absolute addresses, typically by adding a base address
• Execution time: Binding is delayed until execution time, if a process can be moved during its execution from one memory segment to another (e.g., dynamic libraries)

Address Binding on Linux
• Commonly referred to as symbol (or pointer) relocation
  – At compile time (external functions and variables are copied into a target application by a compiler or linker)
  – At load time (when the dynamic linker resolves symbols in a shared library)
  – At execution time (when the running program resolves symbols manually, e.g., using dlopen)

Logical vs. Physical Address
• Logical and physical addresses are the same in compile-time address-binding schemes
• Logical (virtual) and physical addresses differ in execution-time address-binding schemes
  – The mapping from logical addresses to physical addresses is done by a hardware device called the memory management unit (MMU)
[Figure: user process → virtual address → translator (MMU) → physical address → physical memory]
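Putting the base-and-limit check together with the MMU’s role: on every access the hardware compares the logical address against the limit and, if the check passes, adds the base. A minimal sketch, with all names invented for illustration (a real MMU does this in hardware, not in C):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Per-process relocation state loaded into the MMU on a context
 * switch: base = where the process starts in physical memory,
 * limit = size of its logical address space. */
struct mmu { uint32_t base, limit; };

/* Translate a logical (virtual) address the way a base-and-limit
 * MMU would: check the limit first, then relocate by the base. */
uint32_t translate(const struct mmu *m, uint32_t vaddr) {
    if (vaddr >= m->limit) {                        /* out of range:   */
        fprintf(stderr, "fault: %#x\n", vaddr);     /* hardware would  */
        exit(1);                                    /* trap to the OS  */
    }
    return m->base + vaddr;                         /* relocation */
}

int main(void) {
    struct mmu m = { .base = 0x40000, .limit = 0x10000 };
    printf("%#x -> %#x\n", 0x1234u, translate(&m, 0x1234)); /* 0x41234 */
    translate(&m, 0x20000);            /* beyond the limit: faults */
    return 0;
}
```

Under execution-time binding, moving the process just means updating the base register; the program’s logical addresses never change.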
Why Virtual Memory?
• Basic idea: allow the OS to allocate more memory than really exists
• Program uses virtual addresses
  – Addresses local to the process
  – Limited by the number of bits in an address (32/64)
  – 32 bits: 4G
• Virtual memory >> physical memory
[Figure: a virtual address space from 0 to 2^N, holding the text, heap and stack segments plus auxiliary regions]

Motivations for Virtual Memory
1. Use physical DRAM as the cache for disk
2. Simplify memory management
  – Multiple processes resident in main memory
    • Each with its own address space
  – Only “active” code and data is actually in memory
3. Provide protection
  – One process can’t interfere with another except via IPC
    • They operate in different address spaces
  – A user process cannot access privileged information
    • Different sections of the address space have different permissions

Virtual Memory for Multiprogramming
• Virtual memory (VM) is helpful in multiprogramming
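To make the protection motivation concrete, here is an illustrative sketch (not material from the slides): a toy per-process translation table in which the same virtual address resolves to different physical frames in different processes, so one process cannot even name another’s memory.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                 /* toy 4 KB pages */
#define NPAGES     4                  /* toy 4-entry table */

/* Each process has its own table: virtual page number -> physical
 * frame number. (A real table is far larger and is managed by the
 * OS in memory; this only illustrates isolation.) */
struct process { uint32_t frame[NPAGES]; };

uint32_t translate(const struct process *p, uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_SHIFT;           /* page number */
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    return (p->frame[vpn] << PAGE_SHIFT) | offset;   /* frame + offset */
}

int main(void) {
    struct process a = { .frame = {7, 3, 9, 1} };
    struct process b = { .frame = {2, 8, 5, 6} };
    /* The same virtual address lands in different physical frames: */
    printf("A: %#x -> %#x\n", 0x1234u, translate(&a, 0x1234)); /* 0x3234 */
    printf("B: %#x -> %#x\n", 0x1234u, translate(&b, 0x1234)); /* 0x8234 */
    return 0;
}
```

Because a process can only issue virtual addresses, and its own table decides what those addresses mean, process A has no way to form an address into B’s frames. This is the protection point above, and it previews the paging system discussed next in the course.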