Topic 8: Memory Management
Tongping Liu, University of Massachusetts Amherst

Objectives
• To describe various ways of organizing memory hardware
• To discuss various memory management techniques, including partitions, swapping and paging
• To provide a detailed description of paging and virtual memory techniques

Outline
• Background
• Segmentation
• Swapping
• Paging System

Memory Hierarchy
• The CPU can directly access only main memory and registers, so programs and data must be brought from disk into memory
• Memory access is a bottleneck
– A cache sits between memory and registers
• Memory hierarchy
– Cache: small, fast, expensive; SRAM
– Main memory: medium speed, not that expensive; DRAM
– Disk: many gigabytes, slow, cheap, non-volatile storage
(Figure: CPU (processor/ALU), internal memory, and I/O devices such as disks.)

Main Memory
• The ideal main memory is
– Very large
– Very fast
– Non-volatile (doesn’t go away when power is turned off)
• The real main memory is
– Not very large
– Not very fast
– Affordable (cost) ⇒ pick any two…
• Memory management: make the real world look as much like the ideal world as possible

Background about Memory
• Memory is an array of words containing program instructions and data
• How do we execute a program? Fetch an instruction → decode → may fetch operands → execute → may store results
• Memory hardware sees a stream of ADDRESSES

Simple Memory Management
• Management for a single process
– Memory can be divided simply
(Figure: three simple layouts from address 0 to 0xFFFF: the OS in RAM below the user program; the OS in ROM above the user program; device drivers in ROM above the user program with the OS in RAM at the bottom.)
• Memory protection may not be an issue (only one program)
• Flexibility may still be useful (allow OS changes, etc.)

Real Memory Management
• Multiple processes need the CPU
– More processes will increase CPU utilization
– At 20% I/O wait, 3–4 processes fully utilize the CPU
– At 80% I/O wait, even 10 processes aren’t enough
• More processes => memory management
(Figure: CPU utilization versus degree of multiprogramming, with curves for 20%, 50% and 80% I/O wait.)

Purpose of Memory Management
• Keeping multiple processes in memory is essential to improve CPU utilization
• Manage and protect main memory while sharing it among multiple processes, with these requirements:
– Relocation
– Protection
– Sharing

Memory Management
• Relocation:
– Assign load addresses to the position-dependent code and data of a program
• Protection:
– A program cannot access other programs’ memory; it may access only locations that have been allocated to it
• Sharing:
– Allow several processes to access the same memory
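To make the relocation requirement above concrete, here is a minimal sketch, entirely invented for illustration (not code from the course): a loader picks a base (load) address and patches every position-dependent reference in a program image by adding that base, so the same image can run wherever it is placed.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical "program image": an array of 32-bit words. Word 1 holds a
 * position-dependent reference (an address inside the image itself). */
static const int relocs[] = { 1 };   /* indices of words that must be patched */

static void relocate(uint32_t *image, const int *table, int n, uint32_t base)
{
    /* Load-time binding: turn each relocatable address into an absolute
     * address by adding the base (load) address chosen by the loader. */
    for (int i = 0; i < n; i++)
        image[table[i]] += base;
}

int main(void)
{
    uint32_t image[4] = { 0, 12, 0, 0 };   /* word 1 refers to offset 12 */
    uint32_t base = 0x500000;              /* load address picked by the loader */

    relocate(image, relocs, 1, base);
    printf("patched reference: 0x%x\n", (unsigned)image[1]);  /* prints 0x50000c */
    return 0;
}
```

Execution-time binding, discussed later in this topic, avoids such patching by translating addresses in hardware on every access.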
Partitioning
• Partition: divide memory into partitions, and dynamically assign them to different processes
– Fixed Partitioning
– Dynamic Partitioning
– Segmentation
– Paging

Fixed Partitioning
• Equal-size partitions
– Any process whose size is less than or equal to the partition size can be loaded into an available partition
• The operating system can swap a process out of a partition

Fixed Partitioning Problems
• A program may not fit in a partition
– The programmer must design the program with overlays
• Main memory use is inefficient
– Any program, no matter how small, occupies an entire partition
– This results in internal fragmentation

Dynamic Partitioning
• Unequal-size partitioning
• Reduces both problems, but does not solve them completely
• With unequal partitions (the largest being 16M in the figure),
– Smaller programs can be placed in smaller partitions, reducing internal fragmentation
– Programs larger than 16M cannot be accommodated without an overlay

Dynamic Partitioning
• Assign each process to the smallest partition it fits in
– Minimizes internal fragmentation
• Separate input queues for each partition
• Single input queue
(Figure: two partition layouts from 0 to 900K, with the OS at the bottom and partitions 1–4 above, one using a separate input queue per partition and one using a single input queue.)

Memory Management Policies
• Best-fit algorithm
– Chooses the block that is closest in size to the request
– Worst performer overall
– Since the smallest adequate block is found for a process, the smallest amount of fragmentation is left over
– Memory compaction must be done more often

Memory Management Policies
• First-fit algorithm
– Scans memory from the beginning and chooses the first available block that is large enough
– Fastest
– May leave many processes loaded at the front end of memory that must be searched over when trying to find a free block

Memory Management Policies
• Next-fit
– Scans memory from the location of the last placement
– More often allocates a block at the end of memory, where the largest block is found
– The largest block of memory is broken up into smaller blocks
– Compaction is required to obtain a large block at the end of memory
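As a concrete companion to the placement policies above, here is a minimal sketch in C (my own illustration, not code from the course) of first-fit and best-fit searches over a hypothetical free list; the struct hole type and the sizes and addresses are invented for the example.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical descriptor for one hole (free block) in memory. */
struct hole {
    size_t start;        /* start address of the hole */
    size_t size;         /* size of the hole in bytes */
    struct hole *next;   /* next hole, kept in address order */
};

/* First-fit: scan from the beginning and take the first hole that is
 * large enough; the remainder of the hole stays available. */
static struct hole *first_fit(struct hole *head, size_t request)
{
    for (struct hole *h = head; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;         /* no hole is large enough */
}

/* Best-fit: scan the whole list and take the smallest hole that still
 * fits, so the leftover fragment is as small as possible. */
static struct hole *best_fit(struct hole *head, size_t request)
{
    struct hole *best = NULL;
    for (struct hole *h = head; h != NULL; h = h->next)
        if (h->size >= request && (best == NULL || h->size < best->size))
            best = h;
    return best;
}

int main(void)
{
    /* Three holes of 100K, 500K and 200K (illustrative values only). */
    struct hole c = { 800 * 1024, 200 * 1024, NULL };
    struct hole b = { 200 * 1024, 500 * 1024, &c };
    struct hole a = {  50 * 1024, 100 * 1024, &b };

    size_t request = 150 * 1024;
    printf("first-fit picks the hole at %zu\n", first_fit(&a, request)->start);
    printf("best-fit  picks the hole at %zu\n", best_fit(&a, request)->start);
    return 0;
}
```

Next-fit would use the same loop as first_fit but resume from wherever the previous search stopped instead of from the head of the list.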
Protection via Base and Limit Registers
• A pair of base and limit registers defines the logical address range of a process
• Every memory access is checked by hardware to ensure it falls within that range

Dynamic Partitioning
• Typically, a whole process is placed into a single partition
– which is not very convenient
• Segmentation aims to solve this issue

Outline
• Background
• Segmentation
• Swapping
• Paging System

Segmentation
• Segmentation is a technique for breaking memory up into logical pieces
• Each “piece” is a group of related information
– data segments for each process
– code segments for each process
– data segments for the OS, …
• One segment is contiguous both virtually and physically
• Segmentation uses logical addresses

Segmentation View
(Figure: a logical address space containing segments such as P1 code, P1 data, P2 code, P2 data, a shared print function, OS code, OS data and OS stack.)

Difference from Dynamic Partitioning
• Segmentation is similar to dynamic partitioning
– Both suffer from external fragmentation
• Differences:
– With segmentation, a program may occupy more than one partition, and these partitions need not be contiguous
– It eliminates internal fragmentation

Logical vs. Physical Address
• Logical address
– Generated by the CPU and used by the CPU as a reference to access a physical memory location
– A logical address is called a virtual address on a system with virtual memory
• Logical address space
– All logical addresses generated from a program’s perspective
• Physical address
– The address sent to the RAM (or ROM, or I/O) for a read or write operation

Address Binding
Mapping from one address space to another
• Compile time: if the memory location of a process is known at compile time, absolute addresses are generated; must recompile if the starting location changes (e.g., DOS programs)
• Load time: the compiler generates relocatable code, and the loader translates it to absolute addresses, typically by adding a base address
• Execution time: binding is delayed until execution time, if a process can be moved during its execution from one memory segment to another (e.g., dynamic library)

Address Binding of Linux
• Commonly referred to as symbol (or pointer) relocation
– At compile time (external functions and variables are copied into a target application by a compiler or linker)
– At load time (when the dynamic linker resolves symbols in a shared library)
– At execution time (when the running program resolves symbols manually, e.g., using dlopen)

Logical vs. Physical Address
• Logical and physical addresses are the same in compile-time address-binding schemes
• Logical (virtual) and physical addresses differ in execution-time address-binding schemes
– The mapping from logical addresses to physical addresses is done by a hardware device called the memory management unit (MMU)
(Figure: the user process issues a virtual address, the MMU translates it, and the resulting physical address goes to physical memory.)

Why Virtual Memory?
• Basic idea: allow the OS to allocate more memory than really exists
• A program uses virtual addresses
– Addresses local to the process
– Limited by the number of bits in an address (32/64)
– 32 bits: 4G
• Virtual memory >> physical memory
(Figure: a virtual address space from 0 to 2^N, containing text, heap, stack and auxiliary regions.)
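The “virtual memory >> physical memory” idea can be seen directly on a Linux system. The sketch below is my own illustration, not course code: it maps 8 GiB of anonymous virtual memory (assuming a 64-bit machine), but physical frames are only assigned to the few pages that are actually touched; whether such a large request succeeds also depends on the kernel’s overcommit policy.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)8 << 30;   /* 8 GiB of virtual address space */

    /* Anonymous private mapping: no backing file, zero-filled on first touch. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch only four pages: only these receive physical frames. */
    for (size_t off = 0; off < len; off += len / 4)
        p[off] = 1;

    printf("mapped %zu bytes at virtual address %p\n", len, (void *)p);
    munmap(p, len);
    return 0;
}
```

This lazy assignment of physical frames is essentially the “use DRAM as a cache for the disk” motivation discussed next.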
Motivations for Virtual Memory
1. Use physical DRAM as a cache for the disk
2. Simplify memory management
– Multiple processes resident in main memory
• Each with its own address space
– Only “active” code and data is actually in memory
3. Provide protection
– One process can’t interfere with another except via IPC
• They operate in different address spaces
– A user process cannot access privileged information
• Different sections of an address space have different permissions

Virtual Memory for Multiprogramming
• Virtual memory (VM) is helpful in multiprogramming
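As a small illustration of processes operating in different address spaces (again my own sketch, not from the slides): after fork() the parent and child print the same virtual address for a global variable, yet the child’s write is invisible to the parent, because that virtual address is backed by different physical frames in the two processes.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;   /* lives in the data segment of each process's address space */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                        /* child */
        value = 99;                        /* write triggers copy-on-write */
        printf("child : &value=%p value=%d\n", (void *)&value, value);
        return 0;
    }

    wait(NULL);                            /* parent: wait for the child */
    printf("parent: &value=%p value=%d\n", (void *)&value, value);
    /* Same virtual address in both processes, but the parent still sees 42:
     * the two virtual addresses refer to different physical frames. */
    return 0;
}
```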