Virtual Memory

Recommended publications
Unit 2 Virtual Memory
UNIT 2 VIRTUAL MEMORY

Structure
2.0 Introduction
2.1 Objectives
2.2 Virtual Memory
    2.2.1 Principles of Operation
    2.2.2 Virtual Memory Management
    2.2.3 Protection and Sharing
2.3 Demand Paging
2.4 Page Replacement Policies
    2.4.1 First In First Out (FIFO)
    2.4.2 Second Chance (SC)
    2.4.3 Least Recently Used (LRU)
    2.4.4 Optimal Algorithm (OPT)
    2.4.5 Least Frequently Used (LFU)
2.5 Thrashing
    2.5.1 Working-Set Model
    2.5.2 Page-Fault Rate
2.6 Demand Segmentation
2.7 Combined Systems
    2.7.1 Segmented Paging
    2.7.2 Paged Segmentation
2.8 Summary
2.9 Solutions/Answers
2.10 Further Readings

2.0 INTRODUCTION

In the earlier unit we studied memory management, covering topics such as overlays, contiguous memory allocation, static and dynamic partitioned memory allocation, and paging and segmentation techniques. In this unit we study an important aspect of memory management known as virtual memory.

Storage allocation has always been an important consideration in computer programming because of the high cost of main memory and the relative abundance and lower cost of secondary storage. Program code and data required for execution of a process must reside in main memory, but main memory may not be large enough to accommodate the needs of an entire process. Early computer programmers therefore divided programs into sections that were transferred into main memory for the period of processing time.
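The page replacement policies listed in Section 2.4 can be compared by simulating them on a page reference string. The sketch below (function names are illustrative, not from the unit) counts page faults for FIFO and LRU over the same reference string:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under the FIFO replacement policy."""
    resident = []              # pages in memory, oldest first
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.pop(0)            # evict the oldest page
            resident.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under the LRU replacement policy."""
    resident = OrderedDict()   # insertion order tracks recency
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)     # page was just used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))
print(lru_faults(refs, 3))
```

On this reference string with three frames, LRU incurs one fault fewer than FIFO, illustrating why recency information usually helps.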
Memory Management
Memory Management

"Multitasking without memory management is like having a party in a closet."
– Charles Petzold, Programming Windows 3.1

"Programs expand to fill the memory that holds them."

Why memory management?
• Process isolation
• Automatic allocation and management
• Support for modular programming
• Protection and access control
• Long term storage

• Preparing a program for execution: development of programs
  – Source program
  – Compilation/assembly to get object program
  – Linking (linkage editors) to get relocatable load module
  – Loading to get load module
• Memory
  – Large array of words (or bytes)
  – Unique address of each word
  – CPU fetches from and stores into memory addresses
• Instruction execution cycle
  – Fetch an instruction (opcode) from memory
  – Decode instruction
  – Fetch operands from memory, if needed
  – Execute instruction
  – Store results into memory, if necessary
• Memory unit sees only the addresses, and not how they are generated (instruction counter, indexing, direct)
• Address binding
  – Binding: mapping from one address space to another
  – Program must be loaded into memory before execution
  – Loading of processes may result in relocation of addresses
    ∗ Link external references to entry points as needed
  – User process may reside in any part of the memory
  – Symbolic addresses in source programs (like i)
  – Compiler binds symbolic addresses to relocatable addresses, assumed to start at location zero
  – Linkage editor or loader binds relocatable addresses to absolute addresses
  – Types
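The instruction execution cycle above can be sketched as a toy interpreter over a word-addressed memory holding both instructions and data. The two-field instruction format and opcode names here are invented for illustration:

```python
# Toy word-addressed machine: the CPU repeats fetch -> decode ->
# (fetch operands) -> execute -> (store results) until it halts.
# Each instruction word is a (opcode, operand_address) pair.

def run(memory, pc=0):
    acc = 0                           # accumulator register
    while True:
        opcode, addr = memory[pc]     # fetch and decode
        pc += 1
        if opcode == "LOAD":
            acc = memory[addr]        # fetch operand from memory
        elif opcode == "ADD":
            acc += memory[addr]
        elif opcode == "STORE":
            memory[addr] = acc        # store result into memory
        elif opcode == "HALT":
            return memory

# Instructions at addresses 0-3, data at addresses 100-102:
mem = {0: ("LOAD", 100), 1: ("ADD", 101), 2: ("STORE", 102),
       3: ("HALT", 0), 100: 2, 101: 3, 102: 0}
run(mem)
print(mem[102])
```

Note that, as the bullet above says, the memory itself sees only a stream of addresses; it cannot tell whether a reference came from the instruction counter or from an operand fetch.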
Memory Management 1
Memory Management

"Multitasking without memory management is like having a party in a closet."
– Charles Petzold, Programming Windows 3.1

"Programs expand to fill the memory that holds them."

Why memory management?
• Process isolation
• Automatic allocation and management
• Support for modular programming
• Protection and access control
• Long term storage

Memory management requirements
• Process address space
  – Process runs in its private address space
  – In user mode, process refers to private stack, data, and code areas
  – In kernel mode, process refers to kernel data and code areas and uses a different private stack
  – Processes may need to access address space that is shared among processes
    ∗ In some cases, this is achieved by explicit requests (shared memory)
    ∗ In other cases, it may be done automatically by the kernel to reduce memory usage (editors)
• Relocation
  – Available main memory is shared by a number of processes
  – Programmer may not know of the other programs resident in memory while his code is executing
  – Processes are swapped in/out to maximize CPU utilization
  – A process may not get swapped back into the same memory location; need to relocate the process to a different area of memory
  – All memory references need to be resolved to correct addresses
• Protection
  – Processes need to be protected against unwanted interference by other processes
  – The requirement for relocation makes protection harder to satisfy because the location of a process in memory is unpredictable
  – Impossible to check the absolute addresses
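One classic way to meet the relocation and protection requirements together is a base/limit register pair: every logical address is checked against the limit, then offset by the base. A minimal sketch, with illustrative names:

```python
# Base/limit translation sketch: the process sees addresses 0..limit-1;
# the hardware adds the base (set at load time) and traps references
# that fall outside the process's private address space.

class MemoryFault(Exception):
    pass

def translate(logical_addr, base, limit):
    """Map a logical address to a physical one, enforcing protection."""
    if not 0 <= logical_addr < limit:
        # reference outside the process's address space: trap to the OS
        raise MemoryFault(f"address {logical_addr} out of range")
    return base + logical_addr

# A process loaded at physical address 30000 with a 12000-byte image:
print(translate(500, base=30000, limit=12000))
```

Because only the base register changes when the process is swapped back in at a different location, relocation needs no rewriting of the program's addresses.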
Chapter 8: Memory Management
Chapter 8: Memory Management
■ Background
■ Swapping
■ Contiguous Allocation
■ Paging
■ Segmentation
■ Segmentation with Paging
Operating System Concepts, Silberschatz, Galvin and Gagne ©2009

Memory Management
■ Examine basic (not virtual) memory management
  ● Swapping
  ● Contiguous allocation
  ● Paging
  ● Segmentation
■ Prepare for study of virtual memory
■ Discuss what CAN be done without virtual memory

Background
■ A program must be brought into memory and placed within a process for it to be run.
  ● Main memory and registers are the only storage the CPU can access directly
  ● Registers can be accessed in one clock cycle or less
  ● Accessing main memory may take several cycles
  ● Cache(s) sit between main memory and CPU registers
■ Main memory is of a finite size
  ● In a multiprogramming environment, processes must SHARE space in main memory
■ Input queue (job queue) – collection of processes on the disk that are waiting to be brought into memory to run the program
■ User programs go through several steps before being run

Where Is a Program Loaded?
■ The user (or system) usually does not know where the data that makes up a program will eventually reside in memory.
  ● Assume it logically resides contiguously, starting at address 0 (although some code/situations require absolute addressing): a program whose logical memory spans addresses 0 to L-1 occupies addresses n to n + L - 1 in the system's physical memory.

Binding of Instructions and Data to Memory
Address binding (mapping from logical to physical addresses) of instructions and data to memory addresses can happen at three different stages.
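The contiguous allocation topic above can be illustrated with a minimal first-fit allocator over a list of free holes; the (start, size) hole representation is an assumption made for this sketch:

```python
# First-fit contiguous allocation: scan the free list in address order
# and carve the request out of the first hole that is big enough.
# Holes are (start_address, size) pairs.

def first_fit(holes, request):
    """Return (start_address, updated_holes), or (None, holes) if no fit."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            new_holes = list(holes)
            if size == request:
                del new_holes[i]          # hole consumed exactly
            else:
                # shrink the hole: allocation takes its low end
                new_holes[i] = (start + request, size - request)
            return start, new_holes
    return None, holes                    # external fragmentation may bite

holes = [(100, 50), (300, 200), (600, 120)]
addr, holes = first_fit(holes, 120)
print(addr, holes)
```

The 50-byte hole at 100 is skipped and the request is satisfied from the hole at 300, leaving a smaller hole behind; the leftover fragments are exactly the external fragmentation problem that paging (next topic) avoids.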
Topic 8: Memory Management
Topic 8: Memory Management
Tongping Liu, University of Massachusetts Amherst

Objectives
• To provide various ways of organizing memory hardware
• To discuss various memory management techniques, including partitions, swapping and paging
• To provide a detailed description of paging and virtual memory techniques

Outline
• Background
• Segmentation
• Swapping
• Paging System

Memory Hierarchy
• The CPU (processor/ALU) can directly access main memory and registers only, so programs and data must be brought from disk into memory
• Memory access is a bottleneck
  – Cache sits between memory and registers
• Memory hierarchy
  – Cache: small, fast, expensive; SRAM
  – Main memory: medium-speed, not that expensive; DRAM
  – Disk: many gigabytes, slow, cheap, non-volatile storage

Main Memory
• The ideal main memory is
  – Very large
  – Very fast
  – Non-volatile (doesn't go away when power is turned off)
• The real main memory is
  – Not very large
  – Not very fast
  – Affordable (cost) ⇒ pick any two…
• Memory management: make the real world look as much like the ideal world as possible

Background about Memory
• Memory is an array of words containing program instructions and data
• How do we execute a program? Fetch an instruction → decode → may fetch operands → execute → may store results
• Memory hardware sees a stream of ADDRESSES
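Paging, listed in the outline above, translates addresses by splitting each logical address into a page number and an offset. A minimal sketch with a hypothetical page table (the table contents are made up for illustration):

```python
# Paging translation sketch: the high-order bits of a logical address
# select a page-table entry; the low-order bits are kept as the offset
# within the frame that the entry names.

PAGE_SIZE = 4096        # 4 KiB pages, i.e. 12 offset bits

def translate(logical_addr, page_table):
    page = logical_addr // PAGE_SIZE     # page number (high bits)
    offset = logical_addr % PAGE_SIZE    # offset within page (low bits)
    frame = page_table[page]             # page table maps page -> frame
    return frame * PAGE_SIZE + offset

# Page 0 lives in frame 5, page 1 in frame 2:
page_table = {0: 5, 1: 2}
print(translate(4100, page_table))       # address 4100 = page 1, offset 4
```

Because any page can live in any frame, the allocator never has to find a contiguous region, which is the key contrast with the partitioning schemes above.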
Design and Implementation of an Overlay File System for Cloud-Assisted Mobile Apps
IEEE TRANSACTIONS ON CLOUD COMPUTING

Design and Implementation of an Overlay File System for Cloud-Assisted Mobile Apps
Nafize R. Paiker, Jianchen Shan, Cristian Borcea, Narain Gehani, Reza Curtmola, Xiaoning Ding

Abstract—With cloud assistance, mobile apps can offload their resource-demanding computation tasks to the cloud. This leads to a scenario where computation tasks in the same program run concurrently on both the mobile device and the cloud. An important challenge is to ensure that the tasks are able to access and share the files on both the mobile device and the cloud in a manner that is efficient, consistent, and transparent to locations. Existing distributed file systems and network file systems do not satisfy these requirements, and current systems for offloading tasks either do not support file access for offloaded tasks or do not offload tasks with file access. The paper addresses this issue by designing and implementing an application-level file system called Overlay File System (OFS). To improve efficiency, OFS maintains and buffers local copies of data sets on both the cloud and the mobile device. OFS ensures consistency and guarantees that all reads get the latest data. It combines write-invalidate and write-update policies to effectively reduce the network traffic incurred by invalidating/updating stale data copies and to reduce the execution delay when the latest data cannot be accessed locally. To guarantee location transparency, OFS creates a unified view of the data that is location independent and accessible as local storage. We overcome the challenges that the special features of mobile systems pose for an application-level file system, such as the lack of root privilege and the loss of state when an application is killed due to resource shortage, and we implement an easy-to-deploy prototype of OFS.
Libraries of the Future
LIBRARIES OF THE FUTURE
J. C. R. Licklider

THE M.I.T. PRESS
Massachusetts Institute of Technology
Cambridge, Massachusetts

COPYRIGHT © 1965 BY THE MASSACHUSETTS INSTITUTE OF TECHNOLOGY
ALL RIGHTS RESERVED
LIBRARY OF CONGRESS CATALOG CARD NUMBER: 65-13831
MANUFACTURED IN THE UNITED STATES OF AMERICA

Foreword

This report of research on concepts and problems of "Libraries of the Future" records the result of a two-year inquiry into the applicability of some of the newer techniques for handling information to what goes at present by the name of library work — i.e., the operations connected with assembling information in recorded form and of organizing and making it available for use.

Mankind has been complaining about the quantity of reading matter and the scarcity of time for reading it at least since the days of Leviticus, and in our own day these complaints have become increasingly numerous and shrill. But as Vannevar Bush pointed out in the article that may be said to have opened the current campaign on the "information problem,"

The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present-day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.*

It has for some time been increasingly apparent that research libraries are becoming choked from the proliferation of publication, and that the resulting problems are not of a kind that respond to merely more of the same — ever and ever larger bookstacks and ever and ever more complicated catalogues.
Present Information on Library Automation Projects at Various
DOCUMENT RESUME

ED 031 281    LI 001 647
By: Veaner, Allen B., Ed.; Fasana, Paul J., Ed.
Stanford Conference on Collaborative Library Systems Development. Proceedings of a Conference Held at Stanford University Libraries, October 4-5, 1968.
Stanford Univ., Calif. Libraries.
Pub Date: 69
Grant: OEC-1-7-077145-4428
Note: 234p.
Available from: Office of the Financial Manager, Stanford University Libraries, Stanford, California 94305 ($7.00)
EDRS Price: MF-$1.00 HC-$11.80
Descriptors: *Automation, *Cooperative Programs, *Information Systems, *Library Networks, *University Libraries
Identifiers: CLSD, *Collaborative Library Systems Development

The conference was convened (1) to disseminate information on the development of Stanford's library automation project, (2) to disseminate information on the several and joint library automation activities of Chicago, Columbia, and Stanford, and (3) to promote heated discussion and active exchange of ideas and problems between librarians, university administrators, computer center managers, systems analysts, computer scientists, and information scientists. These conference papers present information on library automation projects at various universities and specialized information about institutions involved in bibliographic data processing activities. Topics specifically include: the Collaborative Library Systems Development (CLSD), the National Libraries Automation Task Force, the Biomedical Communications Network at the National Library of Medicine, the Book Processing System at the University of Chicago, the application of hardware and software in libraries, data link network and display terminals at Stanford, and other automation projects at Chicago, Columbia, and Stanford. (Author/RM)
Unit 1 Memory Management
UNIT 1 MEMORY MANAGEMENT

Structure
1.0 Introduction
1.1 Objectives
1.2 Overlays and Swapping
1.3 Logical and Physical Address Space
1.4 Single Process Monitor
1.5 Contiguous Allocation Methods
    1.5.1 Single Partition System
    1.5.2 Multiple Partition System
1.6 Paging
    1.6.1 Principles of Operation
    1.6.2 Page Allocation
    1.6.3 Hardware Support for Paging
    1.6.4 Protection and Sharing
1.7 Segmentation
    1.7.1 Principles of Operation
    1.7.2 Address Translation
    1.7.3 Protection and Sharing
1.8 Summary
1.9 Solutions/Answers
1.10 Further Readings

1.0 INTRODUCTION

In Block 1 we studied introductory concepts of the OS, process management and deadlocks. In this unit, we will go through another important function of the operating system: memory management.

Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each location with its own address. Interaction is achieved through a sequence of reads/writes to specific memory addresses. The CPU fetches the program from the hard disk and stores it in memory. If a program is to be executed, it must be mapped to absolute addresses and loaded into memory. In a multiprogramming environment, in order to improve both CPU utilisation and the speed of the computer's response, several processes must be kept in memory. There are many different algorithms, depending on the particular situation, for managing the memory.
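The address translation step of segmentation (Section 1.7.2) can be sketched with a segment table whose entries hold a base and a limit for each segment; the table contents and names below are made up for illustration:

```python
# Segmentation sketch: a logical address is a (segment, offset) pair.
# Each segment-table entry gives the segment's base physical address
# and its limit, which doubles as the protection check.

class SegmentationFault(Exception):
    pass

# segment number -> (base, limit)
segment_table = {
    0: (1400, 1000),   # code
    1: (6300, 400),    # data
    2: (4300, 400),    # stack
}

def translate(segment, offset, table):
    base, limit = table[segment]
    if not 0 <= offset < limit:
        # offset beyond the segment's end: trap instead of translating
        raise SegmentationFault(f"offset {offset} exceeds limit {limit}")
    return base + offset

print(translate(1, 53, segment_table))   # physical address 6353
```

Unlike a page, a segment is a variable-sized, programmer-visible unit (code, data, stack), which is why the limit check belongs in the translation itself.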
Memory Management (Chapter 4, Tanenbaum)
Memory Management (Chapter 4, Tanenbaum)
Copyright © 1996–2002 Eskicioglu and Marsland (and Prentice-Hall and Paul Lu)

Introduction

The CPU fetches instructions and data of a program from memory; therefore, both the program and its data must reside in the main (RAM and ROM) memory. Modern multiprogramming systems are capable of storing more than one program, together with the data they access, in the main memory. A fundamental task of the memory management component of an operating system is to ensure safe execution of programs by providing:
– Sharing of memory
– Memory protection

Issues in sharing memory
• Transparency: several processes may co-exist, unaware of each other, in the main memory and run regardless of the number and location of processes.
• Safety (or protection): processes must not corrupt each other (nor the OS!)
• Efficiency: CPU utilization must be preserved and memory must be fairly allocated. Want low overheads for memory management.
• Relocation: ability of a program to run in different memory locations.

Storage allocation

Information stored in main memory can be classified in a variety of ways:
• Program (code) and data (variables, constants)
• Read-only (code, constants) and read-write (variables)
• Address (e.g., pointers) or data (other variables); binding (when memory is allocated for the object): static or dynamic

The compiler, linker, loader and run-time libraries all cooperate to manage this information.
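Relocation, the last issue in the list above, can be illustrated by a loader that adds the actual load address to every word the linker flagged as an address. The module format here is invented purely for this sketch:

```python
# Relocation sketch: a toy "relocatable load module" whose addresses
# are assumed to start at 0. At load time, every word flagged as an
# address is offset by the real load address, so the same module can
# run at any memory location.

def relocate(module, load_base):
    """module: list of (value, is_address) words produced by the linker."""
    return [value + load_base if is_address else value
            for value, is_address in module]

# Word 0 is a pointer to word 2; words 1 and 2 are plain data.
module = [(2, True), (99, False), (0, False)]
print(relocate(module, 5000))   # the pointer becomes 5002
```

Hardware base registers achieve the same effect at run time without rewriting the module, which is what makes swapping a process back in at a different location practical.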