Virtual Memory and Address Translation


Review
- Program addresses are virtual addresses.
  - The relative offsets of program regions cannot change during execution; e.g., the heap cannot move further from the code.
  - Making virtual addresses equal to physical addresses is inconvenient: the program's location is compiled into the program.
- A single offset register allows the OS to place a process's virtual address space anywhere in physical memory.
  - The virtual address space must be smaller than physical memory.
  - The program is swapped out of its old location and swapped into the new one.
- Segmentation creates external fragmentation and requires large regions of contiguous physical memory.
  - We look to fixed-size units, memory pages, to solve the problem.

Virtual Memory Concept
- Key problem: how can one support programs that require more memory than is physically available?
  - How can we support programs that do not use all of their memory at once?
- Hide the physical size of memory from users.
  - Memory is a "large" virtual address space of 2^n bytes.
  - Only portions of the VAS are in physical memory at any one time (increasing memory utilization).
- Issues:
  - Placement strategies — where to place programs in physical memory.
  - Replacement strategies — what to do when more processes exist than can fit in memory.
  - Load control strategies — determining how many processes can be in memory at one time.

Realizing Virtual Memory: Paging
- Physical memory is partitioned into equal-sized page frames.
  - Page frames avoid external fragmentation.
- A memory address is a pair (f, o):
  - f — frame number (fmax frames)
  - o — frame offset (omax bytes/frame)
  - Physical address = omax × f + o
- A physical address is log2(fmax × omax) bits wide; the low log2(omax) bits hold the offset o and the remaining high bits hold the frame number f.

Physical Address Specifications: Frame/Offset Pair vs. an Absolute Index
- Example: a 16-bit address space with (omax =) 512-byte page frames.
  - Addressing location (3, 6): physical address 3 × 512 + 6 = 1,542, i.e., 0000 0110 0000 0110 in binary.
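The frame/offset arithmetic above can be sketched in a few lines of Python; the function names are illustrative, not from the slides:

```python
def to_physical(f, o, o_max=512):
    """Physical address = o_max * f + o (f shifted past the offset bits)."""
    assert 0 <= o < o_max, "offset must fit within one frame"
    return o_max * f + o

def to_frame_offset(pa, o_max=512):
    """Split an absolute physical address back into its (frame, offset) pair."""
    return divmod(pa, o_max)

# The slides' example: frame 3, offset 6, 512-byte frames.
print(to_physical(3, 6))                  # 1542
print(to_frame_offset(1542))              # (3, 6)
print(format(to_physical(3, 6), "016b"))  # 0000011000000110
```

Because omax is a power of two, the multiply and divmod are just shifts and masks in hardware.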
Questions
- The offset is the same in a virtual address and a physical address.
  - A. True
  - B. False
- If your level-1 data cache is equal to or smaller than 2^(number of page offset bits), then address translation is not necessary for indexing the data cache.
  - A. True
  - B. False

Realizing Virtual Memory: Paging
- A process's virtual address space is partitioned into equal-sized pages.
  - Page size equals page frame size.
- A virtual address is a pair (p, o):
  - p — page number (pmax pages)
  - o — page offset (omax bytes/page)
  - Virtual address = omax × p + o
- Pages are contiguous in a VAS...
  - but pages are arbitrarily located in physical memory, and
  - not all pages are mapped at all times.

Paging: Virtual Address Translation
- A page table maps virtual pages to physical frames: the page number p indexes the page table, the frame number f found there replaces p, and the offset o passes through unchanged.

Questions
- Only mapping virtual pages that are in use does what?
  - A. Increases memory utilization.
  - B. Increases performance for user applications.
  - C. Allows an OS to run more programs concurrently.
  - D. Gives the OS freedom to move virtual pages in the virtual address space.
- Address translation and changing address mappings are:
  - A. Frequent and frequent
  - B. Frequent and infrequent
  - C. Infrequent and frequent
  - D. Infrequent and infrequent
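A minimal sketch of the single-level translation described above, assuming a dict-based page table mapping page numbers to frame numbers; `translate` and the sample mappings are hypothetical:

```python
PAGE_SIZE = 1024     # o_max = 2**10 bytes per page
OFFSET_BITS = 10

def translate(va, page_table):
    """Single-level translation: replace the page number p with the frame
    number f from the page table; the offset o passes through unchanged.
    An unmapped page raises, standing in for a page fault."""
    p, o = va >> OFFSET_BITS, va & (PAGE_SIZE - 1)
    if p not in page_table:
        raise KeyError(f"page fault: page {p} is not mapped")
    return (page_table[p] << OFFSET_BITS) | o

page_table = {0: 7, 1: 2, 4: 5}                 # hypothetical p -> f mappings
pa = translate((4 << OFFSET_BITS) | 13, page_table)
print(hex(pa))                                  # 0x140d: frame 5, offset 13
```

Note that the offset bits are identical in the input and the output, which answers the first question above.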
Virtual Address Translation Details: Page Table Structure
- One page table per process, part of the process's state.
- Contents of each entry:
  - Flags — dirty bit, resident bit, clock/reference bit.
  - Frame number.

Virtual Address Translation Details: Example
- A system with 16-bit addresses:
  - 32 KB of physical memory.
  - 1024-byte pages.
- The page table base register (PTBR) locates the running process's page table; the 6-bit page number indexes it, yielding a 5-bit frame number that is concatenated with the 10-bit offset.

Virtual Address Translation: Performance Issues
- Problem — each VM reference requires two memory references!
  - One access to get the page table entry.
  - One access to get the data.
- The page table can be very large; part of the page table can itself be kept on disk.
  - For a machine with 64-bit addresses and 1024-byte pages, what is the size of a page table?
- What to do? Most computing problems are solved by some form of...
  - caching, or
  - indirection.

Using TLBs to Speed Up Address Translation
- Cache recently accessed page-to-frame translations in a TLB (translation lookaside buffer).
  - On a TLB hit, the physical page number is obtained in one cycle.
  - On a TLB miss, the translation is fetched and the TLB is updated.
  - The TLB has a high hit ratio (why?).

Dealing With Large Page Tables: Multi-Level Paging
- Add additional levels of indirection to the page table by subdividing the page number into k parts (p1, p2, ..., pk).
  - This creates a "tree" of page tables.
  - A TLB is still used, just not shown.
  - The architecture determines the number of levels of page table.
- Example, two-level paging: the virtual address is split into p1, p2, and o; p1 indexes the first-level page table, which locates a second-level page table that p2 indexes to obtain the frame number f.
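The two-level walk and the TLB fast path can be sketched together. The 10/5-bit field split, the dict-of-dicts tables, and all names here are illustrative assumptions, not the slides' notation:

```python
OFFSET_BITS, LEVEL_BITS = 10, 5   # hypothetical split: p1 | p2 | 10-bit offset

def walk(va, first_level):
    """Two-level walk: p1 indexes the first-level table, which points to a
    second-level table indexed by p2; only second-level tables for regions
    actually in use need to exist."""
    o  = va & ((1 << OFFSET_BITS) - 1)
    p2 = (va >> OFFSET_BITS) & ((1 << LEVEL_BITS) - 1)
    p1 = va >> (OFFSET_BITS + LEVEL_BITS)
    f = first_level[p1][p2]           # a KeyError stands in for a page fault
    return (f << OFFSET_BITS) | o

tlb = {}   # caches whole-page-number -> frame translations

def walk_with_tlb(va, first_level):
    """Check the TLB first; only walk the tables on a miss."""
    p = va >> OFFSET_BITS
    if p in tlb:                                  # hit: no memory references
        return (tlb[p] << OFFSET_BITS) | (va & ((1 << OFFSET_BITS) - 1))
    pa = walk(va, first_level)                    # miss: full table walk...
    tlb[p] = pa >> OFFSET_BITS                    # ...then fill the TLB
    return pa

first_level = {3: {1: 9}}            # one mapped page: p1=3, p2=1 -> frame 9
va = (3 << 15) | (1 << 10) | 7
print(hex(walk_with_tlb(va, first_level)))   # 0x2407: frame 9, offset 7
```

The sparse dicts mirror why multi-level tables help: second-level tables for unused regions are simply absent.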
The Problem of Large Address Spaces
- With large (64-bit) address spaces, forward-mapped page tables become cumbersome.
  - E.g., five levels of tables.
- Instead of making tables proportional to the size of the virtual address space, make them proportional to the size of the physical address space.
  - The virtual address space is growing faster than the physical one.
- Use one entry for each physical page, located with a hash table.
  - The translation table occupies a very small fraction of physical memory.
  - The size of the translation table is independent of the VM size.
- A forward page table has one entry per virtual page; a hashed/inverted page table has one entry per physical frame.

Virtual Address Translation Using Page Registers (aka Hashed/Inverted Page Tables)
- Each frame is associated with a register containing:
  - Residence bit: whether or not the frame is occupied.
  - Occupier: page number of the page occupying the frame.
  - Protection bits.
- Page registers, an example:
  - Physical memory size: 16 MB.
  - Page size: 4096 bytes.
  - Number of frames: 4096.
  - Space used for page registers (assuming 8 bytes/register): 32 KB.
  - Percentage overhead introduced by page registers: 0.2%.
  - Size of virtual memory: irrelevant.

Page Registers: How Does a Virtual Address Become a Physical Address?
- The CPU generates virtual addresses; where is the physical page?
  - Hash the virtual address.
  - Must deal with conflicts.
- The TLB caches recent translations, so a page lookup can take several steps:
  - Hash the address.
  - Check the tag of the entry.
  - Possibly rehash or traverse a list of conflicting entries.
- The TLB is limited in size.
  - It is difficult to make a TLB that is both large and accessible in a single cycle.
  - TLBs consume a lot of power (27% of on-chip power for the StrongARM).
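The page-register arithmetic in the example above can be checked directly (variable names are ours):

```python
phys_mem  = 16 * 2**20      # 16 MB of physical memory
page_size = 4096            # bytes per page
reg_size  = 8               # bytes per page register

frames     = phys_mem // page_size   # one register per frame
table_size = frames * reg_size
overhead   = table_size / phys_mem

print(frames)                 # 4096 frames
print(table_size // 1024)     # 32 KB of page registers
print(f"{overhead:.1%}")      # 0.2%, regardless of virtual memory size
```

The last line is the key property: doubling the virtual address space changes nothing here, because the table is sized by physical memory alone.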
Indexing Hashed Page Tables
- Hash page numbers to find the corresponding frame number.
  - The page frame number is not explicitly stored (one frame per entry, so the slot index identifies the frame).
  - Protection, dirty, used, and resident bits are also kept in the entry.
- The lookup hashes the (PID, page number) pair, adds the result to the page table base register (PTBR), and compares the entry's tag against the PID and page number.

Searching Hashed Page Tables
- Page registers are placed in an array.
- Page i is placed in slot f(i), where f is an agreed-upon hash function.
- To look up page i:
  - Compute f(i) and use it as an index into the table of page registers.
  - Extract the corresponding page register.
  - Check whether the register tag contains i; if so, we have a hit.
  - Otherwise, we have a miss.

Searching Hashed Page Tables (Cont'd.)
- Minor complication: since the number of pages is usually larger than the number of slots in a hash table, two or more items may hash to the same location.
- Two different entries that map to the same location are said to collide.
- Many standard techniques exist for dealing with collisions:
  - Use a linked list of the items that hash to a particular table entry.
  - Rehash the index until the key is found or an empty table entry is reached (open hashing).

Virtual Memory (Paging): The Bigger Picture
- A process's VAS is its context.
  - It contains the process's code, data, and stack.

Questions
- Why use hashed/inverted page tables?
  - A.
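A toy hashed/inverted table following the lookup steps above, using the linked-list collision technique; `NUM_SLOTS`, `h`, and the (pid, page, frame) entry layout are all illustrative assumptions:

```python
NUM_SLOTS = 8   # toy size; a real table has on the order of one slot per frame

def h(pid, p):
    """Agreed-upon hash of (process id, page number) to a slot index."""
    return (pid * 31 + p) % NUM_SLOTS

# Each slot holds a chain of (pid, page, frame) entries; colliding entries
# are resolved by walking the chain and checking tags.
table = [[] for _ in range(NUM_SLOTS)]

def insert(pid, p, f):
    table[h(pid, p)].append((pid, p, f))

def lookup(pid, p):
    for entry_pid, entry_p, f in table[h(pid, p)]:
        if (entry_pid, entry_p) == (pid, p):   # tag check
            return f                           # hit
    return None                                # miss: page not resident

insert(1, 42, 3)
insert(1, 50, 5)        # h(1, 42) == h(1, 50): a deliberate collision
print(lookup(1, 42))    # 3
print(lookup(1, 50))    # 5, found by walking the collision chain
print(lookup(2, 99))    # None: hashes to the same slot, but no tag matches
```

The final lookup shows why the tag check matters: landing in an occupied slot is not enough, since unrelated pages can share it.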