CS5460: Operating Systems


Lecture 13: Memory Management (Chapter 8)

Where are we?
– Basic OS structure: HW/SW interface, interrupts, scheduling
– Concurrency
– Memory management
– Storage management
– Other topics

Example Virtual Address Space

[Figure: example 32-bit address space — kernel code, data, and heap segments from 0x00000000 up to 0x80000000; user code (PC), user data, user heap (HP), and user stack (SP) from 0x80000000 up to 0xFFFFFFFF]

A typical address space has 4 parts:
– Code: binary image of the program
– Data: static variables (globals)
– Heap: explicitly allocated data (malloc)
– Stack: implicitly allocated data
The kernel is mapped into all processes.

MMU hardware:
– Remaps virtual addresses to physical addresses
– Supports read-only and supervisor-only mappings
– Detects accesses to unmapped regions

How can we load two processes into memory at the same time?
– Assume each has a similar layout

Mapping Addresses

How can process (virtual) addresses be mapped to physical addresses?
– Compile time: the compiler generates physical addresses directly
  » Advantages: no MMU hardware, no runtime translation overhead
  » Disadvantages: inflexible, hard to multiprogram, inefficient use of DRAM
– Load time: the OS loader "fixes up" addresses when it loads the program
  » Advantages: can support static multiprogramming
  » Disadvantages: inflexible, no protection, hard to share data, …
– Dynamic: the compiler generates addresses, but OS/HW reinterpret them
  » Advantages: very flexible, can use memory efficiently
  » Disadvantages: MMU hardware required, runtime translation overhead
"Real" OSes have processes use only virtual addresses; small embedded systems use physical addresses directly.

Uniprogramming (e.g., DOS)

[Figure: DOS memory layout — code (text segment) at 0x00000000, static data (data segment), uninitialized data (BSS segment), dynamically allocated heap (HP), stack (SP), and the addresses above 0xA0000 reserved for the DOS kernel]

– One process at a time
– User code is compiled to sit in a fixed range (e.g., [0, 640 KB])
  » No hardware virtualization of addresses
– OS sits in separate addresses (e.g., above 640 KB)
Goals:
– Safety: none (good and bad)
– Efficiency: poor (I/O and compute not overlapped, poor response time)

Multiprogramming: Static Relocation

[Figure: physical memory with the OS kernel reserved in a "high" region and two relocated processes, each with its own code (PC), data, heap (HP), and stack (SP), placed contiguously in lower memory]

The OS loader relocates programs:
– The OS is stored in a reserved "high" region
– The compiler maps each process starting at address 0
– When a process is started, the OS loader:
  » Allocates contiguous physical memory
  » Uses relocation info in the binary to fix up addresses to the relocated region
– TSRs in DOS were based on this technique
Problems:
– Finding/creating contiguous holes
– Dealing with processes that grow or shrink
Goals:
– Safety: none! A process can destroy other processes.
– Efficiency: poor → only one segment per process; slow load times; no sharing

Dynamic Relocation

[Figure: two processes with identical virtual layouts (code, data, heap, stack from 0x00000000 to 0xFFFFFFFF) mapped to different regions of physical memory, alongside the OS kernel]

Idea:
– All programs are laid out the same way
– Addresses are relocated when they are used
– "Requires" hardware support
Two views of memory:
– Virtual: the process's view
– Physical: the machine's view
Many variants:
– Base and bounds
– Segmentation
– Paging
– Segmented paging

Base and Bounds

[Figure: translation datapath — the virtual address is compared against the bounds register (trap if out of range) and added to the base register to form the physical address; each process's virtual range 0x00000–0x7ffff maps to its own contiguous region of physical memory]

Each process is mapped to a contiguous physical region.
Two hardware registers:
– Base: starting physical address
– Bounds: size in bytes
On each reference (see the sketch below):
– Check the virtual address against the bounds
– Add the base to get the physical address
Evaluation:
– Good points: …
– Bad points: …
The OS is handled specially. Example: Cray-1.

Base and Bounds (continued)

Each process has a private address space:
– No relocation is done at load time
The operating system is handled specially:
– It runs with relocation turned off (i.e., it ignores Base and Bounds)
– Only the OS can modify the Base and Bounds registers
Good points:
– Very simple hardware
Bad points:
– Only one contiguous segment per process → inhibits sharing
– External fragmentation → need to find or make holes
– Hard to grow segments
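
To make the check-and-add concrete, here is a minimal sketch in C of what a base-and-bounds MMU does on every reference. The register values, the translate/trap names, and the sample addresses are illustrative only; they are not taken from the slides or from any real hardware.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative per-process relocation registers. */
    static uint32_t base_reg   = 0x00040000;  /* start of this process's physical region */
    static uint32_t bounds_reg = 0x00080000;  /* size of the region in bytes             */

    static void trap(const char *why, uint32_t vaddr) {
        fprintf(stderr, "TRAP: %s (vaddr=0x%08x)\n", why, vaddr);
        exit(1);  /* a real OS would deliver a fault to the process instead */
    }

    /* Translate a virtual address: bounds check first, then add the base. */
    static uint32_t translate(uint32_t vaddr) {
        if (vaddr >= bounds_reg)
            trap("address outside bounds", vaddr);
        return base_reg + vaddr;
    }

    int main(void) {
        printf("VA 0x00000000 -> PA 0x%08x\n", translate(0x00000000));
        printf("VA 0x00001234 -> PA 0x%08x\n", translate(0x00001234));
        translate(0x00090000);   /* out of bounds: traps */
        return 0;
    }

Note that a context switch only has to reload these two registers, which is why the hardware can be this simple.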
Segmentation

Idea: create N separate segments
– Each segment has its own base and bounds register
– The segment number is a fixed portion of the virtual address

[Figure: the segment number selects a (base, bounds) pair from the segment table; the offset is checked against the bounds (Error! Trap) and added to the base to form the physical address]

Segmentation Example

– The virtual address space is 2000 bytes in size
– 4 segments of up to 500 bytes each, starting at virtual addresses 0, 500, 1000, and 1500

Segment table:
  Seg#   Base   Bounds
  0      1000   400
  1         0   500
  2       600   300
  3      1500   400

What if the processor accesses… (see the sketch after the next slide)
– VA 0
– VA 1040
– VA 1900
– VA 920
– VA 1898
What if we allocate:
– a 100-byte segment?
– a 200-byte segment?

[Figure: virtual layout Seg0–Seg3 at 0, 500, 1000, 1500 and the corresponding physical placement per the table — Seg1 at 0, Seg2 at 600, Seg0 at 1000, Seg3 at 1500]

Segmentation Discussion

Good features:
– More flexible than base and bounds → enables sharing (How?)
– Reduces the severity of fragmentation (How?)
– Small hardware table (e.g., 8 segments) → can fit entirely in the processor
Problems:
– Still have fragmentation → How? What kind?
– Hard to grow segments → Why?
– Non-contiguous virtual address space → Real problem?
Possible solutions:
– Fragmentation: copy and compact
– Growing segments: copy and compact
– Paging
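
Before moving on to paging, here is a sketch in C of the segment-table lookup from the Segmentation Example above. Because that example uses 500-byte segments at decimal boundaries rather than a power-of-2 split, the segment number is computed as vaddr / 500 instead of being taken from high-order bits; the struct layout and function names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    struct segment { uint32_t base, bounds; };

    /* Segment table from the example slide. */
    static const struct segment seg_table[4] = {
        { 1000, 400 },   /* seg 0 */
        {    0, 500 },   /* seg 1 */
        {  600, 300 },   /* seg 2 */
        { 1500, 400 },   /* seg 3 */
    };

    /* Returns 1 and fills *paddr on success, 0 on a bounds violation (trap). */
    static int translate(uint32_t vaddr, uint32_t *paddr) {
        uint32_t seg    = vaddr / 500;   /* which segment             */
        uint32_t offset = vaddr % 500;   /* offset within the segment */
        if (seg >= 4 || offset >= seg_table[seg].bounds)
            return 0;                    /* Error! (trap)             */
        *paddr = seg_table[seg].base + offset;
        return 1;
    }

    int main(void) {
        uint32_t vas[] = { 0, 1040, 1900, 920, 1898 };
        for (size_t i = 0; i < sizeof vas / sizeof vas[0]; i++) {
            uint32_t pa;
            if (translate(vas[i], &pa))
                printf("VA %4u -> PA %4u\n", vas[i], pa);
            else
                printf("VA %4u -> trap (out of bounds)\n", vas[i]);
        }
        return 0;
    }

Running it walks through the slide's exercises: VA 0 maps to physical 1000, VA 1040 lands in segment 2 at offset 40 (physical 640), and VA 1900 traps because its offset equals segment 3's 400-byte bound.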
Paging

The problem with segmentation → variable-sized segments. The solution → paging!
– Insist that all "chunks" be the same size (typically 512 bytes to 8 KB)
– Call them "pages" rather than "segments"
– Allocation is done in terms of whole, page-aligned pages → no bounds check needed
– The MMU maps virtual page numbers to physical page numbers

[Figure: the virtual page number indexes the page table to get a physical page number plus other per-page information ("What other info?"); the offset is wired straight through and concatenated with the physical page number to form the physical address]

Paging Discussion

How does this help?
– No external fragmentation!
– No forced holes in the virtual address space
– Easy translation → everything is aligned on power-of-2 addresses
– Easy for the OS to manage/allocate the free memory pool
What problems are introduced?
– What if you do not need an entire page? Internal fragmentation.
– The page table may be large
  » Where should we put it?
– How can we do fast translation if the table is not stored in the processor?
– How big should you make your pages?
  » Large: smaller table, demand paging is more efficient
  » Small: less fragmentation, finer-grained sharing, but a larger page table

Paging Examples

Assume 1000-byte pages.

Page table:
  VPN   PPN   Valid   R/O   Super
  0     3     Y       N     Y
  1     8     N       N     N
  2     5     Y       Y     N
  3     7     Y       N     N
  4     1     N       Y     Y

What if the processor accesses:
– VA 0
– VA 1040
– VA 2900
– VA 920
– VA 4998

[Figure: virtual pages VP0–VP4 mapped to scattered physical pages, with the unused physical pages kept on a free list]

x86 Paging

– x86 typically uses 4 KB pages
– Virtual addresses are 32 bits
Questions (a worked sketch appears after the next slide):
– How big is the offset field of a virtual address?
– How big is the virtual page number field?
– How many pages are in a virtual address space?
– How big is a flat page table? (Assume each PTE, or page table entry, is 32 bits.)

Key Idea From Today

Address space virtualization:
– Programs see virtual addresses
– The kernel can see both virtual and physical addresses
– The virtual and physical address spaces need not be the same size
– You must understand this to understand modern operating systems
The kernel plus hardware support the virtual-to-physical mapping:
– It has to be fast
– There are different ways to do it
– Modern OSes use paging
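
Finally, a minimal sketch in C that answers the x86 Paging questions and shows the VPN/offset split in code. It assumes 4 KB pages, 32-bit virtual addresses, and a flat one-level table of 32-bit entries; the PTE layout below (a 20-bit PPN plus a valid bit) is a simplification, since real x86 PTEs carry more flags.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT   12u                        /* 4 KB page -> 12-bit offset */
    #define PAGE_SIZE    (1u << PAGE_SHIFT)         /* 4096 bytes                 */
    #define OFFSET_MASK  (PAGE_SIZE - 1u)
    #define NUM_VPAGES   (1u << (32 - PAGE_SHIFT))  /* 2^20 virtual pages         */

    struct pte { uint32_t ppn : 20, valid : 1; };   /* fits in a 32-bit entry     */

    static uint32_t translate(const struct pte *page_table, uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* top 20 bits index the table */
        uint32_t offset = vaddr &  OFFSET_MASK;     /* low 12 bits pass through    */
        if (!page_table[vpn].valid)
            return 0xFFFFFFFFu;                     /* would raise a page fault    */
        return ((uint32_t)page_table[vpn].ppn << PAGE_SHIFT) | offset;
    }

    int main(void) {
        static struct pte table[NUM_VPAGES];        /* flat table: 4 MB of entries */
        table[0] = (struct pte){ .ppn = 7, .valid = 1 };

        printf("offset bits : %u\n", PAGE_SHIFT);
        printf("VPN bits    : %u\n", 32u - PAGE_SHIFT);
        printf("pages in VAS: %u\n", NUM_VPAGES);
        printf("flat table  : %u bytes\n", NUM_VPAGES * (uint32_t)sizeof(struct pte));
        printf("VA 0x00000123 -> PA 0x%08x\n", translate(table, 0x00000123u));
        return 0;
    }

With 4 KB pages the offset is 12 bits, the VPN is 20 bits, there are 2^20 (about one million) pages in the address space, and a flat table of 4-byte PTEs is 4 MB per process — which is exactly why the "page table may be large" problem above matters.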