COS 318: Operating Systems - Protection and Virtual Memory

Total pages: 16

File type: PDF; size: 1020 KB

Jaswinder Pal Singh and a Fabulous Course Staff
Computer Science Department, Princeton University
(http://www.cs.princeton.edu/courses/cos318/)

Outline
- Protection mechanisms and OS structures
- Virtual memory: protection and address translation

Some Protection Goals
- CPU
  - Enable the kernel to take the CPU away, to prevent a user from using the CPU forever
  - Users should not have this ability
- Memory
  - Prevent a user from accessing others' data
  - Prevent users from modifying kernel code and data structures
- I/O
  - Prevent users from performing "illegal" I/Os
- Question: what's the difference between protection and security?

Architecture Support for CPU Protection
- A privileged mode: the CPU is in either user mode or kernel (privileged) mode
  - User mode: regular instructions only; access to user memory
  - Kernel mode: regular and privileged instructions; access to both user and kernel memory
- An interrupt or exception/trap (INT) switches from user mode to kernel mode
- A special instruction (IRET) returns from kernel mode to user mode

Privileged Instruction Examples
- Memory address mapping
- Flush or invalidate the data cache
- Invalidate TLB entries
- Load and read system registers
- Change processor mode from kernel to user
- Change the voltage and frequency of the processor
- Halt a processor
- Reset a processor
- Perform I/O operations
- Question: what other architectural support for protection is there in the system?

OS Structures and Protection: Monolithic
- All kernel routines (services and utilities) are together, linked in a single large executable
  - Each can call any other
- Provides a system call API to user programs
- Examples: Linux, BSD Unix, Windows, ...
- Pros
  - Shared kernel space
  - Good performance
- Cons
  - Instability: a crash in any procedure brings the whole system down
  - Unwieldy: difficult to maintain and extend

Layered Structure
- Hiding information at each layer
- Layered dependency: level N uses only the levels beneath it, down to the hardware
- Examples
  - THE (6 layers; mostly for functionality splitting)
  - MS-DOS (4 layers)
- Pros
  - Layered abstraction: separation of concerns, elegance
- Cons
  - Inefficiency
  - Inflexibility

Possible Implementation: Protection Rings
- Privileged instructions can be executed only when the current privilege level (CPL) is 0
- Ring assignment:
  - Level 0: operating system kernel
  - Level 1: operating system services
  - Level 3: applications

Microkernel Structure
- Services are regular (user-level) processes
- The microkernel obtains services for user programs by messaging with the service processes
- Examples: Mach, Taos, L4, OS X
- Pros?
  - Flexibility to modify services
  - Fault isolation
- Cons?
  - Inefficient (boundary crossings)
  - Inconvenient to share data between the kernel and services

Virtual Machine
- A virtual machine monitor virtualizes the hardware and can run several OSes, each in its own virtual machine (VM1 ... VMk) on the raw hardware
- Examples
  - IBM VM/370
  - Java VM
  - VMWare, Xen
- What would you use a virtual machine for?
  - Does it just shift the problem to a level with less protection?
  - Testing?

Memory Protection
- Kernel vs. user mode, plus
- Virtual address spaces and address translation

The Big Picture
- DRAM is fast, but relatively expensive
- Disk is inexpensive, but slow
  - 100X less expensive
  - 100,000X longer latency
  - 1000X less bandwidth
- Goals
  - Run programs efficiently
  - Make the system safe
- Physical memory vs. the virtual memory abstraction:
  - No protection vs. every program isolated from all others and from the OS
  - Limited size vs. the illusion of "infinite" memory
  - Sharing visible to programs vs. transparent sharing (a program can't tell if physical memory is shared)
- Virtual addresses are translated to physical addresses

Issues
- Many processes
  - The more processes a system can handle, the better
- Address space size
  - Many processes, whose total size may exceed memory
  - Even one process may exceed the physical memory size
- Protection
  - A user process should not crash the system
  - A user process should not do bad things to other processes

Consider a Simple System
- Only physical memory; applications use it directly
- Run three processes: email, browser, gcc
  - Layout: OS at x9000, email at x7000, browser at x5000, gcc at x2500, free space down to x0000
- What if
  - browser writes at x7050?
  - email needs to expand?
  - browser needs more memory than is on the machine?

Need to Handle
- Protection
  - Errors/malice in one process should not affect others
- Finiteness
  - Not having the entire application/data in memory at once
  - Relocation
  - Not having the programmer worry about it (too much)

Handling Protection
- For each process, check each load and store instruction to allow only legal memory references
- The CPU issues an address; a check either permits the reference to physical memory or raises an error

Handling Finiteness
- A process should be able to run regardless of the physical memory size or where its data are physically placed
- Give each process a "fake" address space that is large, static, contiguous, and entirely its own
- As the process runs, relocate or map each load and store to an address in actual physical memory
- The CPU issues an address; the hardware checks and relocates it into physical memory

Virtual Memory
- Flexible
  - Processes (and data) can move in memory as they execute, and can be part in memory and part on disk
- Simple
  - Applications generate loads and stores to addresses in the contiguous, large, "fake" address space
- Efficient
  - 20/80 rule: 20% of memory gets 80% of the references
  - Keep that 20% in physical memory (a form of caching)
- Protective
  - The protection check is integrated with the translation mechanism

Address Mapping and Granularity
- Must have some "mapping" mechanism
  - Map virtual to physical addresses in RAM or disk
- The mapping must have some granularity
  - Finer granularity provides more flexibility
  - Finer granularity requires more mapping information

Generic Address Translation: the MMU
- CPU view
  - Virtual addresses: each process has its own memory space [0, high], its virtual address space
- Memory or I/O device view
  - Physical addresses
- The Memory Management Unit (MMU) translates a virtual address into a physical address for each load and store
- A combination of hardware and (privileged) software controls the translation

Where to Keep Translation Information? Goals of Translation
- Implicit translation for each memory reference
- A hit should be very fast
- Trigger an exception on a miss
- Protect from the user's errors
- Relative access costs in the hierarchy: registers; L1: 2-4x; L2-L3: 10-20x; memory: 100-500x; paging (disk): 20M-30Mx

Address Translation Methods
- Base and bound
- Segmentation
- Paging
- Multilevel translation
- Inverted page tables

Base and Bound (or Limit)
- Protection
  - A process can only access physical memory in [base, base+bound]
  - If the virtual address >= bound, raise an error; otherwise the physical address is base + virtual address
- On a context switch
  - Save/restore the base and bound registers
- Each program is loaded into a contiguous region of physical memory (example: the Cray-1, with a program at base 6250 occupying [6250, 6250+bound])
- Pros
  - Simple
  - Inexpensive (hardware cost: 2 registers, an adder, a comparator)
- Cons
  - Can't fit all processes in memory; have to swap
  - Fragmentation in memory
  - Relocate processes when they grow?
  - Compare and add on every instruction
  - Very coarse grained
- Why not have multiple contiguous segments for each process, and keep their base/bound data in hardware?

Segmentation
- Every process has a table of (seg, size) entries for its segments
- Treats each (seg, size) as a finer-grained (base, bound)
- Protection
  - Every entry contains access rights
- On a context switch
  - Save/restore the table in kernel memory
- Pros
  - The programmer knows the program and its segments, and so can be efficient
  - Easy to share data
- Cons
  - Complex management
  - Fragmentation

Segmentation Example
(assume a 2-bit segment ID and a 12-bit segment offset)
- The virtual address is (v-segment #, segment offset); if the offset >= the segment's size, raise an error; otherwise the physical address is the segment's start + offset
- Segment table:
  - code (00): start 0x4000, size 0x700
  - data (01): start 0x0000, size 0x500
  - (10): start 0, size 0
  - stack (11): start 0x2000, size 0x1000
- So virtual 0x0000-0x06ff (code) maps to physical 0x4000-0x46ff, virtual 0x1000-0x14ff (data) maps to physical 0x0000-0x04ff, and virtual 0x3000-0x3fff (stack) maps to physical 0x2000-0x2fff

Segmentation Example (Cont'd)
- Virtual memory for strlen(x):
  - Main: 240 store 1108, r2; 244 store pc+8, r31; 248 jump 360; 24c ...
  - strlen: 360 loadbyte (r2), r3; ...; 420 jump (r31)
  - x: 1108 a b c \0
- Physical memory for strlen(x):
  - x: 108 a b c \0
  - Main: 4240 store 1108, r2; 4244 store pc+8, r31; 4248 jump 360; 424c ...
  - strlen: 4360 loadbyte (r2), r3; ...; 4420 jump (r31)

Segmentation (Summary)
- Pros
  - Provides logical protection: the programmer "knows the program" and therefore how to design and manage its segments
  - Therefore efficient
  - Easy to share data
- Cons
  - Complex management; programmer burden
  - Fragmentation
  - Difficult to find the right granularity balance between ease of use and efficiency

Paging
- Use a fixed-size unit, the page, instead of the segment
- Use a page table to translate
  - The virtual address is (VPage #, offset); the page table maps VPage # to PPage #; the physical address is (PPage #, offset)
  - An out-of-range VPage # raises an error
- Various bits in each entry
- Context switch
  - Similar to segmentation
- What should the page size be?
- Pros
  - Simple allocation
  - Easy to share
- Cons
  - Big table
  - PTEs are needed even for big holes in memory

Paging Example
(page size: 4 bytes)
- Page table: VP# 0 -> PP# 4, VP# 1 -> PP# 3, VP# 2 -> PP# 1
- Virtual bytes a-d (page 0) land in physical page 4, e-h (page 1) in physical page 3, and i-l (page 2) in physical page 1

How Many PTEs Do We Need?
- Assume a 4KB page
  - Needs the "low order" 12 bits to address a byte within a page
- Worst case for a 32-bit address machine
  - # of processes x 2^20
  - 2^20 PTEs per page table (~4 Mbytes), but there might be ...

Segmentation with Paging
- The virtual address is (Vseg #, VPage #, offset); each segment table entry holds the seg size and a page table whose entries hold PPage # ...
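The base-and-bound check is simple enough to replay in a few lines. Below is a toy Python model of the hardware's comparator and adder, not real MMU code; the base value 6250 comes from the slides' Cray-1 figure, while the bound here is made up:

```python
def translate_base_bound(vaddr, base, bound):
    """Toy model of base-and-bound translation.

    The hardware does this with one comparator and one adder:
    fault if the virtual address falls outside [0, bound),
    otherwise relocate it by adding the base register.
    """
    if vaddr >= bound:
        raise MemoryError("protection fault: address out of bounds")
    return base + vaddr

# A program loaded at physical address 6250 (the Cray-1 example),
# with a hypothetical bound of 0x1000:
paddr = translate_base_bound(0x0420, base=6250, bound=0x1000)
```

Note that the compare and add happen on every single memory reference, which is one of the costs listed above.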
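The segmentation example translates mechanically. This sketch hardcodes the slides' segment table (2-bit segment ID, 12-bit offset), with Python standing in for the hardware lookup:

```python
# (segment start, segment size), indexed by the top 2 bits of a
# 14-bit virtual address; the values are from the slides' table.
SEG_TABLE = {
    0b00: (0x4000, 0x700),   # code
    0b01: (0x0000, 0x500),   # data
    0b10: (0x0000, 0x000),   # unused
    0b11: (0x2000, 0x1000),  # stack
}

def translate_seg(vaddr):
    seg, offset = vaddr >> 12, vaddr & 0xFFF
    start, size = SEG_TABLE[seg]
    if offset >= size:       # bound check against the segment size
        raise MemoryError("segmentation fault")
    return start + offset

# The last code byte, virtual 0x06ff, lands at physical 0x46ff:
assert translate_seg(0x06ff) == 0x46ff
```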
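Paging replaces the per-segment bound check with a fixed split of the address. This sketch uses the slides' toy configuration: 4-byte pages (so a 2-bit offset) and the page table VP 0 -> PP 4, VP 1 -> PP 3, VP 2 -> PP 1:

```python
PAGE_SIZE = 4                       # toy 4-byte pages, as in the slide
OFFSET_BITS = 2                     # log2(PAGE_SIZE)
PAGE_TABLE = {0: 4, 1: 3, 2: 1}     # VPage # -> PPage #

def translate_page(vaddr):
    vpn = vaddr >> OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    if vpn not in PAGE_TABLE:       # no mapping: page fault
        raise MemoryError("page fault")
    return (PAGE_TABLE[vpn] << OFFSET_BITS) | offset

# Virtual byte 0 ('a') lives in virtual page 0, which maps to
# physical page 4, i.e. physical address 16:
assert translate_page(0) == 16
```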
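The PTE arithmetic is worth checking explicitly. The slides do not fix a PTE size, so the 4-byte entry below is an assumption (a common choice on 32-bit machines):

```python
ADDR_BITS = 32       # 32-bit virtual addresses
PAGE_BITS = 12       # 4 KB pages -> 12 offset bits
PTE_SIZE = 4         # assumed 4-byte page-table entries

ptes_per_process = 1 << (ADDR_BITS - PAGE_BITS)   # 2^20 entries
table_bytes = ptes_per_process * PTE_SIZE         # ~4 MB per process
```

Multiplied by the number of processes, this linear-table cost is what motivates multilevel translation and inverted page tables.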
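The source breaks off on the segmentation-with-paging slide, but the scheme it introduces can be sketched as a two-step lookup: the segment entry bounds the virtual page number and points at a per-segment page table. The bit-field split and table contents below are hypothetical, not taken from the slides:

```python
PAGE_SIZE = 0x1000   # 4 KB pages

def translate_seg_page(vaddr, seg_table):
    vseg = vaddr >> 22                  # hypothetical: top 10 bits pick the segment
    vpn = (vaddr >> 12) & 0x3FF         # next 10 bits: virtual page #
    offset = vaddr & (PAGE_SIZE - 1)    # low 12 bits: page offset
    seg_size, page_table = seg_table[vseg]
    if vpn >= seg_size:                 # segment bound, measured in pages
        raise MemoryError("segment bounds fault")
    return (page_table[vpn] << 12) | offset

# One segment of 2 pages, mapped to physical pages 0x5 and 0x9:
seg_table = {0: (2, {0: 0x5, 1: 0x9})}
paddr = translate_seg_page(0x00001234, seg_table)   # vseg 0, vpn 1 -> 0x9234
```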