OS, Base & Bounds, Virtual Memory Intro


Instructor: Steven Ho
CS61C Su18 – Lecture 22 (7/30/2018)

Review
• Warehouse Scale Computers
  – Support many of the applications we have come to depend on
  – Software must cope with failure, load variation, and latency/bandwidth limitations
  – Hardware is sensitive to cost and energy efficiency
• Request-Level Parallelism
  – High request volume, each request largely independent
  – Replication for better throughput and availability
• MapReduce
  – Convenient data-level parallelism on large datasets across large numbers of machines
  – Spark is a framework for executing MapReduce algorithms

Word Count in Spark’s Python API
  # RDD: primary abstraction of a distributed collection of items
  file = sc.textFile("hdfs://…")
  # Two kinds of operations:
  #   Actions:         RDD → Value
  #   Transformations: RDD → RDD (e.g. flatMap, map, reduceByKey)
  file.flatMap(lambda line: line.split()) \
      .map(lambda word: (word, 1)) \
      .reduceByKey(lambda a, b: a + b)

MapReduce Word Count Example
• Map phase: (doc name, doc contents) → list(word, count)
  // "I do I learn" → [("I",1), ("do",1), ("I",1), ("learn",1)]
  map(key, value):
      for each word w in value:
          emit(w, 1)
• Reduce phase: (word, list(count)) → (word, count_sum)
  // ("I", [1,1]) → ("I", 2)
  reduce(key, values):
      result = 0
      for each v in values:
          result += v
      emit(key, result)
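To make the two phases concrete, here is a minimal, framework-free Python sketch of the same word count: the map step emits (word, 1) pairs, a shuffle step groups them by word, and the reduce step sums each group. The function names and the tiny in-memory "documents" are illustrative assumptions, not part of the lecture; a real MapReduce or Spark job distributes these steps across many machines.

  from collections import defaultdict

  def map_phase(doc_name, doc_contents):
      # Emit a (word, 1) pair for every word in the document.
      return [(word, 1) for word in doc_contents.split()]

  def shuffle(pairs):
      # Group all counts for the same word together: word -> [1, 1, ...].
      grouped = defaultdict(list)
      for word, count in pairs:
          grouped[word].append(count)
      return grouped

  def reduce_phase(word, counts):
      # Sum the counts for one word.
      return (word, sum(counts))

  docs = {"doc1": "I do I learn"}   # toy input: one "document"
  pairs = []
  for name, contents in docs.items():
      pairs.extend(map_phase(name, contents))
  result = dict(reduce_phase(w, c) for w, c in shuffle(pairs).items())
  print(result)                     # {'I': 2, 'do': 1, 'learn': 1}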
Agenda
• OS Intro
• Administrivia
• OS Boot Sequence and Operation
• Multiprogramming/time-sharing
• Introduction to Virtual Memory

CS61C so far…
[Diagram: C programs (e.g., int fib(int n) { return fib(n-1) + fib(n-2); }) compile to RISC-V assembly (lw, addi, beq, …), which runs on the CPU with caches and memory — the levels covered by Projects 1–3 and the labs.]

So how is this any different?
[Diagram: the same machine, now with a screen, keyboard, and storage attached.]

Adding I/O
[Diagram: the CS61C stack from above — C programs, RISC-V assembly, CPU, caches, memory — extended with I/O (Input/Output) devices: screen, keyboard, storage.]

Raspberry Pi ($40 on Amazon)
[Photo of the board: CPU + caches etc., memory, serial I/O (USB), storage I/O (micro SD card), network I/O (Ethernet), screen I/O (HDMI).]

It’s a real computer!

But wait…
• That’s not the same! When we run Venus, it only executes one program and then stops.
• When I switch on my computer, I get this: [screenshot]
• Yes, but that’s just software: the Operating System (OS)!

Well, “just software”
• The biggest piece of software on your machine?
• How many lines of code? These are guesstimates: 84 million lines of code!
  [Figure: “Codebases (in millions of lines of code)”, CC BY-NC 3.0 — David McCandless © 2015, http://www.informationisbeautiful.net/visualizations/million-lines-of-code/]

What does the OS do?
• One of the first things that runs when your computer starts (right after firmware/bootloader)
• Loads, runs, and manages programs:
  – Multiple programs at the same time (time-sharing)
  – Isolates programs from each other (isolation)
  – Multiplexes resources between applications (e.g., devices)
• Services: file system, network stack, etc.
• Finds and controls all the devices in the machine in a general way (using “device drivers”)

Agenda
• OS Intro
• Administrivia
• OS Boot Sequence and Operation
• Multiprogramming/time-sharing
• Introduction to Virtual Memory

Administrivia
• Proj4 due on Friday (8/03)
  – Project Party tonight, Soda 405/411, 4–6pm
• HW6 due tonight, HW7 released
• Guerilla Session on Wed. @ Cory 540AB
• The final will be 8/09, 7–10pm @ VLSB 2040/2060!
• We’re almost there! :D

Agenda
• OS Intro
• Administrivia
• OS Boot Sequence and Operation
• Multiprogramming/time-sharing
• Introduction to Virtual Memory

What happens at boot?
• When the computer switches on, it does the same thing as Venus: the CPU executes instructions from some start address (stored in Flash ROM).
  [Diagram: PC = 0x2000 (some default value); the memory-mapped address space at 0x2000 holds code (addi, lw, …) that copies firmware into regular memory and jumps into it.]
• The boot sequence:
  1. BIOS: Find a storage device and load the first sector (block of data).
  2. Bootloader (stored on, e.g., disk): Load the OS kernel from disk into a location in memory and jump into it.
  3. OS boot: Initialize services, drivers, etc.
  4. Init: Launch an application that waits for input in a loop (e.g., Terminal/Desktop/…).

Launching Applications
• Applications are called “processes” in most OSs.
• A process is created by another process calling into an OS routine (using a “syscall”; more details later).
  – The details depend on the OS, but Linux uses fork (see OpenMP threads) to create a new process and execve to load the application.
• The OS loads the executable file from disk (using the file system service), puts its instructions & data into memory (.text, .data sections), and prepares the stack and heap.
• It then sets argc and argv and jumps into the main function. (A sketch of this fork/exec pattern follows below.)
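As a rough illustration of that fork-then-load pattern — not the kernel’s or Venus’s actual code — here is a sketch using Python’s os module, which wraps the underlying Unix fork/execve/waitpid syscalls. The program "/bin/ls" and its argument list are arbitrary examples, and the launch helper is a name invented for this sketch.

  import os

  def launch(program, argv):
      # Minimal sketch of how one process launches another (Unix-only).
      pid = os.fork()                # syscall: duplicate the current process
      if pid == 0:
          # Child: replace this process image with the new program.
          # The OS loads the executable, sets up argv, and starts at its entry point.
          os.execv(program, argv)    # does not return if it succeeds
      else:
          # Parent: wait for the child to finish (another syscall).
          _, status = os.waitpid(pid, 0)
          return status

  launch("/bin/ls", ["ls", "-l"])    # example program and arguments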
Supervisor Mode
• If something goes wrong in an application, it can crash the entire machine. And what about malware, etc.?
• The OS may need to enforce resource constraints on applications (e.g., access to devices).
• To protect the OS from applications, CPUs have a supervisor mode bit (we also need isolation; more on that later).
  – When not in supervisor mode (i.e., in user mode), a program can only access a subset of instructions and (physical) memory.
  – You can change out of supervisor mode using a special instruction, but not into it (unless there is an interrupt).

Syscalls
• How do we switch back to the OS? The OS sets a timer interrupt; when the interrupt triggers, the CPU drops into supervisor mode.
• What if we want to call into an OS routine (e.g., to read a file, launch a new process, or send data)?
  – We perform a syscall: set up the function arguments in registers, then raise a software interrupt.
  – The OS performs the operation and returns to user mode.
• This way, the OS can mediate access to all resources, including devices, the CPU itself, etc.

Syscalls in Venus
• Venus provides many simple syscalls using the ecall RISC-V instruction.
• How to issue a syscall:
  – Place the syscall number in a0.
  – Place the argument to the syscall in the a1 register.
  – Issue the ecall instruction.
• This is how your RISC-V code has been able to produce output all along.
• ecall details depend on the ABI (Application Binary Interface).

Example Syscall
• Let’s say we want to print an integer stored in s3. Print integer is syscall #1:
  li   a0, 1        # syscall number: print integer
  add  a1, s3, x0   # argument: the value to print
  ecall

Venus’s Environmental Calls
[Table of Venus’s environmental calls (ecall numbers and their effects), shown as a screenshot in the original slides.]

Agenda
• OS Intro
• Administrivia
• OS Boot Sequence and Operation
• Multiprogramming/time-sharing
• Introduction to Virtual Memory

Multiprogramming
• The OS runs multiple applications “at the same time” — but not really (unless there is a core per process).
• It switches between processes very quickly; this is called a “context switch”.
• When jumping into a process, the OS sets a timer interrupt.
  – When it expires, the OS stores the PC, registers, etc. (the process state).
  – It picks a different process to run and loads its state.
  – It sets the timer, changes to user mode, and jumps to the new PC.
• Deciding which process to run is called scheduling. (A toy simulation of this switching loop follows below.)
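To make the context-switch loop concrete, here is a purely illustrative Python toy (an assumption of this write-up, not real OS code): each “process” is a generator, its saved state is just the generator object, and the “timer interrupt” is a fixed number of steps before the scheduler switches. A real OS saves actual CPU state (PC, registers) and is driven by hardware timer interrupts.

  from collections import deque

  def process(name, work_items):
      # A toy "process": yields after each unit of work
      # (stand-in for running until the next timer interrupt).
      for item in range(work_items):
          print(f"{name}: step {item}")
          yield

  def round_robin(processes, timeslice=2):
      # Toy scheduler: run each process for `timeslice` steps, then
      # "context switch" by saving it at the back of the ready queue.
      ready = deque(processes)
      while ready:
          current = ready.popleft()      # pick a process to run
          for _ in range(timeslice):     # run until the "timer" expires
              try:
                  next(current)
              except StopIteration:      # process finished: don't requeue it
                  break
          else:
              ready.append(current)      # save its state; schedule it again later

  round_robin([process("A", 3), process("B", 5)])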