Memory Management Unit (or TLB), Paging, and the Typical Address Space Layout

Memory Management Unit (or TLB)
• The position and function of the MMU in the virtual memory system.

Paging
• Virtual memory is divided into equal-sized pages.
• Physical memory is divided into equal-sized frames.
• A mapping is a translation between:
  – a page and a frame, or
  – a page and null.
• Mappings are defined at runtime, and they can change.
• The address space can have holes.
• A process does not have to be contiguous in physical memory.

Typical Address Space Layout
• The stack region is at the top and can grow down.
• The heap has free space to grow up.
• Text is typically read-only.
• The kernel is in a reserved, protected, shared region.
• Typical regions, from top to bottom: kernel, stack, shared libraries, BSS (heap), data, text (code).
• The 0-th page is typically not used. Why?

Virtual Address Space
• Programmer's perspective: the whole address space is logically present.
• System's perspective: some pages are not currently mapped; their data is on disk.
• A process may be only partially resident:
  – this allows the OS to store individual pages on disk;
  – it saves memory for infrequently used data and code.
• What happens if we access non-resident memory?

Page Faults
• Referencing an invalid page triggers a page fault, an exception handled by the OS.
• Broadly, there are two standard page fault types:
  – Illegal address (protection error): signal or kill the process.
  – Page not resident:
    – get an empty frame;
    – load the page from disk;
    – update the page (translation) table (enter the frame #, set the valid bit, etc.);
    – restart the faulting instruction.
• The page table describes the resident part of the address space.
• Note: some implementations store the disk block numbers of non-resident pages in the page table (with the valid bit unset).
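To make the page-not-resident path concrete, here is a minimal sketch of the steps above in C. The helper names (frame_alloc, disk_read_page, kill_process, address_is_legal) and the PTE layout are illustrative assumptions, not a real kernel API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative types and helpers -- assumptions, not an actual kernel interface. */
typedef struct {
    uint32_t frame;   /* physical frame number      */
    bool     valid;   /* present/absent (valid) bit */
} pte_t;

extern pte_t *page_table;                      /* per-process page table          */
extern bool   address_is_legal(uint32_t vpn);  /* protection check (illustrative) */
extern int    frame_alloc(void);               /* returns a free frame number     */
extern void   disk_read_page(uint32_t vpn, int frame);
extern void   kill_process(const char *why);

/* Called from the exception handler with the faulting virtual page number. */
void handle_page_fault(uint32_t vpn)
{
    if (!address_is_legal(vpn)) {        /* illegal address (protection error)   */
        kill_process("protection error"); /* signal or kill the process          */
        return;
    }

    pte_t *pte = &page_table[vpn];

    int frame  = frame_alloc();          /* 1. get an empty frame                */
    disk_read_page(vpn, frame);          /* 2. load the page from disk           */
    pte->frame = (uint32_t)frame;        /* 3. update the page table: enter the  */
    pte->valid = true;                   /*    frame #, set the valid bit        */
    /* 4. returning from the exception restarts the faulting instruction         */
}
```

A real kernel would also handle the case where no free frame exists by evicting a victim page (page replacement, covered later).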
Shared Pages
• Private code and data:
  – each process has its own copy of the code and data;
  – code and data can appear anywhere in the address space.
• Shared code:
  – a single copy of the code is shared between all processes executing it;
  – the code must not be self-modifying;
  – the code must appear at the same address in all processes.
• Example: two (or more) processes running the same program and sharing the text section.

Page Table Structure
• The page table is (logically) an array of frame numbers, indexed by page number.
• Each page-table entry (PTE) also has other bits.

PTE Bits
• Present/absent bit: also called the valid bit, it indicates a valid mapping for the page.
• Modified bit: also called the dirty bit, it indicates the page may have been modified in memory.
• Reference bit: indicates the page has been accessed.
• Protection bits: read, write, and execute permission, or combinations of these.
• Caching bit: used to indicate that the processor should bypass the cache when accessing memory (for example, to access device registers or device memory).

Address Translation
• Every (virtual) memory address issued by the CPU must be translated to a physical address:
  – every load and every store instruction;
  – every instruction fetch.
• This requires translation hardware.
• In a paging system, translation replaces a page number with a frame number.

Virtual Memory in Brief
• Virtual and physical memory are chopped up into pages (and frames).
• Programs use virtual addresses.
• Virtual-to-physical mapping is done by the MMU:
  – first check whether the page is present (present/absent bit);
  – if yes: the frame number from the page table forms the most significant bits of the physical address;
  – if no: bring the page in from disk (page fault).

Page Tables
• Assume a 32-bit virtual address (4 GByte address space) and a 4 KByte page size: how many page table entries do we need for one process?
• Assume a 64-bit virtual address (a humungous address space) and a 4 KByte page size: how many page table entries do we need for one process?
• Problems:
  – the page table is very large;
  – access has to be fast, since there is a lookup for every memory reference;
  – where do we store the page table? In registers? In main memory?
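To put rough numbers on the questions above: with 4 KiB pages the offset is 12 bits, so a 32-bit address leaves a 20-bit page number and a flat page table needs 2^20 (about one million) entries per process; a 64-bit address would need 2^52 entries, which is why a flat table is impractical there. The sketch below shows how such a flat, single-level table would translate an address. The PTE bit layout is a generic illustration, not any particular architecture's format.

```c
#include <stdint.h>

#define PAGE_BITS 12u                        /* 4 KiB pages                    */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES (1u << (32 - PAGE_BITS))   /* 2^20 entries for a 32-bit VA   */

typedef struct {
    uint32_t frame      : 20;  /* physical frame number                        */
    uint32_t present    : 1;   /* present/absent (valid) bit                   */
    uint32_t modified   : 1;   /* dirty bit                                    */
    uint32_t referenced : 1;   /* reference bit                                */
    uint32_t rwx        : 3;   /* protection bits                              */
    uint32_t nocache    : 1;   /* caching bit: bypass the cache                */
} pte_t;

/* Translate a virtual address with a flat (single-level) page table.
 * Returns 0 and sets *paddr on success, -1 on a page fault. */
int translate(const pte_t table[NUM_PAGES], uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_BITS;       /* page number = high-order bits */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset = low 12 bits          */

    if (!table[vpn].present)
        return -1;                              /* page fault                    */

    *paddr = ((uint32_t)table[vpn].frame << PAGE_BITS) | offset;
    return 0;
}
```

Keeping this 2^20-entry array per process in registers is out of the question; it lives in main memory, which leads directly to the multi-level and inverted schemes that follow.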
Page Tables (continued)
• Page tables are implemented as data structures in main memory.
• Most processes do not use the full 4 GB address space (e.g., 0.1–1 MB text, 0.1–10 MB data, 0.1 MB stack).
• We need a compact representation that does not waste space, but is still very fast to search.
• Three basic schemes:
  – use data structures that adapt to sparsity;
  – use data structures that only represent resident pages;
  – use VM techniques for the page tables themselves (details left to extended OS).

Two-level Page Table
• Second-level page tables representing unmapped pages are not allocated; the corresponding entries in the top-level page table are null.
• (The two-level translation and example translation slides are diagrams.)

Alternative: Inverted Page Table
• The lookup hashes (PID, virtual page number) into a hash anchor table (HAT), which points into the inverted page table (IPT); the IPT has one entry for each physical frame.
• The "inverted page table" is an array of page numbers sorted (indexed) by frame number; it is effectively a frame table.
• Lookup algorithm:
  – compute the hash of the page number;
  – extract an index from the hash anchor table;
  – use it to index into the inverted page table;
  – match the PID and page number in the IPT entry;
  – if they match, use the index value as the frame # for translation;
  – if they do not match, get the next candidate IPT entry from the chain field;
  – a NULL chain entry means a page fault.

Properties of IPTs
• The IPT grows with the size of RAM, not with the virtual address space.
• A frame table is needed anyway (for page replacement, covered later).
• A separate data structure is needed for non-resident pages.
• It saves a vast amount of space, especially on 64-bit systems.
• Used in some IBM and HP workstations.

Another Look at Sharing
• Given n processes, how many page tables will the system have with 'normal' per-process page tables, and how many with an inverted page table?
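Returning to the IPT lookup algorithm above, here is a minimal sketch of the hashed lookup in C. The table sizes, entry layout, and hash function are simplified assumptions for illustration, not a real implementation.

```c
#include <stdint.h>

#define NUM_FRAMES 4096u          /* one IPT entry per physical frame       */
#define HAT_SIZE   1024u          /* hash anchor table size (assumption)    */
#define CHAIN_NULL 0xFFFFFFFFu    /* sentinel: end of chain                 */

typedef struct {
    uint32_t pid;                 /* owning process                         */
    uint32_t vpn;                 /* virtual page number                    */
    uint32_t next;                /* chain to next candidate, or CHAIN_NULL */
} ipt_entry_t;

extern uint32_t    hat[HAT_SIZE];     /* hash anchor table                  */
extern ipt_entry_t ipt[NUM_FRAMES];   /* inverted page table                */

/* Look up (pid, vpn); returns the frame number, or -1 to signal a page fault. */
int ipt_lookup(uint32_t pid, uint32_t vpn)
{
    uint32_t idx = hat[(pid ^ vpn) % HAT_SIZE];  /* 1. hash the page number     */

    while (idx != CHAIN_NULL) {                  /* 2. walk the candidate chain */
        if (ipt[idx].pid == pid && ipt[idx].vpn == vpn)
            return (int)idx;                     /* 3. the index IS the frame # */
        idx = ipt[idx].next;                     /* 4. next candidate entry     */
    }
    return -1;                                   /* NULL chain => page fault    */
}
```

Because the table is indexed by frame, a successful match returns its own index as the frame number, which is exactly what makes its size track physical memory rather than the virtual address space.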
VM Implementation Issue
• Problem: each virtual memory reference can cause two physical memory accesses:
  – one to fetch the page table entry;
  – one to fetch or store the data.
  This is an intolerable performance impact.
• Solution: a high-speed cache for page table entries (PTEs), called a translation look-aside buffer (TLB):
  – contains recently used page table entries;
  – associative, high-speed memory, similar to cache memory;
  – may be under OS control (unlike the memory cache).

Translation Lookaside Buffer
• The TLB is an on-CPU hardware device.

TLB Operation
• Given a virtual address, the processor examines the TLB.
• If a matching PTE is found (TLB hit), the address is translated.
• Otherwise (TLB miss), the page number is used to index the process's page table in main memory:
  – if the page table contains a valid entry, reload the TLB and restart;
  – otherwise (page fault), check whether the page is on disk:
    – if it is on disk, swap it in;
    – otherwise, allocate a new page or raise an exception.

TLB Properties
• The page table is (logically) an array of frame numbers; the TLB holds a (recently used) subset of the PT entries.
  – Each TLB entry must be identified (tagged) with the page # it translates.
  – Access is by associative lookup: all TLB entries' tags are concurrently compared to the page #.
  – The TLB is associative (or content-addressable) memory.
• The TLB may or may not be under direct OS control:
  – Hardware-loaded TLB: on a miss, hardware performs the page table lookup and reloads the TLB (examples: x86, ARM).
  – Software-loaded TLB: on a miss, hardware generates a TLB miss exception and the exception handler reloads the TLB (examples: MIPS, and optionally Itanium).
• TLB size: typically 64–128 entries.
• There can be separate TLBs for instruction fetch and data access.
• TLBs can also be used with inverted page tables (and other page table structures).

TLB and Context Switching
• The TLB is a shared piece of hardware, while normal page tables are per-process (per address space), so TLB entries are process-specific.
• On a context switch we must either:
  – flush the TLB (invalidate all entries), which gives high context-switching overhead (Intel x86), or
  – tag entries with an address-space ID (ASID), a tagged TLB, used (in some form) on all modern architectures. A TLB entry then holds: ASID, page #, frame #, valid and write-protect bits.

TLB Effect
• Without a TLB: average number of physical memory references per virtual reference = 2.
• With a TLB (assuming a 99% hit ratio): average number of physical memory references per virtual reference = 0.99 * 1 + 0.01 * 2 = 1.01.

Recap: Simplified Components of the VM System
• Virtual address spaces (e.g., three processes), per-process page tables (or a single inverted page table / frame table), the CPU with its TLB, the frame pool, and physical memory.

MIPS R3000 Address Space Layout
• Segments from the top of the address space: kseg2 (from 0xC0000000 up to 0xFFFFFFFF), kseg1 (from 0xA0000000), then kseg0 and kuseg below.
• kuseg:
  – 2 gigabytes;
  – TLB translated (mapped);
  – cacheable (depending on the 'N' bit).
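As a closing illustration of the associative, ASID-tagged lookup described in the TLB slides above, the sketch below models a small software-visible TLB in C. The entry layout and sizes are assumptions for illustration, not the R3000's actual TLB format.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64            /* typical size is 64-128 entries          */

typedef struct {
    uint8_t  asid;                /* address-space ID (tagged TLB)           */
    uint32_t vpn;                 /* virtual page number (the tag)           */
    uint32_t frame;               /* physical frame number                   */
    bool     valid;
    bool     writable;            /* write-protect bit                       */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Associative lookup: hardware compares all tags concurrently; this model
 * simply scans the entries.  Returns true on a TLB hit. */
bool tlb_lookup(uint8_t asid, uint32_t vpn, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].asid == asid && tlb[i].vpn == vpn) {
            *frame = tlb[i].frame;
            return true;          /* TLB hit: translation is done            */
        }
    }
    return false;                 /* TLB miss: walk the page table, reload   */
                                  /* the TLB, and restart the access         */
}
```

With the ASID stored in each entry, a context switch only has to change the current ASID rather than flushing the whole TLB, which is the point of the tagged-TLB design above.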