The Page-Fault Weird Machine: Lessons in Instruction-Less Computation
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- Virtualization
  Slides by Dave Eckhardt and Roger Dannenberg (April 2009), based on material from Mike Kasick, Glenn Willen, and Mike Cui. Defines virtualization as presenting and partitioning computing resources logically rather than according to physical reality, and a virtual machine as an execution environment logically identical to a physical machine, able to run a full operating system. The process abstraction is a related pseudo-machine (its own registers, address space, and file descriptors), but its isolation is limited: processes share the file system, a single superuser, and the same kernel. The outline also covers x86 virtualization, paravirtualization, and alternatives for isolation.
- Intermediate x86 Part 2
  Training slides by Xeno Kovah (2010, CC BY-SA 3.0). Recaps that segmentation translates a logical address (segment selector + offset) into a 32-bit linear address, then introduces paging: with paging disabled, linear addresses map 1:1 to physical addresses; with paging enabled, each linear address must be translated to find the physical address it corresponds to. Physical memory is divided into fixed-size pages, a page-sized chunk of physical memory is called a frame, and once paging is on, "linear address" and "virtual address" mean the same thing. With paging, the 32-bit linear address space can be mapped onto a physical address space of fewer than 32 bits. Figures and manual references follow the November 2008 Intel manuals.
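  As a concrete companion to that description, here is a minimal sketch (not from the slides) of how a 32-bit linear address is carved up under classic non-PAE paging with 4 KB pages: a 10-bit page-directory index, a 10-bit page-table index, and a 12-bit offset into the frame.

      #include <stdint.h>
      #include <stdio.h>

      /* Split a 32-bit linear address the way the classic two-level page walk
       * does (non-PAE, 4 KB pages): bits 31-22 select the page-directory entry,
       * bits 21-12 select the page-table entry, bits 11-0 are the offset into
       * the physical frame. */
      static void split_linear(uint32_t linear)
      {
          uint32_t dir_index   = (linear >> 22) & 0x3FF;
          uint32_t table_index = (linear >> 12) & 0x3FF;
          uint32_t offset      = linear & 0xFFF;
          printf("linear 0x%08x -> PDE %u, PTE %u, offset 0x%03x\n",
                 (unsigned)linear, (unsigned)dir_index,
                 (unsigned)table_index, (unsigned)offset);
      }

      int main(void)
      {
          split_linear(0x00403025);  /* arbitrary example address */
          return 0;
      }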
- OS Structures and System Calls (COS 318: Operating Systems)
  Lecture slides by Kai Li (Computer Science Department, Princeton University, http://www.cs.princeton.edu/courses/cos318/). Lays out the protection problem: the kernel must be able to take the CPU away from a user so no one monopolizes it, users must not access other users' data or modify kernel code and data structures, and "illegal" I/O must be blocked. The hardware support is a privileged (kernel) mode alongside user mode, entered via an interrupt or exception and left via IRET; privileged instructions (changing address mappings, flushing caches, invalidating TLB entries, performing I/O, halting or resetting the processor, and so on) can execute only when the current privilege level (CPL) is 0 in the x86 ring model (rings 0 through 3). Closes with layered OS structures such as THE (6 layers) and MS-DOS (4 layers), and with system versus library calls.
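  For a quick sanity check of the ring model described there, here is a small sketch (my own, not from the slides) that reads the code-segment selector with GCC-style inline assembly; the selector's two low bits are the current privilege level, so an ordinary Linux user process prints 3.

      #include <stdint.h>
      #include <stdio.h>

      /* The CPL lives in the two low bits of the CS selector. Reading CS is
       * allowed at any privilege level, so user code can observe that it runs
       * in ring 3. Assumes GCC/Clang inline assembly on x86 or x86-64. */
      static unsigned current_cpl(void)
      {
          uint16_t cs;
          __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
          return cs & 0x3u;
      }

      int main(void)
      {
          printf("running at CPL %u\n", current_cpl());
          return 0;
      }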
- Multiprocessing
  A compiled set of Wikipedia articles on multiprocessing and computer multitasking. The multiprocessing chapter covers processor symmetry, instruction and data streams, processor coupling, multiprocessor communication architecture, and Flynn's taxonomy (SISD, SIMD, MISD, and MIMD multiprocessing); the multitasking chapter covers multiprogramming, cooperative and preemptive multitasking, real time, multithreading, memory protection, memory swapping, and programming considerations.
- x86 Memory Protection and Translation (COMP 790: OS Implementation)
  Lecture slides by Don Porter (dated February 2020). The goal is to understand the hardware tools a modern x86 processor provides for manipulating and protecting memory, in preparation for a lab that programs that hardware. Two system goals frame the material: give each program an abstraction of contiguous, isolated virtual memory, and prevent illegal operations, meaning access to other applications' or OS memory, late failure detection (for example, segfaulting on address 0), and exploits that try to execute program data. Reviews virtual memory, segmentation, paging, and x86 processor modes, and teases how thread-local storage (TLS) works as a motivating trick.
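  Since the deck dangles "how does thread-local storage work?" as a teaser, here is a minimal illustration (mine, not Porter's) of TLS in C11 with pthreads; on x86 the compiler typically reaches such variables through the FS or GS segment base, which is why TLS belongs in a lecture on segmentation.

      #include <pthread.h>
      #include <stdio.h>

      /* Each thread sees its own copy of this variable. On x86 the compiler
       * usually addresses it relative to the FS or GS segment base (which one
       * depends on the OS and ABI), so segmentation is not entirely a no-op
       * even on "flat" systems. Build with: cc -pthread tls.c */
      static _Thread_local int counter = 0;

      static void *worker(void *arg)
      {
          (void)arg;
          for (int i = 0; i < 3; i++)
              counter++;                  /* increments this thread's copy only */
          printf("worker: counter = %d\n", counter);
          return NULL;
      }

      int main(void)
      {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, NULL);
          pthread_create(&t2, NULL, worker, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("main:   counter = %d\n", counter);   /* still 0 here */
          return 0;
      }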
- Protected mode (Wikipedia)
  The Wikipedia article on protected mode, also called protected virtual address mode: the operational mode of x86-compatible CPUs that lets system software use virtual memory, paging, and safe multitasking to increase the operating system's control over applications. A processor that supports it still powers up in real mode for backward compatibility; protected mode is entered only after system software sets up a descriptor table and sets the Protection Enable (PE) bit in control register CR0. It first appeared with the 80286 in 1982, was extended by the 80386 in 1985, and became the foundation for all subsequent x86 enhancements, many of which also benefit real mode. https://en.wikipedia.org/wiki/Protected_mode
- Operating Systems & Virtualisation Security Knowledge Area (CyBOK, Issue 1.0)
  Written by Herbert Bos (Vrije Universiteit Amsterdam) and edited by Andrew Martin (Oxford University) for the Cyber Security Body of Knowledge (www.cybok.org), with reviews by Chris Dalton, David Lie, Gernot Heiser, and Mathias Payer. Introduces the principles, primitives, and practices for ensuring security at the operating system and hypervisor levels.
- Porting the QEMU virtualization software to MINIX 3
  Master's thesis by Erik van der Kouwe (Vrije Universiteit Amsterdam, 2009), supervised by Andrew S. Tanenbaum. MINIX 3 pursues reliability and security by keeping privileged code small and simple, but few major programs, and no virtualization software, had been ported to it. The thesis ports QEMU, which provides full system virtualization with an emphasis on portability and speed, and finds the port feasible: the work mainly consists of adding standardized POSIX functions that MINIX previously lacked, without conflicting with its design principles. It also identifies areas where MINIX performs worse than Linux when running QEMU, investigates the causes, and offers recommendations for simplifying future ports.
Chapter 3 Protected-Mode Memory Management
CHAPTER 3 PROTECTED-MODE MEMORY MANAGEMENT This chapter describes the Intel 64 and IA-32 architecture’s protected-mode memory management facilities, including the physical memory requirements, segmentation mechanism, and paging mechanism. See also: Chapter 5, “Protection” (for a description of the processor’s protection mechanism) and Chapter 20, “8086 Emulation” (for a description of memory addressing protection in real-address and virtual-8086 modes). 3.1 MEMORY MANAGEMENT OVERVIEW The memory management facilities of the IA-32 architecture are divided into two parts: segmentation and paging. Segmentation provides a mechanism of isolating individual code, data, and stack modules so that multiple programs (or tasks) can run on the same processor without interfering with one another. Paging provides a mech- anism for implementing a conventional demand-paged, virtual-memory system where sections of a program’s execution environment are mapped into physical memory as needed. Paging can also be used to provide isolation between multiple tasks. When operating in protected mode, some form of segmentation must be used. There is no mode bit to disable segmentation. The use of paging, however, is optional. These two mechanisms (segmentation and paging) can be configured to support simple single-program (or single- task) systems, multitasking systems, or multiple-processor systems that used shared memory. As shown in Figure 3-1, segmentation provides a mechanism for dividing the processor’s addressable memory space (called the linear address space) into smaller protected address spaces called segments. Segments can be used to hold the code, data, and stack for a program or to hold system data structures (such as a TSS or LDT). -
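  To make the segment-descriptor packing discussed in that chapter concrete, here is a small sketch (my own reading of the layout, not code from the manual) that unpacks the base, limit, and granularity flag from an 8-byte descriptor; the example value is a typical flat 32-bit code segment.

      #include <stdint.h>
      #include <stdio.h>

      /* Unpack an 8-byte segment descriptor: limit[15:0] in bits 0-15,
       * base[15:0] in bits 16-31, base[23:16] in bits 32-39, the access byte
       * in bits 40-47, limit[19:16] plus flags in bits 48-55, and base[31:24]
       * in bits 56-63. If the granularity (G) flag is set, the limit is
       * counted in 4 KB units. */
      static void decode_descriptor(uint64_t d)
      {
          uint32_t base  = (uint32_t)((d >> 16) & 0xFFFFFF) |
                           (uint32_t)(((d >> 56) & 0xFF) << 24);
          uint32_t limit = (uint32_t)(d & 0xFFFF) |
                           (uint32_t)(((d >> 48) & 0xF) << 16);
          int g = (int)((d >> 55) & 1);
          if (g)
              limit = (limit << 12) | 0xFFF;
          printf("base=0x%08x limit=0x%08x (%s granularity)\n",
                 base, limit, g ? "4 KB" : "byte");
      }

      int main(void)
      {
          /* Flat 32-bit code segment: base 0, limit 0xFFFFF with G=1 (4 GB). */
          decode_descriptor(0x00CF9A000000FFFFULL);
          return 0;
      }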
- x86 Memory Protection and Translation (CSE 506)
  Lecture slides by Don Porter for CSE 506, a close variant of the COMP 790 deck above. Walks through the x86 processor modes: real mode (the state at boot, with a 20-bit address space, direct physical memory access, and segmentation but no paging), protected mode (the standard 32-bit mode, with segmentation, paging, and separate user and kernel privilege levels), and long mode (64-bit mode, very similar to protected mode but with segmentation restricted and deprecated instructions removed). Translation happens in two stages: segmentation turns a virtual address into a linear address, and paging turns the linear address into a physical one (the running example is 0xdeadbeef -> 0x0eadbeef -> 0x6eadbeef). Segmentation cannot be disabled, but it can be a no-op ("flat mode"); a segment has a base (linear) address, a length, and a type (code, data, and so on).
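  Here is a minimal model (with made-up segment values, not the ones behind the slides' example) of that first stage: segmentation adds a base to the offset after a limit check, and a flat segment with base 0 and a 4 GB limit makes the stage a no-op.

      #include <stdint.h>
      #include <stdio.h>

      /* First translation stage: a segment maps an offset to a linear address
       * by adding its base, after checking the offset against the limit. A
       * "flat" segment (base 0, limit 0xFFFFFFFF) makes this a no-op. */
      struct segment {
          uint32_t base;   /* linear address where the segment starts */
          uint32_t limit;  /* highest valid offset within the segment */
      };

      static int segment_translate(const struct segment *seg, uint32_t offset,
                                   uint32_t *linear_out)
      {
          if (offset > seg->limit)
              return -1;                    /* real hardware raises #GP here */
          *linear_out = seg->base + offset;
          return 0;
      }

      int main(void)
      {
          struct segment flat  = { .base = 0x00000000, .limit = 0xFFFFFFFF };
          struct segment small = { .base = 0x30000000, .limit = 0x000FFFFF };
          uint32_t linear;

          if (segment_translate(&flat, 0xdeadbeefu, &linear) == 0)
              printf("flat:  offset 0xdeadbeef -> linear 0x%08x\n", linear);
          if (segment_translate(&small, 0x00001000u, &linear) == 0)
              printf("small: offset 0x00001000 -> linear 0x%08x\n", linear);
          if (segment_translate(&small, 0xdeadbeefu, &linear) != 0)
              printf("small: offset 0xdeadbeef -> fault (beyond limit)\n");
          return 0;
      }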
- Resilient Virtualized Systems Using ReHype
  Paper by Michael Le and Yuval Tamir (Concurrent Systems Laboratory, UCLA Computer Science Department, October 2014). System-level virtualization makes hosted VMs vulnerable to failures of the virtualization infrastructure (VI) itself, so the authors build a resilient VI whose components can recover from hardware- or software-induced failures transparently to the hosted virtual machines. The focus is ReHype, a mechanism for recovering from hypervisor failures; its Xen implementation, developed incrementally using fault-injection results, involves about 900 lines of code and 2.1 MB of memory overhead. Fault-injection campaigns show recovery from over 88% of detected hypervisor failures in under 750 ms, and with recovery mechanisms for the other VI components, a Web service hosted on a cluster of VMs keeps its services available through over 87% of detected failures.
- Computer Architectures: An Overview
  A compilation of Wikipedia articles (generated with the open-source mwlib toolkit in February 2012) covering microarchitecture, the x86, PowerPC, IBM POWER, MIPS, SPARC, ARM, and DEC Alpha architectures, the AlphaStation and AlphaServer product lines, and very long instruction word, instruction-level parallelism, and explicitly parallel instruction computing. The opening article explains that a microarchitecture (µarch) is the way a given instruction set architecture (ISA) is implemented on a processor, that one ISA can be implemented by many different microarchitectures as design goals and technology shift, and that computer architecture is the combination of microarchitecture and instruction set design. The ISA is roughly the programming model visible to an assembly programmer or compiler writer (execution model, registers, address and data formats), while the microarchitecture describes the constituent parts of the processor and how they interconnect, usually shown as diagrams ranging from single gates and registers up to complete ALUs and larger blocks.