Stealthy Page Table-Based Attacks on Enclaved Execution


Telling Your Secrets Without Page Faults: Stealthy Page Table-Based Attacks on Enclaved Execution

Jo Van Bulck, imec-DistriNet, KU Leuven; Nico Weichbrodt and Rüdiger Kapitza, IBR DS, TU Braunschweig; Frank Piessens and Raoul Strackx, imec-DistriNet, KU Leuven

This paper is included in the Proceedings of the 26th USENIX Security Symposium, August 16–18, 2017, Vancouver, BC, Canada. ISBN 978-1-931971-40-9. Open access to the Proceedings of the 26th USENIX Security Symposium is sponsored by USENIX.
https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/van-bulck

Abstract

Protected module architectures, such as Intel SGX, enable strong trusted computing guarantees for hardware-enforced enclaves on top of a potentially malicious operating system. However, such enclaved execution environments are known to be vulnerable to a powerful class of controlled-channel attacks. Recent research convincingly demonstrated that adversarial system software can extract sensitive data from enclaved applications by carefully revoking access rights on enclave pages and recording the associated page faults. In response, a number of state-of-the-art defense techniques have been proposed that suppress page faults during enclave execution.

This paper shows, however, that page table-based threats go beyond page faults. We demonstrate that an untrusted operating system can observe enclave page accesses without resorting to page faults, by exploiting other side-effects of the address translation process. We contribute two novel attack vectors that infer enclaved memory accesses from page table attributes, as well as from the caching behavior of unprotected page table memory. We demonstrate the effectiveness of our attacks by recovering EdDSA session keys with little to no noise from the popular Libgcrypt cryptographic software suite.

1 Introduction

Enclaved execution, or support for protected modules, is a promising new security paradigm that makes it possible to execute application code on a platform without having to trust the underlying operating system or hypervisor. With the advent of Intel SGX [32], support for Protected Module Architectures (PMAs) is now available on mainstream consumer hardware, and can be used to defend against malicious or compromised system software, both in an untrustworthy cloud environment [3, 36] as well as for desktop applications [18]. In particular, one line of research has developed techniques and supporting software to make it relatively easy to run unmodified legacy applications within an enclave [3, 2, 41, 45].

An essential aspect of enclaved execution is that the hardware prevents privileged system software from reading or writing a module's private memory directly, or from tampering with its internal control flow. However, the OS remains in charge of allocating platform resources (memory pages and CPU time) to protected modules, such that the platform can be protected against misbehaving or buggy enclaves. One consequence of this interaction between privileged system software and enclaves is an entirely new class of powerful, indirect attacks on enclaved applications. Xu et al. [48] first showed how a malicious OS can use page faults as a noise-free controlled-channel to extract rich information (full text and images) from a single run of a victim enclave. This is particularly dangerous when legacy software is running within an enclave, as these applications have not been hardened against side-channel attacks. As a result, several authors have expressed their concerns on side-channel vulnerabilities in a PMA setting in general, and the page fault channel in particular [12, 9, 43, 39, 7].

The research community has since proposed a number of compile-time and hardware-enabled defense techniques [40, 10, 39] that hide enclave page accesses from the OS. We argue, however, that page faults are but one side-effect of the address translation process that is observable by untrusted system software. More specifically, the main contribution of this paper is that we show that an adversarial OS can infer page accesses from an enclaved execution that never suffers a page fault. Our attacks exploit the key property that the SGX design leaves page table memory under explicit control of the untrusted OS. As such, other side-effects of the page table walk in enclave mode can be observed by the OS with very little to no noise. We identify and successfully exploit straightforward effects such as the setting of "accessed" and "dirty" bits, as well as less obvious effects such as the caching of page table memory itself. An important consequence is that our novel attack vectors bypass recent defenses that focus exclusively on suppressing page faults [40, 39].
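To make the first of these side-effects concrete, the minimal C sketch below illustrates how system software that controls page table memory could monitor the "accessed" and "dirty" attributes of an enclave page without ever provoking a page fault. This is an illustrative sketch, not the paper's attack framework: map_victim_pte() and flush_tlb_page() are hypothetical helpers standing in for the kernel code that locates the victim's page table entry and evicts the stale TLB entry.

    /*
     * Minimal sketch (assumed helpers, not the paper's framework): inferring
     * enclave page accesses from the x86 "accessed"/"dirty" PTE attributes.
     */
    #include <stdint.h>
    #include <stdbool.h>

    #define PTE_ACCESSED (1ULL << 5)   /* x86 PTE bit 5: set by the MMU on any access */
    #define PTE_DIRTY    (1ULL << 6)   /* x86 PTE bit 6: set by the MMU on a write    */

    extern volatile uint64_t *map_victim_pte(void *enclave_vaddr); /* hypothetical */
    extern void flush_tlb_page(void *enclave_vaddr);                /* hypothetical */

    /* Arm the channel: clear the A/D bits and flush the cached translation so
     * the next enclave access forces a fresh page table walk that sets them. */
    static void arm_page(volatile uint64_t *pte, void *enclave_vaddr)
    {
        *pte &= ~(PTE_ACCESSED | PTE_DIRTY);
        flush_tlb_page(enclave_vaddr);
    }

    /* After letting the enclave run, read back which pages it touched. */
    static bool page_was_accessed(volatile uint64_t *pte) { return *pte & PTE_ACCESSED; }
    static bool page_was_written(volatile uint64_t *pte)  { return *pte & PTE_DIRTY; }

Because the MMU updates these attributes as a side-effect of the page table walk itself, reading them back after resuming the enclave reveals which pages were touched, and no fault is ever delivered.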
In summary, the contributions of this paper are:

• We advance the state-of-the-art by defeating recently proposed defense techniques, showing that we can infer page accesses without resorting to page faults.
• We present a page table-based technique to precisely interrupt an enclave at instruction-level granularity.
• We implement our novel attack vectors as an extension to Graphene-SGX's untrusted runtime, facilitating eavesdropping on unmodified applications.
• We demonstrate the effectiveness of our attacks by extracting private EdDSA session keys from the widely used Libgcrypt cryptographic library.

Our attack framework and evaluation scenarios are available as free software, licensed under GPLv3, at https://github.com/jovanbulck/sgx-pte.

Figure 1: Additional memory access checks performed by SGX for a virtual address vadrs that maps to a physical address padrs.

2 Background

In this section, we provide the necessary background on Intel SGX, refine the attacker model, and discuss previous research results on controlled-channel attacks.

2.1 Intel SGX

Recent Intel x86 processors from Skylake onwards are being shipped with Software Guard eXtensions (SGX) [32, 1, 23] that enable strong, hardware-enforced trusted computing guarantees in an untrusted execution environment. SGX extends the instruction set and memory access logic of the Intel architecture to allow the execution of security-sensitive application logic in protected enclaves, in isolation from the remainder of the system, including privileged OS or hypervisor.

Memory Protection. An SGX-enabled processor sets aside a contiguous physical memory area, referred to as Processor Reserved Memory (PRM). A hardware-level memory encryption engine guarantees the confidentiality, integrity, and freshness of PRM memory while it resides outside of the processor package. The PRM region is subdivided into two data structures: the Enclave Page Cache (EPC) and the Enclave Page Cache Map (EPCM). Protected 4 KB enclave code and data pages are allocated from the EPC, while every EPC page has a shadow entry in the EPCM to track ownership, type, address translation, and permission meta data. EPCM memory is exclusively managed by the processor, and is never directly accessible to software.
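The EPCM itself is never architecturally visible to software, so the C struct below is only an illustrative model of the per-page bookkeeping described above; the field names and the simplified page-type enum are assumptions made for readability, not a hardware layout.

    #include <stdbool.h>
    #include <stdint.h>

    /* Simplified set of page types; the SGX specification defines others too. */
    enum epcm_page_type { PT_SECS, PT_TCS, PT_REG, PT_VA };

    /* Illustrative model of one EPCM shadow entry (one per 4 KB EPC page). */
    struct epcm_entry {
        bool                valid;          /* page is currently assigned to an enclave     */
        uint64_t            enclave_id;     /* ownership: which enclave the page belongs to */
        enum epcm_page_type type;           /* page type                                    */
        uint64_t            linear_address; /* virtual address the page must be mapped at   */
        bool                r, w, x;        /* read/write/execute permissions               */
    };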
SGX enclaves are instantiated as part of the virtual address space of a conventional OS process. Since PRM is a limited system resource, untrusted system software is in charge of assigning protected memory pages to enclaves, and is allowed to oversubscribe the EPC. At enclave creation time, the OS can instruct the processor to initialize newly allocated EPC pages with unprotected code or data. After finalizing the enclave, and before it can be entered, the hardware calculates a secure hash of the enclave's initial state. This allows the integrity of the untrusted loading process to be attested to a remote stakeholder [1]. SGX furthermore offers dedicated ring-zero instructions to securely evict and reload enclave pages between EPC memory and untrusted storage.

An important design decision of SGX is that it leaves page tables under explicit control of the untrusted operating system. Instead, SGX implements an additional, independent layer of access control on top of the legacy page table-based memory protection mechanism. Figure 1 summarizes the additional checks performed when accessing enclave memory. First, in order to translate the provided virtual address to a physical one, the processor traverses the OS-managed page tables, as well as the extended page tables set up by the hypervisor, if any. As usual, a page fault is signaled to the untrusted OS in case of a permission mismatch or missing page table entry during address translation. Any attempt to access the PRM region in non-enclave mode results in abort page semantics, i.e., read 0xFF and ignore writes. Likewise, in enclave mode, the processor is allowed to reference all memory that falls outside of the executing enclave's virtual address range, but abort page semantics apply when such an address resolves into PRM memory. Furthermore, a page fault is signaled to the untrusted OS for EPC accesses that either do not belong to the currently executing enclave, are accessed through an unexpected virtual address, or do not comply with the read/write/execute permissions imposed by the EPCM.

To speed up subsequent memory accesses, SGX em- [...]

[...] flow to the instruction pointer specified in the SSA frame. SGX allows an enclave to register trusted in-enclave exception handlers with a cooperative OS.
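The decision flow of Figure 1, as described above, can be summarized by the C-style pseudocode below; the predicates are abstract stand-ins for hardware state and EPCM lookups, so this is a reading aid rather than an exact architectural specification.

    #include <stdbool.h>
    #include <stdint.h>

    /* Abstract predicates standing in for hardware state and EPCM lookups. */
    extern bool page_walk_ok(uint64_t vadrs);          /* OS/hypervisor page tables */
    extern bool in_enclave_mode(void);
    extern bool vadrs_in_current_enclave(uint64_t vadrs);
    extern bool padrs_in_prm(uint64_t padrs);
    extern bool padrs_in_epc(uint64_t padrs);
    extern bool epcm_checks_ok(uint64_t vadrs, uint64_t padrs);

    typedef enum { ALLOW, ABORT_PAGE, PAGE_FAULT } access_result;

    access_result sgx_access_check(uint64_t vadrs, uint64_t padrs)
    {
        if (!page_walk_ok(vadrs))              /* missing or mismatching entry:      */
            return PAGE_FAULT;                 /* page fault signaled to the OS      */

        if (!in_enclave_mode())                /* non-enclave code touching PRM:     */
            return padrs_in_prm(padrs) ? ABORT_PAGE : ALLOW;  /* reads 0xFF, writes ignored */

        if (!vadrs_in_current_enclave(vadrs))  /* enclave code referencing outside   */
            return padrs_in_prm(padrs) ? ABORT_PAGE : ALLOW;  /* memory, but never other PRM */

        if (!padrs_in_epc(padrs))              /* enclave addresses must map into EPC */
            return PAGE_FAULT;

        /* EPCM: page belongs to this enclave, is accessed through the expected
         * virtual address, and the access complies with its R/W/X permissions. */
        return epcm_checks_ok(vadrs, padrs) ? ALLOW : PAGE_FAULT;
    }

Note that the very first step is the regular, OS-controlled page table walk; its side-effects, such as accessed/dirty updates and the caching of page table memory, are exactly what the attacks in this paper observe.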