An Evolutionary Study of Linux Memory Management for Fun and Profit — Jian Huang, Moinuddin K. Qureshi
Recommended publications
-
Virtual Memory
Chapter 4 Virtual Memory

Linux processes execute in a virtual environment that makes it appear as if each process had the entire address space of the CPU available to itself. This virtual address space extends from address 0 all the way to the maximum address. On a 32-bit platform, such as IA-32, the maximum address is 2^32 − 1, or 0xffffffff. On a 64-bit platform, such as IA-64, this is 2^64 − 1, or 0xffffffffffffffff.

While it is obviously convenient for a process to be able to access such a huge address space, there are really three distinct, but equally important, reasons for using virtual memory.

1. Resource virtualization. On a system with virtual memory, a process does not have to concern itself with the details of how much physical memory is available or which physical memory locations are already in use by some other process. In other words, virtual memory takes a limited physical resource (physical memory) and turns it into an infinite, or at least an abundant, resource (virtual memory).

2. Information isolation. Because each process runs in its own address space, it is not possible for one process to read data that belongs to another process. This improves security because it reduces the risk of one process being able to spy on another process and, e.g., steal a password.

3. Fault isolation. Processes with their own virtual address spaces cannot overwrite each other's memory. This greatly reduces the risk of a failure in one process triggering a failure in another process. That is, when a process crashes, the problem is generally limited to that process alone and does not cause the entire machine to go down.
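The chapter's arithmetic is easy to check empirically. A minimal standalone C++ sketch (mine, not the chapter's) that prints the pointer width and the nominal maximum virtual address of whatever platform it is compiled for:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main() {
      // Width of a pointer in bits: 32 on IA-32, 64 on IA-64/x86-64.
      constexpr unsigned bits = sizeof(void*) * 8;
      // Maximum virtual address: 2^bits - 1. uintptr_t is an integer type
      // wide enough to hold a pointer, so its maximum value is that address.
      // (Real hardware may implement fewer address bits than the nominal span.)
      constexpr std::uintptr_t max_addr =
          std::numeric_limits<std::uintptr_t>::max();
      std::cout << bits << "-bit platform, maximum address 0x"
                << std::hex << max_addr << "\n";
      return 0;
    }

-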
The Linux Kernel Module Programming Guide
The Linux Kernel Module Programming Guide
Peter Jay Salzman, Michael Burian, Ori Pomerantz
Copyright © 2001 Peter Jay Salzman. 2007-05-18, ver 2.6.4

The Linux Kernel Module Programming Guide is a free book; you may reproduce and/or modify it under the terms of the Open Software License, version 1.1. You can obtain a copy of this license at http://opensource.org/licenses/osl.php.

This book is distributed in the hope it will be useful, but without any warranty, without even the implied warranty of merchantability or fitness for a particular purpose. The author encourages wide distribution of this book for personal or commercial use, provided the above copyright notice remains intact and the method adheres to the provisions of the Open Software License. In summary, you may copy and distribute this book free of charge or for a profit. No explicit permission is required from the author for reproduction of this book in any medium, physical or electronic.

Derivative works and translations of this document must be placed under the Open Software License, and the original copyright notice must remain intact. If you have contributed new material to this book, you must make the material and source code available for your revisions. Please make revisions and updates available directly to the document maintainer, Peter Jay Salzman <[email protected]>. This will allow for the merging of updates and provide consistent revisions to the Linux community.

If you publish or distribute this book commercially, donations, royalties, and/or printed copies are greatly appreciated by the author and the Linux Documentation Project (LDP).
-
MEGA: Overcoming Traditional Problems with OS Huge Page Management
MEGA: Overcoming Traditional Problems with OS Huge Page Management
Theodore Michailidis, Alex Delis, and Mema Roussopoulos, University of Athens, Athens, Greece

ABSTRACT
Modern computer systems now feature memory banks whose aggregate size ranges from tens to hundreds of GBs. In this context, contemporary workloads can and do often consume vast amounts of main memory. This upsurge in memory consumption routinely results in increased virtual-to-physical address translations, and consequently and more importantly, more translation misses. Both of these aspects collectively do hamper the performance of workload execution. A solution aimed at dramatically reducing the number of address translation misses has been to provide hardware support for pages with bigger sizes, termed huge pages. In this paper, we empirically demonstrate the benefits and drawbacks of using such huge pages. In particular, we show that it is essential for modern OS to refine their software mechanisms to more effectively manage huge pages.

KEYWORDS
Huge pages, Address Translation, Memory Compaction

ACM Reference Format:
Theodore Michailidis, Alex Delis, and Mema Roussopoulos. 2019. MEGA: Overcoming Traditional Problems with OS Huge Page Management. In The 12th ACM International Systems and Storage Conference (SYSTOR '19), June 3–5, 2019, Haifa, Israel. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3319647.3325839

1 INTRODUCTION
Computer memory capacities have been increasing significantly over the past decades, leading to the development of memory hungry workloads that consume vast amounts of main memory. This upsurge in memory consumption routinely results in increased virtual-to-physical address translations and, consequently, more translation misses.
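On Linux, an application can request huge pages explicitly. A minimal sketch, assuming an x86-64 system with hugetlbfs pages reserved; the 2 MiB constant and the fallback to transparent huge pages are illustrative choices, not MEGA's mechanism:

    #include <sys/mman.h>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    int main() {
      const std::size_t len = 2 * 1024 * 1024;  // one 2 MiB huge page (x86-64 default)

      // Explicit huge page via hugetlbfs; fails if none are reserved
      // (e.g. via /proc/sys/vm/nr_hugepages).
      void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
      if (p == MAP_FAILED) {
        std::perror("mmap(MAP_HUGETLB)");
        // Fallback: a normal mapping with a hint to back it with
        // transparent huge pages.
        p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
        madvise(p, len, MADV_HUGEPAGE);
      }
      std::memset(p, 0, len);  // touch the memory so it is actually faulted in
      munmap(p, len);
      return 0;
    }

-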
The Case for Compressed Caching in Virtual Memory Systems
THE ADVANCED COMPUTING SYSTEMS ASSOCIATION

The following paper was originally published in the Proceedings of the USENIX Annual Technical Conference, Monterey, California, USA, June 6-11, 1999.

The Case for Compressed Caching in Virtual Memory Systems
Paul R. Wilson, Scott F. Kaplan, and Yannis Smaragdakis, University of Texas at Austin

© 1999 by The USENIX Association. All Rights Reserved. Rights to individual papers remain with the author or the author's employer. Permission is granted for noncommercial reproduction of the work for educational or research purposes. This copyright notice must be included in the reproduced paper. USENIX acknowledges all trademarks herein. For more information about the USENIX Association: Phone: 1 510 528 8649, FAX: 1 510 548 5738, Email: [email protected], WWW: http://www.usenix.org

The Case for Compressed Caching in Virtual Memory Systems
Paul R. Wilson, Scott F. Kaplan, and Yannis Smaragdakis
Dept. of Computer Sciences, University of Texas at Austin, Austin, Texas 78751-1182
{wilson|sfkaplan|smaragd}@cs.utexas.edu
http://www.cs.utexas.edu/users/oops/

Abstract
Compressed caching uses part of the available RAM to hold pages in compressed form, effectively adding a new level to the virtual memory hierarchy. This level attempts to bridge the huge performance gap between normal (uncompressed) RAM and disk.

In [Wil90, Wil91b] we proposed compressed caching for virtual memory—storing pages in compressed form in a main memory compression cache to reduce disk paging. Appel also promoted this idea [AL91], and it was evaluated empirically by Douglis [Dou93] and by Russinovich and Cogswell [RC96]. Unfortunately Douglis's...
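To make the mechanism concrete, here is a deliberately toy C++ sketch of what the paper describes: a fixed-budget in-RAM cache that holds evicted pages compressed and is consulted before going to disk. The run-length coder and all names (CompressedCache, PAGE_SIZE, etc.) are invented for illustration; they are not the authors' design:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    using Page = std::vector<std::uint8_t>;

    // Trivial run-length encoding as a stand-in for a real compressor
    // (the paper studies far better algorithms).
    static Page rle_compress(const Page& in) {
      Page out;
      for (std::size_t i = 0; i < in.size();) {
        std::uint8_t byte = in[i];
        std::size_t run = 1;
        while (i + run < in.size() && in[i + run] == byte && run < 255) ++run;
        out.push_back(static_cast<std::uint8_t>(run));
        out.push_back(byte);
        i += run;
      }
      return out;
    }

    static Page rle_decompress(const Page& in) {
      Page out;
      for (std::size_t i = 0; i + 1 < in.size(); i += 2)
        out.insert(out.end(), in[i], in[i + 1]);  // 'count' copies of 'byte'
      return out;
    }

    class CompressedCache {
     public:
      explicit CompressedCache(std::size_t budget) : budget_(budget) {}

      // Called on eviction from uncompressed RAM: keep the page only if its
      // compressed form still fits in the cache's memory budget.
      bool put(std::uint64_t page_no, const Page& page) {
        Page c = rle_compress(page);
        if (used_ + c.size() > budget_) return false;  // caller pages out to disk
        used_ += c.size();
        store_[page_no] = std::move(c);
        return true;
      }

      // Called on a page fault, before scheduling a disk read.
      bool get(std::uint64_t page_no, Page& page_out) {
        auto it = store_.find(page_no);
        if (it == store_.end()) return false;   // miss: must go to disk
        page_out = rle_decompress(it->second);  // much cheaper than disk I/O
        used_ -= it->second.size();
        store_.erase(it);
        return true;
      }

     private:
      std::size_t budget_;
      std::size_t used_ = 0;
      std::unordered_map<std::uint64_t, Page> store_;
    };

-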
Linux Physical Memory Analysis
Linux Physical Memory Analysis
Paul Movall, Ward Nelson, Shaun Wetzstein
International Business Machines Corporation, 3605 Highway 52N, Rochester, MN

Abstract
We present a tool suite for analysis of physical memory usage within the Linux kernel environment. This tool suite can be used to collect and analyze how the physical memory within a Linux environment is being used.

Embedded subsystems are common in today's computer systems. These embedded subsystems range from the very simple to the very complex. In such embedded systems, memory is scarce and swap is non-existent. When adapting Linux for use in this environment, we needed to keep a close eye on physical memory usage.

A significant amount of this information can also be found in the /proc filesystem provided by the kernel. These /proc entries can be used to obtain many statistics, including several related to memory usage. For example, /proc/<pid>/maps can be used to display a process' virtual memory map. Likewise, the contents of /proc/<pid>/status can be used to retrieve statistics about virtual memory usage as well as the Resident Set Size (RSS).

In most cases, this process level detail is sufficient to analyze memory usage. In systems that do not have backing store, more details are often needed to analyze memory usage. For example, it's useful to know which pages in a process' address map are resident, not just how many. This information can be used to get some clues on the usage of a shared library.
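As a small illustration of the /proc interfaces mentioned above, this sketch prints its own resident set size by scanning /proc/self/status (field names as documented in proc(5)):

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
      // /proc/self/status is the calling process' own /proc/<pid>/status.
      std::ifstream status("/proc/self/status");
      std::string line;
      while (std::getline(status, line)) {
        // VmRSS is the resident set size; VmSize the total virtual size.
        if (line.rfind("VmRSS:", 0) == 0 || line.rfind("VmSize:", 0) == 0)
          std::cout << line << "\n";  // e.g. "VmRSS:      1234 kB"
      }
      return 0;
    }

-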
Memory Protection at Option
Memory Protection at Option Application-Tailored Memory Safety in Safety-Critical Embedded Systems – Speicherschutz nach Wahl Auf die Anwendung zugeschnittene Speichersicherheit in sicherheitskritischen eingebetteten Systemen Der Technischen Fakultät der Universität Erlangen-Nürnberg zur Erlangung des Grades Doktor-Ingenieur vorgelegt von Michael Stilkerich Erlangen — 2012 Als Dissertation genehmigt von der Technischen Fakultät Universität Erlangen-Nürnberg Tag der Einreichung: 09.07.2012 Tag der Promotion: 30.11.2012 Dekan: Prof. Dr.-Ing. Marion Merklein Berichterstatter: Prof. Dr.-Ing. Wolfgang Schröder-Preikschat Prof. Dr. Michael Philippsen Abstract With the increasing capabilities and resources available on microcontrollers, there is a trend in the embedded industry to integrate multiple software functions on a single system to save cost, size, weight, and power. The integration raises new requirements, thereunder the need for spatial isolation, which is commonly established by using a memory protection unit (MPU) that can constrain access to the physical address space to a fixed set of address regions. MPU-based protection is limited in terms of available hardware, flexibility, granularity and ease of use. Software-based memory protection can provide an alternative or complement MPU-based protection, but has found little attention in the embedded domain. In this thesis, I evaluate qualitative and quantitative advantages and limitations of MPU-based memory protection and software-based protection based on a multi-JVM. I developed a framework composed of the AUTOSAR OS-like operating system CiAO and KESO, a Java implementation for deeply embedded systems. The framework allows choosing from no memory protection, MPU-based protection, software-based protection, and a combination of the two. -
Memory-Safety Challenge Considered Solved? An In-Depth Study with All Rust CVEs
Memory-Safety Challenge Considered Solved? An In-Depth Study with All Rust CVEs

HUI XU, School of Computer Science, Fudan University
ZHUANGBIN CHEN, Dept. of CSE, The Chinese University of Hong Kong
MINGSHEN SUN, Baidu Security
YANGFAN ZHOU, School of Computer Science, Fudan University
MICHAEL R. LYU, Dept. of CSE, The Chinese University of Hong Kong

Rust is an emerging programming language that aims at preventing memory-safety bugs without sacrificing much efficiency. The claimed property is very attractive to developers, and many projects have started using the language. However, can Rust achieve the memory-safety promise? This paper studies the question by surveying 186 real-world bug reports collected from several origins, which contain all existing Rust CVEs (common vulnerabilities and exposures) of memory-safety issues by 2020-12-31. We manually analyze each bug and extract their culprit patterns. Our analysis result shows that Rust can keep its promise that all memory-safety bugs require unsafe code, and many memory-safety bugs in our dataset are mild soundness issues that only leave a possibility to write memory-safety bugs without unsafe code. Furthermore, we summarize three typical categories of memory-safety bugs, including automatic memory reclaim, unsound function, and unsound generic or trait. While automatic memory reclaim bugs are related to the side effect of Rust's newly-adopted ownership-based resource management scheme, unsound function reveals the essential challenge of Rust development in avoiding unsound code, and unsound generic or trait intensifies the risk of introducing unsoundness. Based on these findings, we propose two promising directions towards improving the security of Rust development, including several best practices of using specific APIs and methods to detect particular bugs involving unsafe code.
-
Project Snowflake: Non-Blocking Safe Manual Memory Management in .NET
Project Snowflake: Non-blocking Safe Manual Memory Management in .NET

Matthew Parkinson, Dimitrios Vytiniotis, Kapil Vaswani, Manuel Costa, Pantazis Deligiannis (Microsoft Research); Dylan McDermott (University of Cambridge); Aaron Blankstein, Jonathan Balkind (Princeton University)

July 26, 2017

Abstract
Garbage collection greatly improves programmer productivity and ensures memory safety. Manual memory management on the other hand often delivers better performance but is typically unsafe and can lead to system crashes or security vulnerabilities. We propose integrating safe manual memory management with garbage collection in the .NET runtime to get the best of both worlds. In our design, programmers can choose between allocating objects in the garbage collected heap or the manual heap. All existing applications run unmodified, and without any performance degradation, using the garbage collected heap. Our programming model for manual memory management is flexible: although objects in the manual heap can have a single owning pointer, we allow deallocation at any program point and concurrent sharing of these objects amongst all the threads in the program. Experimental results from our .NET CoreCLR implementation on real-world applications show substantial performance gains, especially in multithreaded scenarios: up to 3x savings in peak working sets and 2x improvements in runtime.

1 Introduction
The importance of garbage collection (GC) in modern software cannot be overstated. GC greatly improves programmer productivity because it frees programmers from the burden of thinking about object lifetimes and freeing memory. Even more importantly, GC prevents temporal memory safety errors, i.e., uses of memory after it has been freed, which often lead to security breaches. Modern generational collectors, such as the .NET GC, deliver great throughput through a combination of fast thread-local bump allocation and cheap collection of young objects [63, 18, 61].
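Snowflake's model is specific to the .NET runtime; as a loose C++ analogy only, a single owning pointer with deallocation at an arbitrary program point resembles std::unique_ptr, sketched below. The crucial difference is that Snowflake makes stale accesses from other threads fail safely, which unique_ptr does not:

    #include <iostream>
    #include <memory>

    struct Payload {
      int value = 42;
    };

    int main() {
      // Single owning pointer: the unique_ptr alone controls the lifetime.
      auto owner = std::make_unique<Payload>();
      std::cout << owner->value << "\n";

      // Deallocation at an arbitrary program point, not at a scope boundary.
      owner.reset();  // frees the Payload now

      // Any raw pointer saved earlier would now dangle; Snowflake's point is
      // to detect such accesses and fail safely instead of exhibiting
      // undefined behavior.
      return 0;
    }

-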
Memory Safety Without Garbage Collection for Embedded Applications
Memory Safety Without Garbage Collection for Embedded Applications
DINAKAR DHURJATI, SUMANT KOWSHIK, VIKRAM ADVE, and CHRIS LATTNER
University of Illinois at Urbana-Champaign

Traditional approaches to enforcing memory safety of programs rely heavily on run-time checks of memory accesses and on garbage collection, both of which are unattractive for embedded applications. The goal of our work is to develop advanced compiler techniques for enforcing memory safety with minimal run-time overheads. In this paper, we describe a set of compiler techniques that, together with minor semantic restrictions on C programs and no new syntax, ensure memory safety and provide most of the error-detection capabilities of type-safe languages, without using garbage collection, and with no run-time software checks (on systems with standard hardware support for memory management). The language permits arbitrary pointer-based data structures, explicit deallocation of dynamically allocated memory, and restricted array operations. One of the key results of this paper is a compiler technique that ensures that dereferencing dangling pointers to freed memory does not violate memory safety, without annotations, run-time checks, or garbage collection, and works for arbitrary type-safe C programs. Furthermore, we present a new interprocedural analysis for static array bounds checking under certain assumptions. For a diverse set of embedded C programs, we show that we are able to ensure memory safety of pointer and dynamic memory usage in all these programs with no run-time software checks (on systems with standard hardware memory protection), requiring only minor restructuring to conform to simple type restrictions. Static array bounds checking fails for roughly half the programs we study due to complex array references, and these are the only cases where explicit run-time software checks would be needed under our language and system assumptions.
-
Extracting Compressed Pages from the Windows 10 Virtual Store
Extracting Compressed Pages from the Windows 10 Virtual Store

Abstract
Windows 8.1 introduced memory compression in August 2013. By the end of 2013, Linux 3.11 and OS X Mavericks leveraged compressed memory as well. Disk I/O continues to be orders of magnitude slower than RAM, whereas reading and decompressing data in RAM is fast and highly parallelizable across the system's CPU cores, yielding a significant performance increase. However, this came at the cost of increased complexity of process memory reconstruction and thus reduced the power of popular tools such as Volatility, Rekall, and Redline. In this document we introduce a method to retrieve compressed pages from the Windows 10 Memory Manager Virtual Store, thus providing forensics and auditing tools with a way to retrieve, examine, and reconstruct memory artifacts regardless of their storage location.

Introduction
Windows 10 moves pages between physical memory and the hard disk or the Store Manager's virtual store when memory is constrained. Universal Windows Platform (UWP) applications leverage the Virtual Store any time they are suspended (as is the case when minimized). When a given page is no longer in the process's working set, the corresponding Page Table Entry (PTE) is used by the OS to specify the storage location as well as additional data that allows it to start the retrieval process. In the case of a page file, the retrieval is straightforward because both the page file index and the location of the page within the page file can be directly retrieved.
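The "straightforward" page-file case reduces to arithmetic: a software PTE names the page file and the page-sized slot within it. A hypothetical sketch of that lookup; the struct fields and function are invented for illustration and do not reflect the real, version-dependent Windows PTE layout:

    #include <cstdint>

    constexpr std::uint64_t PAGE_SIZE = 4096;  // standard x86/x64 small page

    // Hypothetical decoded form of a software PTE for a paged-out page.
    struct SoftwarePte {
      std::uint32_t page_file_index;   // which page file holds the page
      std::uint64_t page_file_offset;  // slot number within that page file
    };

    // Byte offset at which the page's contents start inside the page file.
    std::uint64_t PageFileByteOffset(const SoftwarePte& pte) {
      return pte.page_file_offset * PAGE_SIZE;
    }

-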
Memory Leak or Dangling Pointer
Modern C++ for Computer Vision and Image Processing
Igor Bogoslavskyi

Outline:
- Using pointers
- Pointers are polymorphic
- Pointer "this"
- Using const with pointers
- Stack and Heap
- Memory leaks and dangling pointers
  - Memory leak
  - Dangling pointer
- RAII

Using pointers in real world

Using pointers for classes
Pointers can point to objects of custom classes:

    std::vector<int> vector_int;
    std::vector<int>* vec_ptr = &vector_int;
    MyClass obj;
    MyClass* obj_ptr = &obj;

Call object functions from pointer with ->:

    MyClass obj;
    obj.MyFunc();
    MyClass* obj_ptr = &obj;
    obj_ptr->MyFunc();

obj->Func() ↔ (*obj).Func()

Pointers are polymorphic
Pointers are just like references, but have additional useful properties:
- Can be reassigned
- Can point to "nothing" (nullptr)
- Can be stored in a vector or an array

Use pointers for polymorphism:

    Derived derived;
    Base* ptr = &derived;

Example: to implement the strategy pattern, store a pointer to the strategy interface, initialize it with nullptr, and check that it is set before calling its methods.

    #include <iostream>
    #include <vector>
    using std::cout;
    struct AbstractShape {
      virtual void Print() const = 0;
    };
    struct Square : public AbstractShape {
      void Print() const override { cout << "Square\n"; }
    };
    struct Triangle : public AbstractShape {
      void Print() const override { cout << "Triangle\n"; }
    };
    int main() {
      std::vector<AbstractShape*> shapes;
      Square square;
      Triangle triangle;
      shapes.push_back(&square);
      shapes.push_back(&triangle);
      for (const auto* shape : shapes) { shape->Print(); }
      return 0;
    }

this pointer
Every object of a class or a struct holds a pointer to itself. This pointer is called this. It allows the objects to:
- Return a reference to themselves: return *this;
- Create copies of themselves within a function
- Explicitly show that a member belongs to the current object: this->x();

Using const with pointers
Pointers can point to a const variable:

    MyType var;
    // Cannot change value, can reassign pointer.
    const MyType* ptr = &var;
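The excerpt ends before the outline's remaining topics (memory leaks, dangling pointers, RAII). In the spirit of the lecture's examples, a short sketch of both failure modes and the RAII remedy (not taken from the slides themselves):

    #include <memory>

    int* MakeLeak() {
      return new int(10);  // memory leak if the caller never calls delete
    }

    int* MakeDangling() {
      int local = 10;
      return &local;  // dangling pointer: 'local' dies when the function returns
    }

    int main() {
      int* leaked = MakeLeak();       // never freed below: a leak
      int* dangling = MakeDangling(); // must never be dereferenced

      // RAII: tie the allocation's lifetime to a stack object.
      // unique_ptr frees the int automatically when it goes out of scope.
      auto safe = std::make_unique<int>(10);

      (void)leaked;
      (void)dangling;
      return 0;  // 'safe' is released here; 'leaked' never is
    }

-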
Asynchronous Page Faults Aix Did It Red Hat Author Gleb Natapov August 10, 2010
Asynchronous page faults Aix did it Red Hat Author Gleb Natapov August 10, 2010 Abstract Host memory overcommit may cause guest memory to be swapped. When guest vcpu access memory swapped out by a host its execution is suspended until memory is swapped back. Asynchronous page fault is a way to try and use guest vcpu more efficiently by allowing it to execute other tasks while page is brought back into memory. Part I How KVM Handles Guest Memory and What Inefficiency it Has With Regards to Host Swapping Mapping guest memory into host memory But we do it on demand Page fault happens on first guest access What happens on a page fault? 1 VMEXIT 2 kvm mmu page fault() 3 gfn to pfn() 4 get user pages fast() no previously mapped page and no swap entry found empty page is allocated 5 page is added into shadow/nested page table What happens on a page fault? 1 VMEXIT 2 kvm mmu page fault() 3 gfn to pfn() 4 get user pages fast() no previously mapped page and no swap entry found empty page is allocated 5 page is added into shadow/nested page table What happens on a page fault? 1 VMEXIT 2 kvm mmu page fault() 3 gfn to pfn() 4 get user pages fast() no previously mapped page and no swap entry found empty page is allocated 5 page is added into shadow/nested page table What happens on a page fault? 1 VMEXIT 2 kvm mmu page fault() 3 gfn to pfn() 4 get user pages fast() no previously mapped page and no swap entry found empty page is allocated 5 page is added into shadow/nested page table What happens on a page fault? 1 VMEXIT 2 kvm mmu page fault() 3 gfn to pfn()