An Efficient End-To-End Solution Against Heap Buffer Overflows
Total Pages: 16. File Type: PDF. Size: 1020 KB.
Recommended publications
CSE 220: Systems Programming
Memory Allocation. Ethan Blanton, Department of Computer Science and Engineering, University at Buffalo. Outline: Introduction; The Heap; Void Pointers; The Standard Allocator; Allocation Errors; Summary.

Allocating memory: we have seen how to use pointers to address an existing variable and an array element from a string or array constant. This lecture discusses requesting memory from the system.

Memory lifetime: all variables we have seen so far have a well-defined lifetime; they persist either for the entire life of the program or for the duration of a single function call. Sometimes we need programmer-controlled lifetime.

Examples:

    int global;   /* Lifetime of program */
    void foo() {
        int x;    /* Lifetime of foo()   */
    }

Here, global is statically allocated: it is allocated by the compiler, created when the program starts, and disappears when the program exits. Whereas x is automatically allocated: it is allocated by the compiler, created when foo() is called, and disappears when foo() returns.

The heap represents memory that is allocated and released at run time, managed explicitly by the programmer, and only obtainable by address. Heap memory is just a range of bytes to C.
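A short sketch (not part of the lecture excerpt) contrasting automatic lifetime with the programmer-controlled heap lifetime introduced above; the node struct and values are purely illustrative.

    #include <cstdio>
    #include <cstdlib>

    struct node { int value; node *next; };   // illustrative type, not from the lecture

    void demo(void) {
        int local = 42;        // automatic: disappears when demo() returns
        node *n = static_cast<node *>(std::malloc(sizeof(node)));  // heap: lives until freed
        if (n == nullptr) return;              // malloc can fail
        n->value = local;
        n->next = nullptr;
        std::printf("%d\n", n->value);
        std::free(n);          // the programmer ends the lifetime explicitly
    }

    int main() { demo(); return 0; }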
Memory Allocation (University of Washington)
Roadmap: memory & data; integers & floats; machine code & C; x86 assembly; procedures & stacks; arrays & structs; memory & caches; processes; virtual memory; memory allocation; Java vs. C. The roadmap slide contrasts the levels of a computer system, from C (car *c = malloc(sizeof(car)); c->miles = 100; c->gals = 17; float mpg = get_mpg(c); free(c);) and the equivalent Java (Car c = new Car(); c.setMiles(100); c.setGals(17); float mpg = c.getMPG();), down through assembly (get_mpg: pushq %rbp; movq %rsp, %rbp; ...; popq %rbp; ret), machine code, and the OS.

Memory allocation topics: dynamic memory allocation (the size, number, and lifetime of data structures may only be known at run time, so space must be allocated on the heap and unused memory must be de-allocated, or freed, so it can be re-allocated); explicit allocation implementation (implicit free lists, explicit free lists, which are the subject of the next programming assignment, and segregated free lists); implicit deallocation (garbage collection); and common memory-related bugs in C programs.

Programmers use dynamic memory allocators (such as malloc) to acquire virtual memory at run time, for data structures whose size or lifetime is known only at runtime. Dynamic memory allocators manage an area of a process's virtual memory known as the heap, which sits above the uninitialized data (.bss), initialized data (.data), and program text (.text) segments and below the user stack; the top of the heap is marked by the brk pointer.

The allocator organizes the heap as a collection of variable-sized blocks, which are either allocated or free. The allocator requests pages in the heap region; the virtual memory hardware and OS kernel allocate these pages to the process.
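A compilable version of the C fragment on the roadmap slide; the car struct layout and the get_mpg helper are assumptions filled in for illustration.

    #include <cstdlib>

    struct car { int miles; int gals; };   // layout assumed; the slide only shows the calls

    static float get_mpg(const car *c) {   // hypothetical helper assumed by the fragment
        return static_cast<float>(c->miles) / static_cast<float>(c->gals);
    }

    int main() {
        car *c = static_cast<car *>(std::malloc(sizeof(car)));   // explicit allocation on the heap
        if (c == nullptr) return 1;
        c->miles = 100;
        c->gals = 17;
        float mpg = get_mpg(c);
        std::free(c);                                             // explicit de-allocation
        return mpg > 0.0f ? 0 : 1;
    }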
Unleashing the Power of Allocator-Aware Software Infrastructure
P2126R0, 2020-03-02. Pablo Halpern (on behalf of Bloomberg): [email protected]; John Lakos: [email protected].

NOTE: This white paper (i.e., this is not a proposal) is intended to motivate continued investment in developing and maturing better memory allocators in the C++ Standard, as well as to counter misinformation about allocators, their costs and benefits, and whether they should have a continuing role in the C++ library and language.

Abstract: Local (arena) memory allocators have been demonstrated to be effective at improving runtime performance, both empirically in repeated controlled experiments and anecdotally in a variety of real-world applications. The initial development and subsequent maintenance effort of implementing bespoke data structures using custom memory allocation, however, are typically substantial and often untenable, especially in the limited timeframes that urgent business needs impose. To address such recurring performance needs effectively across the enterprise, Bloomberg has adopted a consistent, ubiquitous allocator-aware software infrastructure (AASI) based on the PMR-style plug-in memory allocator protocol pioneered at Bloomberg and adopted into the C++17 Standard Library. In this paper, we highlight the essential mechanics and subtle nuances of programming on top of Bloomberg's AASI platform. In addition to delineating how to obtain performance gains safely at minimal development cost, we explore many of the inherent collateral benefits (such as object placement, metrics gathering, testing, debugging, and so on) that an AASI naturally affords. After reading this paper and surveying the available concrete allocators (provided in Appendix A), a competent C++ developer should be adequately equipped to extract substantial performance and other collateral benefits immediately using our AASI.
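The excerpt argues for allocator-aware infrastructure built on the PMR protocol adopted into C++17. As a minimal sketch of that protocol in action (our example, not code from the paper), a local arena resource can back standard pmr containers:

    #include <cstddef>
    #include <memory_resource>
    #include <string>
    #include <vector>

    int main() {
        // A local (arena) allocator: allocations are carved out of this buffer and
        // released all at once when the resource goes out of scope.
        std::byte buffer[4096];
        std::pmr::monotonic_buffer_resource arena(buffer, sizeof(buffer));

        // Allocator-aware containers pick up the arena through the PMR protocol.
        std::pmr::vector<std::pmr::string> names(&arena);
        names.emplace_back("alpha");
        names.emplace_back("beta");
        return static_cast<int>(names.size()) - 2;   // 0 on success
    }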
P1083R2 | Move resource_adaptor from Library TS to the C++ WP
Pablo Halpern, [email protected]. 2018-11-13. Target audience: LWG.

1 Abstract. When the polymorphic allocator infrastructure was moved from the Library Fundamentals TS to the C++17 working draft, pmr::resource_adaptor was left behind. The decision not to move pmr::resource_adaptor was deliberately conservative, but the absence of resource_adaptor in the standard is a hole that must be plugged for a smooth transition to the ubiquitous use of polymorphic_allocator, as proposed in P0339 and P0987. This paper proposes that pmr::resource_adaptor be moved from the LFTS and added to the C++20 working draft.

2 History.
2.1 Changes from R1 to R2 (in San Diego):
• Paper was forwarded from LEWG to LWG on Tuesday, 2018-10-06
• Copied the formal wording from the LFTS directly into this paper
• Minor wording changes as per initial LWG review
• Rebased to the October 2018 draft of the C++ WP
2.2 Changes from R0 to R1 (pre-San Diego):
• Added a note for LWG to consider clarifying the alignment requirements for resource_adaptor<A>::do_allocate()
• Changed rebind type from char to byte
• Rebased to July 2018 draft of the C++ WP

3 Motivation. It is expected that more and more classes, especially those that would not otherwise be templates, will use pmr::polymorphic_allocator<byte> to allocate memory. In order to pass an allocator to one of these classes, the allocator must either already be a polymorphic allocator, or must be adapted from a non-polymorphic allocator. The process of adaptation is facilitated by pmr::resource_adaptor, which is a simple class template, has been in the LFTS for a long time, and has been fully implemented.
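pmr::resource_adaptor itself is specified in the Library Fundamentals TS; the sketch below only illustrates the underlying idea (wrapping a classic allocator so it can be used as a std::pmr::memory_resource) and deliberately simplifies the alignment handling and equality comparison of the real component.

    #include <cstddef>
    #include <memory>
    #include <memory_resource>
    #include <vector>

    // Simplified sketch, not the TS specification of resource_adaptor.
    template <class Allocator>
    class simple_resource_adaptor : public std::pmr::memory_resource {
        using byte_alloc =
            typename std::allocator_traits<Allocator>::template rebind_alloc<std::byte>;
        byte_alloc alloc_;

        void *do_allocate(std::size_t bytes, std::size_t /*alignment*/) override {
            return std::allocator_traits<byte_alloc>::allocate(alloc_, bytes);
        }
        void do_deallocate(void *p, std::size_t bytes, std::size_t /*alignment*/) override {
            std::allocator_traits<byte_alloc>::deallocate(
                alloc_, static_cast<std::byte *>(p), bytes);
        }
        bool do_is_equal(const std::pmr::memory_resource &other) const noexcept override {
            return this == &other;   // the real adaptor compares the wrapped allocators
        }

    public:
        explicit simple_resource_adaptor(const Allocator &a = Allocator()) : alloc_(a) {}
    };

    int main() {
        simple_resource_adaptor<std::allocator<int>> res;   // adapt a non-polymorphic allocator
        std::pmr::vector<int> v(&res);                      // ...and use it through PMR
        v.push_back(1);
        return v.size() == 1 ? 0 : 1;
    }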
The Slab Allocator: An Object-Caching Kernel Memory Allocator
Jeff Bonwick, Sun Microsystems.

Abstract: This paper presents a comprehensive design overview of the SunOS 5.4 kernel memory allocator. This allocator is based on a set of object-caching primitives that reduce the cost of allocating complex objects by retaining their state between uses. These same primitives prove equally effective for managing stateless memory (e.g. data pages and temporary buffers) because they are space-efficient and fast. The allocator's object caches respond dynamically to global memory pressure, and employ an object-coloring scheme that improves the system's overall cache utilization and bus balance. The allocator also has several statistical and debugging features that can detect a wide range of problems throughout the system.

1. Introduction: The allocation and freeing of objects are among the most common operations in the kernel. [...] generally superior in both space and time. Finally, Section 6 describes the allocator's debugging features, which can detect a wide variety of problems throughout the system.

2. Object Caching: Object caching is a technique for dealing with objects that are frequently allocated and freed. The idea is to preserve the invariant portion of an object's initial state (its constructed state) between uses, so it does not have to be destroyed and recreated every time the object is used. For example, an object containing a mutex only needs to have mutex_init() applied once, the first time the object is allocated. The object can then be freed and reallocated many times without incurring the expense of mutex_destroy() and mutex_init() each time. An object's embedded locks, condition variables, reference counts, lists of other objects, and read-only data all generally qualify [...]
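A user-space sketch of the object-caching idea described above, keeping freed objects in their constructed state; the class and member names are invented for illustration and are not the SunOS kernel interfaces.

    #include <memory>
    #include <mutex>
    #include <vector>

    // Illustrative only: construct/initialize once, then recycle the constructed object.
    struct Connection {
        std::mutex lock;           // expensive-to-set-up "constructed state"
        std::vector<char> buffer;  // retained capacity is also constructed state
        Connection() { buffer.reserve(4096); }
    };

    class ConnectionCache {
        std::vector<std::unique_ptr<Connection>> free_list_;
    public:
        std::unique_ptr<Connection> allocate() {
            if (!free_list_.empty()) {                  // reuse a constructed object
                auto obj = std::move(free_list_.back());
                free_list_.pop_back();
                return obj;
            }
            return std::make_unique<Connection>();      // construct only on a cache miss
        }
        void release(std::unique_ptr<Connection> obj) {
            obj->buffer.clear();                        // reset only the variable state
            free_list_.push_back(std::move(obj));       // constructed state survives
        }
    };

    int main() {
        ConnectionCache cache;
        auto c = cache.allocate();       // constructed on first use
        cache.release(std::move(c));     // returned fully constructed
        auto again = cache.allocate();   // reused without re-running the constructor
        return again != nullptr ? 0 : 1;
    }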
A Buffer Overflow Study: Attacks & Defenses
Pierre-Alain Fayolle, Vincent Glaume. ENSEIRB, Networks and Distributed Systems, 2002.

Contents
I Introduction to Buffer Overflows
1 Generalities
  1.1 Process memory
    1.1.1 Global organization
    1.1.2 Function calls
  1.2 Buffers, and how vulnerable they may be
2 Stack overflows
  2.1 Principle
  2.2 Illustration
    2.2.1 Basic example
    2.2.2 Attack via environment variables
    2.2.3 Attack using gets
3 Heap overflows
  3.1 Terminology
    3.1.1 Unix
    3.1.2 Windows
  3.2 Motivations and Overview
  3.3 Overwriting pointers
    3.3.1 Difficulties
    3.3.2 Interest of the attack
    3.3.3 Practical study
  3.4 Overwriting function pointers
    3.4.1 Pointer to function: short reminder
    3.4.2 Principle
    3.4.3 Example
  3.5 Trespassing the heap with C++
    3.5.1 C++ Background
    3.5.2 Overwriting the VPTR
    3.5.3 Conclusions
  3.6 Exploiting the malloc library
    3.6.1 DLMALLOC: structure
    3.6.2 Corruption of DLMALLOC: principle
II Protection solutions
4 Introduction
5 How does Libsafe work?
  5.1 Presentation
  5.2 Why are the functions of the libC unsafe?
  5.3 What does libsafe provide?
6 The Grsecurity Kernel patch
  6.1 Open Wall: non-executable stack [...]
Fast Efficient Fixed-Size Memory Pool: No Loops and No Overhead
COMPUTATION TOOLS 2012: The Third International Conference on Computational Logics, Algebras, Programming, Tools, and Benchmarking. Ben Kenwright, School of Computer Science, Newcastle University, Newcastle, United Kingdom, [email protected].

Abstract: In this paper, we examine a ready-to-use, robust, and computationally fast fixed-size memory pool manager with no loops and no memory overhead that is highly suited towards time-critical systems such as games. The algorithm achieves this by exploiting the unused memory slots for bookkeeping in combination with a trouble-free indexing scheme. We explain how it works in amalgamation with straightforward step-by-step examples. Furthermore, we compare just how much faster the memory pool manager is when compared with a system allocator (e.g., malloc) over a range of allocations and sizes.

Keywords: memory pools; real-time; memory allocation; memory manager; memory de-allocation; dynamic memory.

[...] it can be used in conjunction with an existing system to provide a hybrid solution with minimum difficulty. On the other hand, multiple instances of numerous fixed-sized pools can be used to produce a flexible general solution to work in place of the current system memory manager. Alternatively, in some time-critical systems such as games, system allocations are reduced to a bare minimum to make the process run as fast as possible. However, for a dynamically changing system, it is necessary to allocate memory for changing resources, e.g., data assets (graphical images, sound files, scripts) which are loaded dynamically at runtime.
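A simplified sketch of the technique the abstract describes: a fixed-size pool whose free list lives inside the unused blocks themselves, so allocation and de-allocation are O(1) with no per-block header. Unlike the paper's allocator, this version initializes the free list eagerly rather than lazily, and the class name is ours.

    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    class FixedPool {
        std::vector<std::uint8_t> storage_;
        std::size_t block_size_;
        std::uint32_t num_blocks_;
        std::uint32_t free_head_;   // index of the first free block; num_blocks_ means "full"

        std::uint8_t *block(std::uint32_t i) { return storage_.data() + i * block_size_; }
        std::uint32_t read_next(std::uint32_t i) {
            std::uint32_t v;
            std::memcpy(&v, block(i), sizeof v);   // a free block's bytes hold the next free index
            return v;
        }
        void write_next(std::uint32_t i, std::uint32_t v) { std::memcpy(block(i), &v, sizeof v); }

    public:
        FixedPool(std::size_t block_size, std::uint32_t num_blocks)
            : storage_(block_size * num_blocks),
              block_size_(block_size), num_blocks_(num_blocks), free_head_(0) {
            assert(block_size >= sizeof(std::uint32_t));
            for (std::uint32_t i = 0; i < num_blocks; ++i)
                write_next(i, i + 1);              // chain every block onto the free list
        }
        void *allocate() {
            if (free_head_ == num_blocks_) return nullptr;   // pool exhausted
            std::uint32_t i = free_head_;
            free_head_ = read_next(i);             // O(1) pop, no search loop
            return block(i);
        }
        void deallocate(void *p) {
            std::uint32_t i = static_cast<std::uint32_t>(
                (static_cast<std::uint8_t *>(p) - storage_.data()) / block_size_);
            write_next(i, free_head_);             // O(1) push back onto the free list
            free_head_ = i;
        }
    };

    int main() {
        FixedPool pool(32, 4);
        void *a = pool.allocate();
        void *b = pool.allocate();
        pool.deallocate(a);
        void *c = pool.allocate();                 // reuses the block just freed
        return (b != nullptr && c == a) ? 0 : 1;
    }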
Chapter 10: Buffer Overflow
- Common attack mechanism: first wide use by the Morris Worm in 1988.
- Prevention techniques are known: NX bit, stack canaries, ASLR.
- Still of major concern: recent examples include Shellshock and Heartbleed.

Buffer Overflow / Buffer Overrun. Definition: a condition at an interface under which more input can be placed into a buffer or data holding area than the capacity allocated, overwriting other information. Attackers exploit such a condition to crash a system or to insert specially crafted code that allows them to gain control of the system.

Buffer Overflow Basics
- A programming error when a process attempts to store data beyond the limits of a fixed-sized buffer.
- Overwrites adjacent memory locations, which may hold other program variables, parameters, or program control flow data; the buffer could be located on the stack, in the heap, or in the data section of the process.
- Consequences: corruption of program data, unexpected transfer of control, memory access violations.

Basic Buffer Overflow Example; Basic Buffer Overflow Stack Values (slide figures not included in the excerpt).

Buffer Overflow Attacks. The attacker needs: to identify a buffer overflow vulnerability in some program that can be triggered using externally sourced data under the attacker's control; and to understand how that buffer is stored in memory and determine the potential for corruption. Identifying vulnerable programs can be done by inspection of program source, tracing the execution of programs as they process oversized input, or using tools such as fuzzing to automatically identify [...]
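A small illustration (not taken from the chapter) of the programming error described above, together with a bounded alternative.

    #include <cstdio>
    #include <cstring>

    // Classic unchecked copy: if `input` holds 16 or more characters, strcpy writes
    // past the end of `buf` and corrupts adjacent stack memory (other variables,
    // saved registers, or the return address).
    void vulnerable(const char *input) {
        char buf[16];
        std::strcpy(buf, input);                    // no bounds check
        std::printf("%s\n", buf);
    }

    // Safer variant: bound the copy and guarantee NUL termination.
    void safer(const char *input) {
        char buf[16];
        std::snprintf(buf, sizeof buf, "%s", input);
        std::printf("%s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1) safer(argv[1]);               // the vulnerable version is shown only for contrast
        return 0;
    }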
Towards Efficient Heap Overflow Discovery
Xiangkun Jia (TCA/SKLCS, Institute of Software, Chinese Academy of Sciences), Chao Zhang (Institute for Network Science and Cyberspace, Tsinghua University), Purui Su, Yi Yang, Huafeng Huang, and Dengguo Feng (TCA/SKLCS, Institute of Software, Chinese Academy of Sciences). In Proceedings of the 26th USENIX Security Symposium, August 16-18, 2017, Vancouver, BC, Canada. ISBN 978-1-931971-40-9. https://www.usenix.org/conference/usenixsecurity17/technical-sessions/presentation/jia

Abstract: Heap overflow is a prevalent memory corruption vulnerability, playing an important role in recent attacks. Finding such vulnerabilities in applications is thus critical for security. Many state-of-the-art solutions focus on runtime detection, requiring abundant inputs to explore program paths in order to reach a high code coverage and luckily trigger security violations.

[...] to attack. As the heap layout is not deterministic, heap overflow vulnerabilities are in general harder to exploit than stack corruption vulnerabilities. But attackers could utilize techniques like heap spray [16] and heap feng-shui [43] to arrange the heap layout and reliably launch attacks, making heap overflow a realistic threat. Several solutions are proposed to protect heap overflow [...]
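For context, a hedged example (ours, not the paper's) of the kind of heap overflow such discovery tools target; the struct and field names are invented.

    #include <cstdlib>
    #include <cstring>

    // A 16-byte buffer allocated on the heap, followed by data an attacker would
    // love to overwrite. An unbounded copy of a long, attacker-controlled string
    // overruns `name` and corrupts whatever follows it on the heap.
    struct session {
        char name[16];
        void (*on_close)(void);   // invented field, a typical overwrite target
    };

    void set_name(session *s, const char *attacker_controlled) {
        std::strcpy(s->name, attacker_controlled);        // heap overflow if input >= 16 bytes
    }

    void set_name_fixed(session *s, const char *input) {
        std::strncpy(s->name, input, sizeof s->name - 1); // bounded copy
        s->name[sizeof s->name - 1] = '\0';
    }

    int main() {
        session *s = static_cast<session *>(std::malloc(sizeof(session)));
        if (s == nullptr) return 1;
        set_name_fixed(s, "short name");
        std::free(s);
        return 0;
    }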
Effective STL
Author: Scott Meyers. E-version made by Strangecat@epubcn; thanks are given to j1foo@epubcn, who helped revise this e-book.

Contents
Containers
  Item 1. Choose your containers with care.
  Item 2. Beware the illusion of container-independent code.
  Item 3. Make copying cheap and correct for objects in containers.
  Item 4. Call empty instead of checking size() against zero.
  Item 5. Prefer range member functions to their single-element counterparts.
  Item 6. Be alert for C++'s most vexing parse.
  Item 7. When using containers of newed pointers, remember to delete the pointers before the container is destroyed.
  Item 8. Never create containers of auto_ptrs.
  Item 9. Choose carefully among erasing options.
  Item 10. Be aware of allocator conventions and restrictions.
  Item 11. Understand the legitimate uses of custom allocators.
  Item 12. Have realistic expectations about the thread safety of STL containers.
vector and string
  Item 13. Prefer vector [...]
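As a flavor of the material, a small example in the spirit of Item 5 above (prefer range member functions to their single-element counterparts); the code is ours, not the book's.

    #include <iterator>
    #include <list>
    #include <vector>

    void copy_second_half(const std::list<int> &src, std::vector<int> &dst) {
        auto mid = src.begin();
        std::advance(mid, static_cast<long>(src.size() / 2));

        // Single-element style: repeated push_back, potentially many reallocations.
        dst.clear();
        for (auto it = mid; it != src.end(); ++it)
            dst.push_back(*it);

        // Range style: one call, clearer and typically more efficient.
        dst.assign(mid, src.end());
    }

    int main() {
        std::list<int> src{1, 2, 3, 4, 5, 6};
        std::vector<int> dst;
        copy_second_half(src, dst);
        return dst.size() == 3 ? 0 : 1;
    }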
Comprehensively and Efficiently Protecting the Heap
Appears in the Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XII), October 2006. Mazen Kharbutli, Xiaowei Jiang, Yan Solihin, Guru Venkataramani, Milos Prvulovic. Jordan Univ. of Science and Technology; North Carolina State University; Georgia Institute of Technology. [email protected]; {xjiang,solihin}@ece.ncsu.edu; {guru,milos}@cc.gatech.edu.

Abstract: The goal of this paper is to propose a scheme that provides comprehensive security protection for the heap. Heap vulnerabilities are increasingly being exploited for attacks on computer programs. In most implementations, the heap management library keeps the heap meta-data (heap structure information) and the application's heap data in an interleaved fashion and does not protect them against each other. Such implementations are inherently unsafe: vulnerabilities in the application can cause the heap library to perform unintended actions to achieve control-flow and non-control attacks. Unfortunately, current heap protection techniques are limited in that they use too many assumptions on how the attacks will be performed, require new hardware support, or require too many [...]

[...] of altering the program behavior when it uses V. Attackers may use a variety of well-known techniques to overwrite M with V, such as buffer overflows, integer overflows, and format strings, typically by supplying unexpected external input (e.g. network packets) to the application. The desired program behavior alteration may include direct control flow modifications in which M contains control flow information such as function pointers, return addresses, and conditional branch target addresses. Alternatively, attackers may choose to indirectly change program behavior by modifying critical data that determines program behavior. Researchers have shown that the stack is vulnerable to overwrites, so many protection schemes have been proposed to protect it.
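A schematic sketch of the interleaved meta-data layout the abstract refers to, showing why an application overflow can corrupt allocator meta-data; the field names are simplified and do not match any particular allocator.

    #include <cstddef>
    #include <cstring>

    // In many allocators, each chunk's header sits directly in front of the user data:
    //   [header A][user data A ...][header B][user data B ...]
    // Writing past the end of "user data A" therefore lands in header B. When the
    // allocator later trusts those corrupted size/link fields (for example during
    // free), it can be tricked into writing to an attacker-chosen address.
    struct chunk_header {
        std::size_t prev_size;   // allocator meta-data, simplified
        std::size_t size;
    };

    void overflow_into_metadata(char *user_data_a, std::size_t alloc_size,
                                const char *input, std::size_t input_len) {
        (void)alloc_size;                            // bug: never checked against input_len
        std::memcpy(user_data_a, input, input_len);  // may spill into the next chunk's header
    }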
Semi-Synchronized Non-Blocking Concurrent Kernel Cruising
IEEE Transactions on Cloud Computing, DOI 10.1109/TCC.2020.2970183 (accepted manuscript). Donghai Tian, Qiang Zeng, Dinghao Wu, Peng Liu, Changzhen Hu.

Abstract: Kernel heap buffer overflow vulnerabilities have been exposed for decades, but there are few practical countermeasures that can be applied to OS kernels. Previous solutions either suffer from high performance overhead or compatibility problems with mainstream kernels and hardware. In this paper, we present KRUISER, a concurrent kernel heap buffer overflow monitor. Unlike conventional methods, whose security enforcement is usually inlined into the kernel execution, Kruiser migrates security enforcement from the kernel's normal execution to a concurrent monitor process, leveraging the increasingly popular multi-core architectures. To reduce the synchronization overhead between the monitor process and the running kernel, we design a novel semi-synchronized non-blocking monitoring algorithm, which enables efficient runtime detection on live memory without incurring false positives. To prevent the monitor process from being tampered with and to provide guaranteed performance isolation, we utilize virtualization technology to run the monitor process outside the monitored VM, while heap memory allocation information is collected inside the monitored VM in a secure and efficient way. The hybrid VM monitoring technique, combined with a secure canary that cannot be counterfeited by attackers, provides guaranteed overflow detection with high efficiency.
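A user-space sketch of the canary idea behind this kind of monitor: every allocation gets a trailing canary word, and a separate routine scans live allocations for corrupted canaries. The real system performs the check concurrently, in the kernel, from outside the monitored VM, with a canary attackers cannot forge; the names and constants below are illustrative only.

    #include <cstdint>
    #include <cstdlib>
    #include <cstring>
    #include <vector>

    static const std::uint64_t kCanary = 0xDEADC0DEDEADC0DEULL;   // illustrative value

    struct tracked_alloc { void *base; std::size_t user_size; };
    static std::vector<tracked_alloc> g_allocs;   // allocation info collected at malloc time

    void *guarded_malloc(std::size_t n) {
        void *p = std::malloc(n + sizeof(kCanary));
        if (p == nullptr) return nullptr;
        std::memcpy(static_cast<char *>(p) + n, &kCanary, sizeof kCanary);  // plant the canary
        g_allocs.push_back({p, n});
        return p;
    }

    // The "monitor": walk the live allocations and count overwritten canaries.
    std::size_t scan_for_overflows(void) {
        std::size_t corrupted = 0;
        for (const auto &a : g_allocs) {
            std::uint64_t c;
            std::memcpy(&c, static_cast<char *>(a.base) + a.user_size, sizeof c);
            if (c != kCanary) ++corrupted;         // a buffer overflow clobbered the canary
        }
        return corrupted;
    }

    int main() {
        char *p = static_cast<char *>(guarded_malloc(8));
        if (p == nullptr) return 1;
        std::memcpy(p, "12345678", 8);             // stays within bounds, canary intact
        bool clean = (scan_for_overflows() == 0);
        std::free(p);
        return clean ? 0 : 1;
    }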