REPFRAME: An Efficient and Transparent Framework for Dynamic Program Analysis


Heming Cui+*, Rui Gu*, Cheng Liu*, and Junfeng Yang*
+The University of Hong Kong   *Columbia University

Abstract

Dynamic program analysis frameworks greatly improve software quality as they enable a wide range of powerful analysis tools (e.g., reliability, profiling, and logging) at runtime. However, because existing frameworks run only one actual execution for each software application, the execution is fully or partially coupled with an analysis tool in order to transfer execution states (e.g., accessed memory and thread interleavings) to the tool, easily causing a prohibitive slowdown for the execution. To reduce the portion of execution states that requires transfer, many frameworks require significant carving of the analysis tools as well as of the frameworks themselves. Thus, these frameworks significantly trade off transparency with analysis tools and allow only one type of tool to run within one execution.

This paper presents REPFRAME, an efficient and transparent framework that fully decouples execution and analysis by constructing multiple equivalent executions. To do so, REPFRAME leverages a recent fault-tolerance technique, transparent state machine replication, which runs the same software application on a set of machines (or replicas) and automatically ensures that all replicas see the same sequence of inputs and process these inputs with the same efficient thread interleavings. In addition, this paper discusses potential directions in which REPFRAME can further strengthen existing analyses. Evaluation shows that REPFRAME easily runs two asynchronous analysis tools together and incurs reasonable overhead.

1 Introduction

Dynamic program analysis frameworks greatly improve software quality as they enable a wide range of powerful analysis tools (e.g., data race detectors [34, 41, 43] for multithreaded applications) at runtime. Existing analysis frameworks can be classified into two approaches depending on how a framework transfers an application's execution states to an analysis tool. Traditional analysis frameworks [8, 31, 34, 38, 41] take a "fully-coupled" approach: the framework inlines an analysis with the execution, so the analysis can inspect all or most execution states. However, this approach easily lets the analysis slow down the actual execution. To improve performance, and leveraging the fact that many analyses can be done asynchronously with the actual execution, some recent frameworks [10, 20, 22, 35, 42, 43] take a second, "partially-decoupled" approach: only analysis-critical execution states (e.g., effective memory addresses and thread interleavings) are transferred to the analysis running on other CPU cores, so the actual execution is less perturbed by the analysis.
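To make the two approaches concrete, the C++ sketch below shows the "partially-decoupled" idea in miniature: the application thread only records analysis-critical events (here, effective memory addresses) and ships them over a queue to an analysis thread that can run on another core. This is a minimal illustration under simplifying assumptions; all names and the queue design are invented for exposition and are not the API of any framework cited above.

```cpp
#include <cinttypes>
#include <condition_variable>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// One analysis-critical event: the effective address of a memory access.
struct MemEvent { std::uintptr_t addr; std::uint32_t size; bool is_write; };

// Channel from the instrumented application thread to the analysis thread.
class EventChannel {
public:
    void push(MemEvent e) {                   // cheap, runs inline in the app
        { std::lock_guard<std::mutex> g(mu_); q_.push(e); }
        cv_.notify_one();
    }
    bool pop(MemEvent& out) {                 // runs on the analysis core
        std::unique_lock<std::mutex> l(mu_);
        cv_.wait(l, [&] { return !q_.empty() || closed_; });
        if (q_.empty()) return false;         // closed and fully drained
        out = q_.front(); q_.pop();
        return true;
    }
    void close() {
        { std::lock_guard<std::mutex> g(mu_); closed_ = true; }
        cv_.notify_all();
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<MemEvent> q_;
    bool closed_ = false;
};

int main() {
    EventChannel chan;
    std::thread analyst([&] {                 // heavyweight analysis, off the critical path
        MemEvent e;
        while (chan.pop(e))
            std::printf("analyze %s of %" PRIu32 " bytes at 0x%" PRIxPTR "\n",
                        e.is_write ? "write" : "read", e.size, e.addr);
    });
    int x = 0;                                // stands in for the actual execution
    chan.push({reinterpret_cast<std::uintptr_t>(&x),
               static_cast<std::uint32_t>(sizeof x), true});
    chan.close();
    analyst.join();
}
```

In the fully-coupled approach, the analysis body would instead run inline at every instrumented access, which is exactly why heavyweight analyses slow the execution down.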
Unfortunately, despite these great efforts, most existing analysis frameworks are still hard to deploy in production runs, mainly due to three problems. The first problem is, still, performance. Traditional frameworks fully couple the analysis with the actual execution using the shadow memory approach, so whenever an analysis does some heavyweight work, the execution is slowed down (e.g., ThreadSanitizer [41], a popular race detection tool built on a traditional framework, incurs a 20X~100X slowdown for many programs). Recent frameworks that take the "partially-decoupled" approach have been shown to run 4X~8X faster [22, 43] than traditional frameworks because they transfer fewer execution states. However, the overall slowdown of these recent frameworks is still prohibitive in their own evaluations, because the amount of execution states (e.g., effective memory addresses [22] and thread interleavings [43]) transferred to the analysis tools is still enormous.

The second problem is that recent frameworks with the "partially-decoupled" approach heavily trade off transparency with analysis tools. To transfer fewer execution states to an analysis tool, these frameworks require heavy carving of the analysis tool as well as of the transferred execution states. For example, a recent framework [43] for race detection tools leverages the record-replay technique [24, 25, 29] to reduce analysis work in the actual execution, but it requires significantly carving race detection tools into several phases to adapt to record-replay, which makes the tools largely different from typical ones. (Note that an analysis tool itself is already quite difficult to implement and maintain.)

The third problem is that existing frameworks, both traditional and recent, have not been shown to run multiple types of analysis tools together. Multiple analysis tools bring multiple powerful guarantees, which is attractive for today's applications. However, traditional "fully-coupled" frameworks typically take the shadow memory approach to hold analysis results for each memory byte in the actual execution, and different analyses may require different copies of shadow memory per byte; it is not trivial to extend these frameworks to support multiple copies of shadow memory. Considering "partially-decoupled" frameworks, it is not trivial to extend them to support multiple types of analysis tools within one execution either, because these frameworks heavily carve the transferred execution states, which are hard to reuse across different types of analysis tools.
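The shadow memory approach mentioned above can be pictured as a table of analysis state addressed by application address. The toy C++ sketch below assumes a hash-map shadow table and an illustrative one-byte state encoding; real frameworks use constant-offset or multi-level-table address translation instead, so this shows only the data-structure idea.

```cpp
#include <cstdint>
#include <unordered_map>

// Toy shadow memory: one byte of analysis state per application byte.
// The encoding of the state byte is tool-specific and illustrative here.
class ShadowMemory {
public:
    void set(std::uintptr_t app_addr, std::uint8_t state) {
        shadow_[app_addr] = state;
    }
    std::uint8_t get(std::uintptr_t app_addr) const {
        auto it = shadow_.find(app_addr);
        return it == shadow_.end() ? 0 : it->second;  // 0 = "no information"
    }
private:
    std::unordered_map<std::uintptr_t, std::uint8_t> shadow_;
};

// Two tools need two incompatible tables: a memory checker stores
// addressability bits per byte, while a race detector stores per-location
// vector-clock state, which is why one execution cannot trivially host both.
```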
In sum, a fundamental reason for the three problems in existing analysis frameworks is that they run only one actual execution, so an analysis tool has to be fully or partially coupled with that execution in order to inspect execution states. What if one could construct multiple equivalent executions efficiently and transparently?

To address this question, this paper presents REPFRAME, an efficient, transparent dynamic program analysis framework that leverages transparent state machine replication, a technique presented in CRANE [16]. This technique runs the same multithreaded application on a set of machines (or replicas) transparently, without modifying the application, and enforces that all replicas perform the same execution state transitions. To do so, CRANE combines two techniques: state machine replication (SMR) and deterministic multithreading (DMT). SMR ensures that all replicas always see the same sequence of inputs, as long as a majority of replicas agree on these inputs. DMT efficiently enforces the same thread interleavings across replicas on each input.
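As a rough illustration of the DMT half of this design, the toy scheduler below serializes synchronization operations in round-robin order: if every thread brackets each synchronization operation with wait_turn()/pass_turn(), then replicas that receive the same inputs also see the same interleavings of those operations. This is a concept sketch under simplifying assumptions (fixed thread count, cooperative threads), not CRANE's actual DMT runtime, which is far more sophisticated and efficient.

```cpp
#include <condition_variable>
#include <mutex>

// Toy deterministic scheduler: synchronization operations happen in a
// fixed round-robin order of thread ids, independent of OS scheduling.
class DeterministicScheduler {
public:
    explicit DeterministicScheduler(int nthreads) : n_(nthreads) {}

    void wait_turn(int tid) {        // block until it is tid's turn
        std::unique_lock<std::mutex> l(mu_);
        cv_.wait(l, [&] { return turn_ == tid; });
    }
    void pass_turn() {               // deterministically hand off the turn
        std::lock_guard<std::mutex> g(mu_);
        turn_ = (turn_ + 1) % n_;
        cv_.notify_all();
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    const int n_;
    int turn_ = 0;
};
```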
Leveraging CRANE, REPFRAME can run analysis tools on some replicas while the other replicas still run actual executions and process client requests fast. However, to achieve this goal, REPFRAME must address three practical challenges. First, even asynchronous analysis tools (e.g., race detectors) may sometimes decide to roll back to previous execution states and discard malicious inputs that triggered harmful events. To address this challenge, REPFRAME provides a transparent application-level checkpoint mechanism with ex[…]

Second, […] this paper discusses several types of analyses in which REPFRAME has the potential to strengthen and speed up existing tools themselves via REPFRAME's replication architecture. In addition, this paper points out that existing race detection tools can actually speed up REPFRAME. In a word, REPFRAME and existing analysis tools form a mutually beneficial ecosystem.

Third, despite much effort, the state of the art still lacks an evaluation showing that different types of analysis tools can actually run together. To address this challenge, we evaluated REPFRAME on ClamAV [11], a popular parallel anti-virus scanning server, with three replicas (each with 24 cores) in Linux. Our evaluation shows that REPFRAME is able to transparently run two analysis tools together: one is a heavyweight analysis tool, the Helgrind race detector [34]; the other is a lightweight analysis tool, DynamoRIO's code coverage tool drcov [8]. Moreover, REPFRAME incurred merely 2.1% overhead over the actual execution.

The main contribution of REPFRAME is the idea of applying transparent state machine replication to fully decouple an application's actual execution from analysis tools, which benefits applications, analysis tools, and frameworks. Note that: (1) unlike CRANE, REPFRAME does not aim to provide fault tolerance, but to construct multiple equivalent executions so that applications can enjoy efficient and transparent analyses; and (2) REPFRAME does not aim to replace traditional "fully-coupled" frameworks, but to complement them, because REPFRAME can easily run them on replicas.

In the remainder of this paper, §2 introduces the background of the CRANE system. §3 gives an overview of REPFRAME, including its deployment model, its checkpoint design for analysis tools, and its potential benefits. §4 presents evaluation results, §5 introduces related work, and §6 concludes.

2 CRANE Background

CRANE's deployment model is similar to that of a typical SMR system. In a CRANE-replicated system, a set of 2f+1 machines (nodes) is set up within a LAN, and each node runs an instance of CRANE containing the same server program. Once the CRANE system starts, one node becomes the primary node, which proposes the order of requests to execute, and the others become backup nodes, which follow the primary's proposals. An arbitrary number of clients in the LAN or WAN send network requests to the primary and get responses. If failures occur, the nodes run a leader election to elect a new leader and continue.
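The request-ordering contract in this deployment model can be sketched as follows: the primary assigns a global sequence number to each client request, and every replica executes requests strictly in sequence order, buffering any that arrive early. The C++ sketch below is a minimal illustration under that assumption; Paxos agreement, leader election, and failure handling are deliberately omitted, so it is not CRANE's protocol implementation.

```cpp
#include <cstdint>
#include <map>
#include <string>

struct OrderedRequest { std::uint64_t seq; std::string payload; };

// The primary proposes a total order over client requests.
class Primary {
public:
    OrderedRequest propose(std::string req) {
        return {next_seq_++, std::move(req)};
    }
private:
    std::uint64_t next_seq_ = 0;
};

// Every replica (primary and backups) executes in sequence order, so all
// replicas make the same state transitions on the same inputs.
template <typename Server>
class Replica {
public:
    explicit Replica(Server& s) : server_(s) {}

    void deliver(OrderedRequest r) {
        pending_.emplace(r.seq, std::move(r.payload));
        while (!pending_.empty() && pending_.begin()->first == next_exec_) {
            server_.execute(pending_.begin()->second);  // hypothetical server hook
            pending_.erase(pending_.begin());
            ++next_exec_;
        }
    }
private:
    Server& server_;
    std::map<std::uint64_t, std::string> pending_;      // out-of-order buffer
    std::uint64_t next_exec_ = 0;
};
```

Under this contract, a replica that runs an analysis tool observes exactly the same ordered input stream as the replicas serving clients, which is what makes the analyzed execution equivalent to the actual one.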