The Future of Grey-Box Fuzzing


Isak Hjelt
VT 2017
Degree project (Examensarbete), 15 hp
Supervisor: Pedher Johansson
Examiner: Kai-Florian Richter
Bachelor's program in Computing Science, 180 hp

Abstract

Society is becoming more dependent on software, and more artifacts are being connected to the Internet each day [31]. This makes tracking down vulnerabilities in software a moral obligation for software developers. Since manual testing is expensive [7], automated bug finding techniques are attractive within the quality assurance field, as they can save companies a lot of money. This thesis summarizes the research on an automated bug finding technique called grey-box fuzzing, with the goal of saying something about its future. Grey-box fuzzing is a variant of fuzzing; the basic concept of fuzzing is to provide random data as input to an application in order to test it for bugs. To portray the current state of grey-box fuzzing, two tools relevant to the current research are presented and discussed. A definition of grey-box fuzzing is also extracted from the research papers by looking at what they all have in common. Combining fuzzing with symbolic execution and combining it with dynamic taint analysis are two of the approaches this work identifies and discusses, and it argues that dynamic taint analysis is the more promising of the two for the future. Lastly, the trend within fuzzing is predicted to move further towards the grey-box style, leading to grey-box fuzzing rising in popularity.

Acknowledgements

I would like to thank my supervisor Pedher Johansson for helping me guide this thesis in the right direction and always answering all my questions to the best of his ability. I also want to thank my sister for proofreading this thesis and giving me tips on how to improve my writing. Last, but by no means least, thanks to my significant other for always motivating and supporting me during my three years of study.

Contents

1 Introduction
1.1 Purpose of the thesis
2 The basics of fuzzing
2.1 Black-box fuzzing
2.2 White-box fuzzing
3 Grey-box fuzzing
3.1 Strengths and weaknesses
4 Grey-box fuzzers
4.1 AFL
4.2 VUzzer
5 Grey-box fuzzing now and in the future
5.1 Combining techniques
5.2 Dynamic taint analysis
6 Discussion and conclusion
6.1 Conclusion regarding the future
6.2 Conclusion validity
6.3 Importance of categorization
6.4 When should grey-box fuzzing be used?
References

1 Introduction

With a society that becomes increasingly dependent on technology, finding and eliminating bugs in software is essential. Many people depend on software; it needs to be available when they need it and to work the way they expect. Security bugs in software can have devastating impacts on companies and might cost them enormous amounts of money [47]. To stand up to the competition from other companies nowadays, you will most likely be forced to rely on software whether you like it or not. Research into better software testing methods cannot stop; it would be morally wrong if researchers stopped trying to come up with new, smart techniques to find bugs in code. A lot of time and money is spent on testing; often 50% of a software project's expenses go towards manual testing [7]. Creating tests manually is expensive, error-prone, and most of the time inconclusive [7].
Vulnerabilities in software are often caused by bugs in the code that escape detection by software quality assurance. Despite the efforts to make software more resistant to security vulnerabilities, research from recent years suggests that vulnerabilities in software are more common than ever [35]. This is a big problem to fight, and the internet of things phenomenon [31] does not make it easier or any less important. Bugs are commonly hidden in paths of the code that rarely get executed. Error-prone software such as parsers or decoders that handle complex file formats must be able to handle many different inputs and corner cases. Bugs in software that handles files or any other complex input can have serious consequences, which may lead to security exploits [10]. Sometimes companies purposely ignore their responsibility to test their applications for these security faults and go for a less demanding approach [33].

Because of the security testing problems stated above, automated security testing has become more popular. One complementing testing technique that has emerged is called fuzzing. Fuzzing can be used to test applications where the space of possible inputs is large. The technique is used to see how well an application handles unexpected inputs and thereby reveal bugs. Fuzzing generates input in a random fashion and feeds it to the program. The input might cause an exception, make the program do something valid, or even put the program in an invalid state that was never thought of, thus revealing a bug. Generating input for a program is mainly done in two ways: grammar-based fuzzing, which uses a model or format specification to generate input, or mutational fuzzing, which starts with a valid input and changes it in a random way (a minimal sketch of the latter is given at the end of this introduction).

Grey-box fuzzing is a subject without an excessive amount of documentation, which is part of the motivation for this study. It can briefly be explained as a fuzzing technique located between black- and white-box fuzzing, where black-box means only looking at the input/output of the system being tested, and white-box indicates access to source code. Grey-box fuzzers do not require source code, but are still more refined than black-box fuzzers since they can glean information regarding the internal state of the system being tested, usually using dynamic or static analysis at the binary level [27].

Hackers often exploit bugs in software for their attacks, and they mainly use two methods to find them [17]. One is reverse engineering binaries to recover something as close as possible to the original source code. Once the source code is retrieved, it can be inspected with the intention of finding security flaws. The other method is to use black-box fuzzing on the software to reveal bugs. This method is fruitful for hackers, since the process of feeding a system with random permutations of data can easily be automated and run until a bug occurs. Finding one bug in a piece of software is easy compared to finding all of them; even an unsophisticated fuzzer might do the job, but finding one bug might be all it takes for a malicious hacker to do damage. To fight this problem, ethical security specialists need more sophisticated fuzzing methods to find and fix bugs before they can be found and exploited by hackers.
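To make the mutational style of input generation described above concrete, the sketch below randomly perturbs a valid seed input. The seed value and the three mutation operators are illustrative assumptions, not taken from any specific fuzzer discussed in this thesis.

```python
import random

def mutate(seed: bytes, max_mutations: int = 8) -> bytes:
    """Return a randomly mutated copy of a valid seed input.

    Illustrative mutation operators: overwrite a byte, insert a random
    byte, or delete a byte. Real mutational fuzzers use many more.
    """
    data = bytearray(seed)
    for _ in range(random.randint(1, max_mutations)):
        choice = random.random()
        if choice < 0.5 and data:      # overwrite a byte
            data[random.randrange(len(data))] = random.randrange(256)
        elif choice < 0.8:             # insert a byte
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif data:                     # delete a byte
            del data[random.randrange(len(data))]
    return bytes(data)

# Hypothetical seed: any small, valid input for the target format works.
seed = b'{"name": "example", "value": 42}'
print(mutate(seed))
```

Starting from a valid seed means most mutants remain close enough to the expected format to get past a target's earliest parsing checks, which is what makes mutational fuzzing attractive compared with purely random generation.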
1.1 Purpose of the thesis

There exists no generally accepted method to distinguish grey-box fuzzing from the other two fuzzing techniques, white-box and black-box [20]. The purpose of this thesis is to clarify the term grey-box fuzzing and analyze the tools that employ the technique, thus giving an understanding of where grey-box fuzzing research stands today and where it is heading. With the hope of saying something about the future of grey-box fuzzing, these are the aspects that have been focused on:

• Strategies the recent papers are presenting, and how they compare to other approaches.
• The current state of the research on the techniques that grey-box fuzzers utilize.
• Things that are considered to be obstacles within the field of research.

2 The basics of fuzzing

The idea of fuzzing was first described by Miller in 1989 [21]. The basic idea of fuzzing is to use random strings as input to a monitored piece of software with the intention of uncovering bugs. Fuzzing is an automated or semi-automated technique where the input can be based on knowledge of the program internals, totally random, or based on some kind of initial seed. It is typically used to test applications that take structured files as input, but can also be used for other things such as network protocols. The term is another word for interface robustness testing [14], where the interface is the attack surface and usually the thing available to the users. Fuzzing is a type of security testing but should not be confused with penetration testing.

The technique is used to test how well a system handles unexpected input, usually with the intention of finding memory-related errors like buffer overflows, heap overflows, stack overflows and the like. Fuzzing might trigger any kind of bug, but deciding when a bug has occurred is a job for the test oracle [2]. Determining when a bug has occurred is a hard problem that requires an oracle to know what behavior to expect given a certain input. If a fuzzer manages to trigger a bug, but nothing registers its occurrence, the work of triggering the bug is wasted. Research on the fuzzing technique has enhanced the knowledge of random input generation, but fuzzers still suffer from a bottleneck in the form of the test oracle problem [2]. This topic is important for automated testing techniques, and improving it could have a big impact on the future.

2.1 Black-box fuzzing

Black-box fuzzing is what we usually call traditional fuzzing.
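As a rough, minimal sketch of what such a traditional fuzzing loop can look like, the harness below feeds randomly generated byte strings to a hypothetical command-line target and uses abnormal process termination as its only oracle. The target path, the input sizes, and the POSIX-style crash check are illustrative assumptions rather than part of the thesis.

```python
import random
import subprocess
import tempfile

TARGET = "./parse_file"   # hypothetical binary under test

def crashes(data: bytes) -> bool:
    """Run the target once on the given input; use abnormal termination as the oracle."""
    with tempfile.NamedTemporaryFile(suffix=".bin") as f:
        f.write(data)
        f.flush()
        try:
            proc = subprocess.run([TARGET, f.name], capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            return False  # hangs are interesting too, but only crashes count here
    # On POSIX, a negative return code means the process was killed by a signal
    # (e.g. SIGSEGV), which we take as evidence of a triggered bug.
    return proc.returncode < 0

for i in range(10_000):
    # Purely random input: no knowledge of the target's expected format is used.
    fuzz_input = bytes(random.randrange(256) for _ in range(random.randint(1, 512)))
    if crashes(fuzz_input):
        with open(f"crash_{i}.bin", "wb") as out:
            out.write(fuzz_input)
        print(f"input {i} crashed the target")
```

A real black-box fuzzer would typically add a corpus of seed inputs, deduplicate crashes, and record more context about each failure, but the generate/run/observe loop stays the same.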