The SLAM Project: Debugging System Software via Static Analysis*

Thomas Ball and Sriram K. Rajamani
{tball,sriram}@microsoft.com
Microsoft Research
http://research.microsoft.com/slam/

Abstract. The goal of the SLAM project is to check whether or not a program obeys "API usage rules" that specify what it means to be a good client of an API. The SLAM toolkit statically analyzes a C program to determine whether or not it violates given usage rules. The toolkit has two unique aspects: it does not require the programmer to annotate the source program (invariants are inferred), and it minimizes noise (false error messages) through a process known as "counterexample-driven refinement". SLAM exploits and extends results from program analysis, model checking and automated deduction. We have successfully applied the SLAM toolkit to Windows XP device drivers, to both validate behavior and find defects in their usage of kernel APIs.

Context. Today, many programmers are realizing the benefits of using languages with static type systems. By providing simple specifications about the form of program data, programmers receive useful compile-time error messages or guarantees about the behavior of their (type-correct) programs. Getting additional checking beyond the confines of a particular type system generally requires programmers to use assertions and perform testing. A number of projects have started to focus on statically checking programs against user-supplied specifications, using techniques from program analysis [18, 19], model checking [21, 17, 22], and automated deduction [16, 12].

Specification. The goal of the SLAM project is to check temporal safety properties of sequential C programs [7]. Roughly stated, temporal safety properties are those properties whose violation is witnessed by a finite execution trace (see [24] for a formal definition). A simple example of a safety property is that a lock should be alternately acquired and released. We encode temporal safety properties in a language called SLIC (Specification Language for Interface Checking) [9], which allows the definition of a safety automaton [30, 29] that monitors the execution behavior of a program at the level of function calls and returns. The automaton can read (but not modify) the state of the C program that is visible at the function call/return interface, maintain a history, and signal when a bad state occurs. We have developed SLIC specifications for a variety of Windows XP driver properties, ranging from simple locking properties (such as the one given above) to complex properties dealing with completion routines, plug-and-play, and power management.

Given a program P and a SLIC specification S, a preprocessor creates an instrumented program P' such that a unique label ERROR is reachable in P' if-and-only-if P does not satisfy S. The goal then shifts to determining whether or not the ERROR label is reachable in P', a generally undecidable problem.
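For concreteness, here is a sketch of what such a specification looks like for the locking rule above. The syntax only approximates the SLIC language defined in [9], and the Windows kernel spin-lock routines are used as the monitored interface purely for illustration:

    state {
        enum { Unlocked, Locked } s = Unlocked;   /* automaton state */
    }

    KeAcquireSpinLock.entry {
        if (s == Locked) abort;                   /* acquired twice in a row */
        else s = Locked;
    }

    KeReleaseSpinLock.entry {
        if (s == Unlocked) abort;                 /* released while not held */
        else s = Unlocked;
    }

The preprocessor weaves such handlers into the program, so that any execution on which abort fires corresponds to reaching the ERROR label in the instrumented program P'.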
Design. The basic design of the SLAM process is to iterate the creation, analysis and refinement of program abstractions, until either a feasible execution path in P' to ERROR is found, the program P' is validated (ERROR is shown not to be reachable), or we run out of resources or patience.

The SLAM process creates a sound boolean program abstraction B' of the C program P'.[1] Boolean programs have all the control-flow constructs of C programs, but contain only boolean variables. Each boolean variable in B' conservatively tracks the state of a predicate (a boolean expression) in the C program. Boolean programs are created automatically using the technique of predicate abstraction [20]. If a reachability analysis of B' determines that the label ERROR is not reachable in B', then it is not reachable in P' either.

It is possible that B' is too coarse an abstraction of P' (that is, ERROR is reachable in B' via a path p, but ERROR is not reachable in P' via p). We apply a method known as counterexample-driven refinement [23, 28, 27] to create a more precise boolean program (by adding new predicates/boolean variables) that does not contain the spurious path p (or other paths that are spurious for the same reason p is). Termination of the SLAM process is addressed below.

We expect the SLAM process to work well for programs whose behavior is governed by an underlying finite state protocol. Seen in this light, the goal of SLAM is to tease out the underlying "protocol" state machine from the code, to a level of precision that is good enough to find real errors or validate the code. For example, while a video card driver may have a huge data path, most of this data has no bearing on the driver's interaction with the operating system. However, some of the driver data definitely are relevant to this interaction, and correlations between these data may need to be tracked.

* Presented by the first author.

[1] Of course, whenever one hears a claim that an analysis of C code is "sound", one must ask "sound with respect to what assumptions?", because of the (potential) use of arbitrary pointer arithmetic. In SLAM, we guarantee soundness under the assumption that the C program obeys a "logical memory model" in which the expressions *p and *(p+i) refer to the same object. Another analysis (see the work on CCured presented at this symposium [25]) is needed to discharge the "logical memory" assumption.
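To make the abstraction step concrete, consider a hypothetical driver-like fragment and its boolean program image under the single predicate (count > 0). Both the fragment and the predicate are invented for illustration, and the boolean program is rendered in C-like pseudocode (with * denoting nondeterministic choice) rather than the exact syntax consumed by the tools described below:

    /* Hypothetical C fragment (lock calls already instrumented): */
    void worker(int count) {
        acquire_lock();
        if (count > 0) {
            count = count - 1;
            release_lock();
            return;
        }
        release_lock();
    }

    /* Predicate abstraction over E = { count > 0 }: the integer count
       disappears; the boolean b conservatively tracks the predicate. */
    void worker(bool b) {
        acquire_lock();
        if (b) {
            b = *;       /* after count - 1, (count > 0) may be either value */
            release_lock();
            return;
        }
        release_lock();
    }

Because the boolean program has only finitely many variable valuations at each program point, reachability of ERROR can be decided on the abstraction, even though it is undecidable on the original C program.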
Implementation. Three basic tools comprise the SLAM toolkit (in addition to the SLIC preprocessor):

• C2BP, a tool that transforms a C program P into a boolean program BP(P, E) with respect to a set of predicates E [2, 3]. C2BP translates each procedure of the C program separately, enabling it to scale to large programs. Using the theory of abstract interpretation [13], we have characterized the precision of the boolean program abstractions created by C2BP [4].

• BEBOP, a tool for performing reachability analysis of boolean programs [6, 8]. BEBOP combines interprocedural dataflow analysis in the style of [26] with Binary Decision Diagrams [10, 11] (BDDs) to efficiently represent the reachable states of the boolean program at each program point.

• NEWTON, a tool that discovers additional predicates to refine the boolean program, by analyzing the feasibility of paths in the C program.

The SLAM process starts with an initial set of predicates E0 derived from the SLIC specification, and iterates the following steps (a pseudocode restatement of the loop appears at the end of this section):

1. Apply C2BP to construct the boolean program BP(P', Ei).

2. Apply BEBOP to check if there is a path pi in BP(P', Ei) that reaches the ERROR label. If BEBOP determines that ERROR is not reachable, then P satisfies the SLIC specification and the process terminates.

3. If there is such a path pi, then NEWTON checks whether pi is feasible in P'.

In practice, the SLAM process has terminated for all drivers within 20 iterations. Our major concern has been with the overall running time of the process. So far, we are able to analyze programs on the order of 10,000 lines, and abstractions with several hundred boolean variables, in the range of minutes to a half hour. In practice, we find that most of the predicates SLAM generates are simple equalities with possible pointer dereferences. For this class of predicates, we believe it is possible to scale the SLAM process to several hundred thousand lines of code through the optimizations outlined below.

A major expense in the SLAM process is the reachability step (BEBOP), which has worst-case running time O(N(GL)^3), where N is the size of the boolean program, G is the number of global states, and L is the maximum number of local states over all procedures. The number of states is exponential (in the worst case) in the maximal number of variables in scope.

The key to scaling the SLAM process is controlling the complexity of the boolean program abstraction. Satyaki Das has implemented a predicate abstraction technique based on successive approximations [15] in the SLAM toolkit, which has proven quite useful in this regard. Also relevant here is the paper on "lazy abstraction" in this symposium [21].

Additionally, there is substantial overhead in having to iterate the SLAM process many times, which can be addressed both by the "lazy abstraction" method and by methods for heuristically determining a "good" initial set of predicates. Westley Weimer has implemented an algorithm in SLAM that, given the set of predicates present in the SLIC specification, determines what other predicates in the C program will very likely be needed in the future. This technique, based on the value-flow graph [14], greatly reduces the number of iterations of the SLAM process.

Challenges. We summarize by discussing some of the challenges inherent in the endeavor of checking user-supplied properties of software.
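To summarize, the iteration described under Implementation can be restated compactly in code. The sketch below is hedged C, not the tools' actual interfaces: every type and helper name is a hypothetical stand-in for C2BP, BEBOP and NEWTON, and the final refinement step follows the Design section's account of counterexample-driven refinement (adding the predicates that rule out the spurious path):

    #include <stddef.h>

    /* Hypothetical stand-ins for the SLAM tool interfaces: */
    typedef struct Program Program;
    typedef struct BoolProgram BoolProgram;
    typedef struct PredicateSet PredicateSet;
    typedef struct Path Path;
    typedef enum { VALIDATED, DEFECT_FOUND } Result;

    BoolProgram  *c2bp(Program *p, PredicateSet *e);       /* abstraction    */
    Path         *bebop_path_to_error(BoolProgram *b);     /* reachability   */
    int           newton_is_feasible(Program *p, Path *t); /* path checking  */
    PredicateSet *newton_explain(Program *p, Path *t);     /* new predicates */
    PredicateSet *predicate_union(PredicateSet *a, PredicateSet *b);

    Result slam(Program *Pprime, PredicateSet *E /* E0, from the SLIC spec */)
    {
        for (;;) {
            BoolProgram *B = c2bp(Pprime, E);      /* step 1: abstract     */
            Path *p = bebop_path_to_error(B);      /* step 2: model check  */
            if (p == NULL)
                return VALIDATED;      /* ERROR unreachable in B, hence in P' */
            if (newton_is_feasible(Pprime, p))     /* step 3: concretize   */
                return DEFECT_FOUND;               /* a real trace to ERROR */
            E = predicate_union(E, newton_explain(Pprime, p));
            /* refine: the new predicates rule out the spurious path p */
        }
    }

Nothing guarantees that this loop terminates in general, but as noted above it has converged within 20 iterations for all the drivers analyzed so far.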