Comparing Model Checking and Static Program Analysis: A Case Study in Error Detection Approaches

Kostyantyn Vorobyov and Padmanabhan Krishnan
Bond University, Gold Coast, Australia

Abstract

Static program analysis and model checking are two different bug detection techniques that perform error checking statically, without running the program. In general, static program analysis determines run-time properties of programs by examining the code structure, while model checking explores the relevant states of the computation.

This paper reports on a comparison of the two approaches via an empirical evaluation of tools that implement them: CBMC, a bounded model checker, and Parfait, a static program analysis tool. These tools are run over code repositories with known bugs that are marked in the code. We report on the performance of these tools and give statistics of found and missed bugs. Our results illustrate the relative strengths of each approach.

1 Introduction

The safety of software can be a mission-critical issue in computer systems. A significant number of program errors are still found even in software that has been thoroughly tested. Such flaws can lead to situations that severely compromise a system's security. This is especially the case for low-level system software (such as operating systems), which is usually implemented in the C programming language.

While using C has a number of benefits (including access to system resources and relatively small code size), there are many sources of potential flaws. These errors increase the level of vulnerability, exposing systems to a significant number of attacks. This often leads to very serious consequences, including inconsistent and unpredictable program behaviour, undetectable disclosure of confidential data, run-time errors and system crashes. Consequently, there is a need for effective detection of such errors.

There are many general approaches to bug detection, including manual code inspection, testing [25], static and run-time analyses [24], model checking [23], or a combination of these techniques. While testing remains the principal approach to detecting bugs, recent advances have made it possible to consider other techniques, such as static program analysis and model checking, for large systems.

Static program analysis is the automatic determination of run-time properties of programs [12]: it considers run-time errors at compilation time, without code instrumentation or user interaction. Static analysis builds an abstract representation of the program behaviour and examines its states. To do so, the analyser maintains extra information about the checked program, which has to be an approximation of the real behaviour. Static analysis can be sound, that is, if a program is declared to be safe, it is indeed safe. But owing to the nature of the approximations, the analysis may report errors that do not actually exist. Analyses that are not sound (especially those involving pointers in C) are also used [20].

Model checking [23, 11], on the other hand, computes the run-time states of the program without running it. The generated run-time states are used to check whether a property holds. If the program has a finite number of states, an exhaustive analysis is possible. The program is said to be verified if its behaviour satisfies all properties in the specification. If an execution path that violates any of the properties can be found, the program fails verification. The result returned by a model checker is thus either a report of successful verification or a counterexample: an execution path that violates a given property.
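As an illustration (the code and names here are ours, not taken from the study), consider a small C function with a user-specified assertion. A model checker exploring all execution paths would report the path on which the assertion fails as a counterexample:

    #include <assert.h>

    /* Illustrative only: the assertion encodes the property
     * "the divisor is never zero". */
    int scaled(int x, int flag)
    {
        int d = 2;
        if (flag)
            d = 0;       /* this path makes the divisor zero */
        assert(d != 0);  /* the property being checked */
        return x / d;
    }

Here verification must fail: any execution with flag != 0 violates the assertion, and the model checker reports that path as the counterexample rather than merely flagging the division.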
If the program is not finite state, certain kinds of approximation are necessary, or the model check may not terminate. For instance, one can use bounded model checking [5, 9], where the number of states explored is bounded. Such an approach is not sound, since a bug can exist beyond the bound; however, all bugs within the bound can be identified.
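To sketch what the bound means in practice (again, our own illustrative code and numbers), suppose the number of loop unwindings is bounded at 10:

    /* Illustrative only: with an unwinding bound of 10, a bounded
     * model checker explores at most the first 10 iterations of this
     * loop. The out-of-bounds write happens when i == 100, well
     * beyond the bound, so it is not found unless the bound is
     * raised. */
    void fill(void)
    {
        int a[100];
        int i;
        for (i = 0; i <= 100; i++)  /* off-by-one: i == 100 writes a[100] */
            a[i] = i;
    }

Within the first 10 iterations every access is in bounds, so no bug is reported; a bound of at least 101 unwindings is needed to expose the error. This is the sense in which bounded model checking is complete within the bound but unsound beyond it.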
Based on these characteristics of model checking and static program analysis, one usually makes the following general observations. In terms of running time and resource consumption, model checking is a more expensive approach to bug detection than static program analysis; this has been the motivation for developing static analysers for large code bases [8]. However, static analysis is not as accurate as model checking, so a model checker should be able to produce more precise answers. An interesting question is whether the accuracy of the results from model checking justifies the extra resources it consumes.

Engler and Musuvathi [15, 14] demonstrate results that dispel some of these common beliefs about model checking versus static analysis. Our work is similar in intent (i.e., we check whether these beliefs hold and, if so, under what circumstances), although there are a few key differences. One is that we use standard code bases with known bugs. Furthermore, we do not explicitly write any specification for model checking; we check only a single property, viz., buffer overflow. Thus our comparison is about the tools for the buffer overflow property. We also compare the behaviour of the tools with respect to precision and performance.

In this paper we present our results comparing static analysis and model checking on benchmarked code bases, i.e., code bases with known, marked bugs. Our aim is to find out whether it is possible to determine when to use static analysis and when to use model checking. Firstly, we need to determine whether the two approaches are comparable at all and, if so, establish how such a comparison may be performed and identify the specific comparison criteria.

2 Experimental Setup

There are many possible tools that we could have chosen, including lint (a consistency and plausibility static bug checker for C programs) [18], PolySpace (a static analysis tool for measuring dynamic software quality attributes) [13], Splint (an open-source lightweight static analysis tool for ANSI C programs) [16], BLAST (a model checker that uses lazy predicate abstraction and interpolation-based predicate discovery) [4], SLAM (a symbolic model checking, program analysis and theorem proving tool) [3], and SATURN (a Boolean-satisfiability-based framework for static bug detection) [27]. We chose Parfait and CBMC for this comparison; the choice was dictated by their availability and their prior use on large programs.

CBMC is a bounded model checker for ANSI C programs [10]; the bound is imposed by limiting the number of loop unwindings and the depth of recursive function calls. CBMC can be used to verify array bounds, pointer safety, exceptions and user-specified assertions. Additionally, CBMC checks whether sufficient unwinding has been done [1], ensuring that no bugs exist beyond the bound. If a formula is verified but CBMC cannot prove that sufficient unwinding has been performed, the claim fails verification. The program is verified when all identified formulas are verified and no unwinding violations exist.

Parfait [8, 6] is a static bug checker for C and C++ source code designed by Sun Microsystems Laboratories (now Sun Labs at Oracle). The tool was created specifically for bug checking of large source code bases (millions of lines of code). Parfait operates by defining an extensible framework composed of layers of program analyses. Initially, Parfait generates a sound list of bugs: one that contains all possible program errors, including ones that do not exist. Parfait then iterates over this list, applying a series of analyses, either accepting a bug as real and moving it to the set of real detections, or rejecting it as non-existent. The analyses are ordered from least to most expensive and are applied so that each bug is detected in the cheapest possible way. We note that Parfait differs from other static program analysis tools in that it was designed to have a low false positive rate (i.e., to report only bugs that are proven to be real during the analysis).

Cifuentes et al. [7] have identified common pitfalls in evaluating error checking tools and provide a framework for comparing them. Using this framework, we focus on precision, which relates the number of correctly reported bugs to the number of reported bugs that do not exist, and on scalability, the ability of a tool to produce reports in a timely and efficient manner.

In our comparison we measure precision by classifying the results from the tools into the following three categories:

• True Positive: reports that are correct (i.e., existing bugs)

• False Negative: existing bugs that are not discovered

• False Positive: reports that are incorrect (i.e., non-existent bugs reported as bugs)

We also identify and report True Negative results, defined as the difference between the number of potential bugs identified by the tool and the number of real bugs, to better demonstrate the differences between the approaches.
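Given these categories, a natural way to state the precision measure (our formulation, consistent with the categories above; the framework of [7] is described only informally here) is

    \text{precision} = \frac{TP}{TP + FP}

where TP and FP are the numbers of true positive and false positive reports. A tool engineered for a low false positive rate, such as Parfait, aims to keep this value close to 1.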

It is important to enable a fair evaluation, because different tools target specific types of bugs. For instance, Parfait performs a memory-leak analysis that is not implemented in CBMC. To avoid such issues, we limit our comparison to buffer overflows [2], the most common type of memory corruption and security exploit. A buffer overflow occurs when data is copied to a location in memory that exceeds the size of the reserved destination area. Other types of bugs that may be present in the code are not considered.
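For concreteness, a minimal sketch of the class of defect we target (our own example, not drawn from the benchmark repositories):

    #include <string.h>

    /* Illustrative buffer overflow: the source string needs 20 bytes
     * (19 characters plus the terminating NUL) but the destination
     * reserves only 16, so strcpy writes past the end of buf. */
    void greet(void)
    {
        char buf[16];
        strcpy(buf, "Hello, dear world!!");
    }

In CBMC such defects are typically found with array-bounds checking enabled (e.g. running cbmc with --bounds-check, plus --unwind and --unwinding-assertions when loops are involved; exact options depend on the tool version), while Parfait reports them through its buffer overflow analyses.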

For an effective evaluation of the tools we use a benchmarking framework: a repository with known bugs that are documented. The properties violated on each run are then classified into true positive, false positive and false negative detections. In this experimental setup we would expect CBMC to have greater resource usage for verification of the same code.
