Proving Conditional Termination
Byron Cook1, Sumit Gulwani1, Tal Lev-Ami2,*, Andrey Rybalchenko3,**, and Mooly Sagiv2

1 Microsoft Research   2 Tel Aviv University   3 MPI-SWS

Abstract. We describe a method for synthesizing reasonable underapproximations to weakest preconditions for termination—a long-standing open problem. The paper provides experimental evidence to demonstrate the usefulness of the new procedure.

1 Introduction

Termination analysis is critical to the process of ensuring the stability and usability of software systems, as liveness properties such as "Will Decode() always return back to its call sites?" or "Is every call to Acquire() eventually followed by a call to Release()?" can be reduced to a question of program termination [8,22]. Automatic methods for proving such properties are now well studied in the literature, e.g. [1,4,6,9,16]. But what about the cases in which code only terminates for some inputs? What are the preconditions under which the code is still safe to call, and how can we automatically synthesize these conditions? We refer to these questions as the conditional termination problem.

This paper describes a method for proving conditional termination. Our method is based on the discovery of potential ranking functions—functions over program states that are bounded but not necessarily decreasing—and then finding preconditions that promote the potential ranking functions into valid ranking functions. We describe two procedures based on this idea: PreSynth, which finds preconditions to termination, and PreSynthPhase, which extends PreSynth with the ability to identify the phases necessary to prove the termination of phase-transition programs [3].

The challenge in this area is to find the right precondition: the empty precondition is correct but useless, whereas the weakest precondition [13] for even very simple programs can often be expressed only in complex domains not supported by today's tools (e.g. non-linear arithmetic).
In this paper we seek a method that finds useful preconditions. Such preconditions need to be weak enough to allow interesting applications of the code in question, but also expressible in the subset of logic supported by decision procedures, model checking tools, etc. Furthermore, they should be computed quickly (the weakest precondition expressible in the target logic may be too expensive to compute). Since we are not always computing the weakest precondition, in this paper we allow the reader to judge the quality of the preconditions computed by our procedure for a number of examples. Several of these examples are drawn from industrial applications.

* Supported by an Adams Fellowship through the Israel Academy of Sciences and Humanities.
** Supported in part by Microsoft Research through the European Fellowship Programme.

Limitations. In this paper, we limit ourselves to the termination property and to sequential arithmetic programs. Note that, at the cost of complicating the exposition, we could use known techniques (e.g., [2] and [8]) to extend our approach to programs with heap and ω-regular liveness properties. Our technique could also provide assistance when analyzing concurrent programs via [10], although we suspect that synthesizing environment abstractions that guarantee thread-termination is a more important problem for concurrent programs than conditional termination.

Related work. Until now, few papers have directly addressed the problem of automatically underapproximating weakest preconditions. One exception is [14], which yields constraint systems that are non-linear. The constraint-based technique in [5] could also be modified to find preconditions, but again at the cost of non-linear constraints. In contrast to methods for underapproximating weakest preconditions, techniques for weakest liberal preconditions are known (e.g., [7,17]).
Note that weakest preconditions are so rarely considered in the literature that weakest liberal preconditions are often simply called weakest preconditions, e.g., [17].

2 Example

In this section we informally illustrate our method by applying it to several examples. The procedures proposed by this paper are displayed in Figures 1 and 2. They will be more formally described in Section 3.

We have split our method into two procedures for presentational convenience. The first procedure illustrates our method's key ideas, but fails for the class of phase-transition programs. The second procedure extends the first with support for phase-transition programs. Note that phase-change programs and preconditions are interrelated (allowing us to solve the phase-change problem easily with our tool), as a phase-change program can be thought of as several copies of the same loop composed, but with different preconditions.

2.1 Finding preconditions for programs without phase-change

We consider the following code fragment:

// @requires true;
while (x > 0) {
    x = x + y;
}

We assume that the program variables x and y range over integers. The initially given requires-clause is not sufficient to guarantee termination. For example, if x = 1 and y = 0 at the loop entry then the code will not terminate. The weakest precondition for termination of this program is x ≤ 0 ∨ y < 0.

If we apply an existing termination prover, e.g., Terminator [9] or ARMC [21], to this code fragment then it will compute a counterexample to termination. The counterexample consists of 1) a stem η, which allows for manipulating the values before the loop is reached, and 2) a repeatable cycle ρ, which is a relation on program states that represents an arbitrary number of iterations of the loop. To simplify the presentation, we represent the stem η as an initial condition θ on the variables of the loop part. (Section 4 describes this step in more detail.)
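The weakest precondition stated above can also be checked empirically. The following standalone Python sketch (our own sanity check, not part of the paper's procedure) simulates the loop and compares termination against x ≤ 0 ∨ y < 0 on a small grid of integer inputs; the step cap is sound here because when y < 0 the loop decreases x by at least one per iteration, while y ≥ 0 with x > 0 never terminates.

```python
def loop_terminates(x, y, max_steps=1000):
    """Simulate `while (x > 0) x = x + y;` with a step cap."""
    for _ in range(max_steps):
        if x <= 0:
            return True
        x = x + y
    # For this loop, exceeding the cap only happens when y >= 0 and x > 0,
    # in which case x never decreases, so the loop genuinely diverges.
    return False

def claimed_precondition(x, y):
    # Weakest precondition for termination given in the text.
    return x <= 0 or y < 0

# Exhaustive comparison on a small grid of integer inputs.
mismatches = [(x, y)
              for x in range(-10, 11)
              for y in range(-10, 11)
              if loop_terminates(x, y) != claimed_precondition(x, y)]
print(mismatches)  # → []
```

On this grid the simulation agrees with the claimed precondition everywhere, including the x = 1, y = 0 non-terminating case mentioned above.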
In our example, the initial condition θ is true and the transition relation of the loop is defined by

    ρ({x, y}, {x′, y′}) ≡ x > 0 ∧ x′ = x + y ∧ y′ = y .

In order to try and prove this counterexample spurious (i.e. to prove it well-founded, as explained in [9]), we need to find a ranking function f such that ρ(X, X′) ⇒ Rf(X, X′), where Rf is the ranking relation defined by f:

    Rf(X, X′) ≡ f(X) ≥ 0 ∧ f(X′) ≤ f(X) − 1 .

As the termination prover has returned the above relation ρ as a counterexample, we can assume that no linear ranking function f exists (note that there could exist a non-linear ranking function, depending on the completeness of the termination prover).

Due to the absence of a linear ranking function for ρ, we find a potential ranking function, i.e., a function b such that one of the conjuncts defining Rb(X, X′) holds for ρ. We compute a potential ranking function for ρ by finding an expression on the variables {x, y} that is bounded from below. One method for finding such candidate functions is to consider only the domain (and not the range) of ρ, i.e., find functions that are bounded when there is a successor. In other words, consider ∃x′, y′. x > 0 ∧ x′ = x + y ∧ y′ = y. In practice we achieve this via the application of a quantifier elimination procedure, i.e., we have

    QELIM(∃x′, y′. x > 0 ∧ x′ = x + y ∧ y′ = y) ≡ x > 0 .

We can normalize the condition x > 0 as x − 1 ≥ 0, and thus use the function b = x − 1. Because ρ({x, y}, {x′, y′}) ⇒ b({x, y}) ≥ 0, which is the first conjunct required by Rb, we can use b as our potential ranking function.4

4 In this simple example the result was exactly the loop condition. However, when translating the cycle returned from the termination prover to a formula, some of the conditions are not on the initial variables.

Enforcing ranking with a strengthening. The function b = x − 1 that we found only satisfies part of the requirements for proving termination with Rb (i.e., b(X) ≥ 0 but not b(X′) ≤ b(X) − 1). We need a strengthening s({x, y}) such that

    s({x, y}) ∧ ρ({x, y}, {x′, y′}) ⇒ Rb({x, y}, {x′, y′}) .

Since b is bounded, we find s({x, y}) as follows:

    s({x, y}) ≡ QELIM(∀x′, y′. ρ({x, y}, {x′, y′}) ⇒ b({x′, y′}) ≤ b({x, y}) − 1) .

We obtain s({x, y}) = x ≤ 0 ∨ y < 0. That is, if s were an invariant (and usually it is not), then ρ would be provably well-founded using b.

Synthesizing a precondition guaranteeing the strengthening. Recall that the original problem statement is to find a precondition that guarantees termination of the presented code fragment. As the strengthening s guarantees termination, we now need to find a precondition that guarantees the validity of s on every iteration of ρ. The required assertion is the weakest liberal precondition of s wrt. the loop statement. We use known techniques for computing underapproximations of weakest liberal preconditions to find the precondition that ensures that after any number of iterations of the loop s must hold in the next iteration. Using a tool for abstract interpretation of arithmetic programs [15], we obtain r({x, y}) = x ≤ 0 ∨ y < 0. In summary, our procedure has discovered the precondition proposed above.

Note that we can alternate executions of our procedure together with successive applications of a termination prover to find a precondition that is strong enough to prove the termination of the entire program. The interaction between the tools is based on counterexamples for termination, which are discovered by the termination prover and are eliminated by the precondition synthesis procedure.

2.2 Finding preconditions for phase-change programs

Consider the following code fragment:

// @requires true;
while (x > 0) {
    x = x + y;
    y = y + z;
}

Again, the given requires-clause is not sufficient to ensure termination.
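Before synthesizing a precondition for this loop, it is instructive to watch the phase behavior concretely. The short Python simulation below is an illustration only; the specific inputs x = 1, y = 5, z = −1 are our own choice, not from the paper. It traces the loop and shows an execution passing through two phases when z is negative: first y ≥ 0 and x grows, then y < 0 and x shrinks until the guard fails.

```python
def trace(x, y, z, max_steps=100):
    """Record the (x, y) states of `while (x > 0) { x = x + y; y = y + z; }`."""
    states = [(x, y)]
    while x > 0 and len(states) <= max_steps:
        x, y = x + y, y + z   # simultaneous update of both variables
        states.append((x, y))
    return states

states = trace(x=1, y=5, z=-1)
growing = [s for s in states if s[1] >= 0]    # phase 1: y >= 0, x non-decreasing
shrinking = [s for s in states if s[1] < 0]   # phase 2: y < 0, x decreasing
print(len(growing) > 0, len(shrinking) > 0, states[-1][0] <= 0)  # → True True True
```

No single linear expression decreases across both phases, which is why the first procedure, applied alone, fails on programs of this shape.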