Advanced Software Testing and Debugging (CS598) Symbolic Execution
Fall 2020, Lingming Zhang

Brief history
• 1976: A system to generate test data and symbolically execute programs (Lori Clarke)
• 1976: Symbolic execution and program testing (James King)
• 2005-present: practical symbolic execution
  • Using SMT solvers
  • Heuristics to control exponential explosion
  • Heap modeling and reasoning about pointers
  • Environment modeling
  • Dealing with solver limitations

Program execution paths
• A program can be viewed as a binary tree with possibly infinite depth
• Each node represents the execution of a conditional statement
• Each edge represents the execution of a sequence of non-conditional statements
• Each path in the tree represents an equivalence class of inputs
[Figure: execution tree rooted at if(a), whose F/T edges lead to if(b) nodes, and so on]

Example

Code under test:

  void CoverMe(int[] a) {
    if (a == null)
      return;
    if (a.Length > 0)
      if (a[0] == 1234567890)
        throw new Exception("bug");
  }

[Figure: execution tree branching on a==null, a.Length>0, and a[0]==1234567890, with leaf path conditions: a==null; a!=null && a.Length<=0; a!=null && a.Length>0 && a[0]!=1234567890; a!=null && a.Length>0 && a[0]==1234567890]

Random testing?
• Random testing
  • Generate random inputs
  • Execute the program on those (concrete) inputs
• Problem: the probability of reaching an error can be astronomically small
  • For CoverMe, the probability of taking the error branch with a random input is 1/2^32 ≈ 0.000000023%

The spectrum of program testing/verification
[Figure: confidence vs. cost (programmer effort, time, expertise), ranging from random/fuzz testing, through concolic testing & whitebox fuzzing and bounded verification & symbolic execution, up to full verification]

This class
• KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs (OSDI'08)
• Hybrid Concolic Testing (ICSE'07)

Symbolic execution
• Use symbolic values for inputs
• Execute the program symbolically on the symbolic input values
• Collect symbolic path constraints (PCs)
• Use SMT/SAT solvers to check whether a branch can be taken
[Figure: the symbolic execution engine collects path constraints from the code under test, a constraint solver produces solutions, and test generation turns them into high-quality tests]

Code under test:

  int foo(int i) {
    int j = 2 * i;
    i++;
    i = i * j;
    if (i < 1)
      i = -i;
    return i;
  }

Symbolic execution: example

  Statement      Concrete execution (i = 1)   Symbolic execution (i = I0)
  int j = 2*i;   i=1, j=2                     PC=true, i=I0, j=2*I0
  i++;           i=2                          PC=true, i=I0+1, j=2*I0
  i = i*j;       i=4                          PC=true, i=(I0+1)*2*I0, j=2*I0
  if (i < 1)     4<1 is false                 fork into two paths
    i = -i;      (not executed)               T: PC=(I0+1)*2*I0<1, i=-(I0+1)*2*I0
  return i;      returns 4                    T: PC=(I0+1)*2*I0<1, return -(I0+1)*2*I0
                                              F: PC=(I0+1)*2*I0>=1, return (I0+1)*2*I0

• Generated test 1 (true branch): i = I0 = 0
• Generated test 2 (false branch): i = I0 = 1

Symbolic execution: bug finding
• How can symbolic execution be extended to catch non-crash bugs?
  • Add dedicated checkers at dangerous code locations!
• Divide-by-zero example: for y = x / z, where x and z are symbolic variables and the current PC is p, check whether z == 0 && p is satisfiable

Code under test (foo from above, with a division added):

  int foo(int i) {
    int j = 2 * i;
    i++;
    i = i * j;
    if (i < 1)
      i = -i;
    i = j / i;   // can i be 0 here?
    return i;
  }

• True branch: PC = (I0+1)*2*I0 < 1 and i = -(I0+1)*2*I0; the query (I0+1)*2*I0 < 1 ∧ -(I0+1)*2*I0 = 0 is satisfiable (i = I0 = 0 or I0 = -1), so the bug is triggered
• False branch: PC = (I0+1)*2*I0 >= 1 and i = (I0+1)*2*I0; the query (I0+1)*2*I0 >= 1 ∧ (I0+1)*2*I0 = 0 is UNSAT, so this path is always safe
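To make the check concrete, here is a minimal sketch (not from the slides) of how a divide-by-zero checker can pose these two queries to an SMT solver, using the Z3 Python bindings (z3-solver); the variable names are illustrative:

  # Minimal sketch of the divide-by-zero check on foo's two paths.
  from z3 import Int, Solver, sat

  I0 = Int('I0')
  val = (I0 + 1) * 2 * I0          # symbolic value of i after i = i * j

  # True branch: path constraint is val < 1; the divisor at i = j / i is -val.
  s = Solver()
  s.add(val < 1, -val == 0)
  if s.check() == sat:
      print('division by zero reachable, e.g. I0 =', s.model()[I0])  # 0 or -1

  # False branch: path constraint is val >= 1; the divisor is val itself.
  s = Solver()
  s.add(val >= 1, val == 0)
  print(s.check())                 # unsat: the division is always safe here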
• We can easily generate a dedicated checker for each kind of bug (e.g., buffer overflow, integer overflow, …)

Challenges: path explosion
• KLEE interleaves two search heuristics:
  • Random Path Selection: when a branch point is reached, the set of states in each subtree has equal probability of being selected
  • Coverage-Optimized Search: selects states likely to cover new code in the immediate future, based on
    • The minimum distance to an uncovered instruction
    • The call stack of the state
    • Whether the state recently covered new code

Challenges: optimizing SMT queries
• Expression rewriting
  • Simple arithmetic simplifications (x * 0 = 0)
  • Strength reduction (x * 2^n = x << n)
  • Linear simplification (2 * x - x = x)
• Constraint set simplification
  • x < 10 && x = 5  -->  x = 5
• Implied value concretization
  • x + 1 = 10  -->  x = 9
• Constraint independence
  • i < j && j < 20 && k > 0 && i = 20  -->  i < j && j < 20 && i = 20 (k > 0 is independent of the rest)

Challenges: optimizing SMT queries (cont.)
• Counter-example cache, currently holding entries for:
  • {i < 10, i = 10} (no solution)
  • {i < 10, j = 8} (satisfiable, with variable assignments i → 5, j → 8)
• Superset of unsatisfiable constraints
  • {i < 10, i = 10, j = 12} (unsatisfiable)
• Subset of satisfiable constraints
  • {i < 10} (satisfiable with i → 5, j → 8)
• Superset of satisfiable constraints
  • The same variable assignments might work

Table 1: Performance comparison of KLEE's solver optimizations on COREUTILS. Each tool is run for 5 minutes without optimization, and rerun on the same workload with the given optimizations. The results are averaged across all applications.

  Optimizations   Queries   Time (s)   STP Time (s)
  None            13717     300        281
  Cex. Cache       8174     177        156
  Independence    13717     166        148
  All               699      20         10

Excerpt from the KLEE paper (OSDI'08) on the counter-example cache:

1. When a subset of a constraint set has no solution, then neither does the original constraint set. Adding constraints to an unsatisfiable constraint set cannot make it satisfiable. For example, given the cache above, {i < 10, i = 10, j = 12} is quickly determined to be unsatisfiable.
2. When a superset of a constraint set has a solution, that solution also satisfies the original constraint set. Dropping constraints from a constraint set does not invalidate a solution to that set. The assignment i → 5, j → 8, for example, satisfies either i < 10 or j = 8 individually.
3. When a subset of a constraint set has a solution, it is likely that this is also a solution for the original set. This is because the extra constraints often do not invalidate the solution to the subset. Because checking a potential solution is cheap, KLEE tries substituting in all solutions for subsets of the constraint set and returns a satisfying solution, if found. For example, the constraint set {i < 10, j = 8, i != 3} can still be satisfied by i → 5, j → 8.
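As a toy illustration of these three rules (a sketch only, not KLEE's implementation: the cache keys, helper names, and linear scan below are purely illustrative), the following Python/Z3 snippet answers two of the example queries from the cached entries, falling back to the solver only on a miss:

  # Toy counter-example cache keyed by frozensets of constraint names; the
  # value is either None (proved unsatisfiable) or a cached satisfying
  # assignment. Real engines use efficient subset/superset lookup structures.
  from z3 import Int, IntVal, Solver, is_true, simplify, substitute

  i, j = Int('i'), Int('j')

  cache = {
      frozenset({'i<10', 'i==10'}): None,                # no solution
      frozenset({'i<10', 'j==8'}): {'i': 5, 'j': 8},     # i -> 5, j -> 8
  }

  def check_with_cache(query):
      """query: dict mapping a constraint name to its Z3 expression."""
      names = frozenset(query)
      for entry, solution in cache.items():
          # Rule 1: a query containing an unsatisfiable subset is unsatisfiable.
          if solution is None and entry <= names:
              return 'unsat (cached subset has no solution)'
          # Rules 2 and 3: try a cached assignment; verifying it against the
          # query by substitution is much cheaper than a fresh solver call.
          if solution is not None:
              binding = [(Int(n), IntVal(v)) for n, v in solution.items()]
              if all(is_true(simplify(substitute(c, binding)))
                     for c in query.values()):
                  return f'sat (reused cached assignment {solution})'
      s = Solver()                      # cache miss: ask the solver
      s.add(*query.values())
      return str(s.check())

  print(check_with_cache({'i<10': i < 10, 'i==10': i == 10, 'j==12': j == 12}))
  print(check_with_cache({'i<10': i < 10, 'j==8': j == 8, 'i!=3': i != 3}))

The first query is answered unsat via rule 1, and the second is answered sat by reusing i → 5, j → 8, without invoking the solver at all.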
Excerpt from the KLEE paper (OSDI'08), continued:

To demonstrate the effectiveness of these optimizations, we performed an experiment where COREUTILS applications were run for 5 minutes with both of these optimizations turned off. We then deterministically reran the exact same workload with constraint independence and the counter-example cache enabled separately and together for the same number of instructions. This experiment was done on a large sample of COREUTILS utilities. The results in Table 1 show the averaged results.

As expected, the independence optimization by itself does not eliminate any queries, but the simplifications it performs reduce the overall running time by almost half (45%). The counter-example cache reduces both the running time and the number of STP queries by 40%. However, the real win comes when both optimizations are enabled; in this case the hit rate for the counter-example cache greatly increases due to the queries first being simplified via independence. For the sample runs, the average number of STP queries is reduced to 5% of the original number and the average runtime decreases by more than an order of magnitude.

It is also worth noting the degree to which STP time (time spent solving queries) dominates runtime. For the original runs, STP accounts for 92% of overall execution time on average (the combined optimizations reduce this by almost 300%). With both optimizations enabled this percentage drops to 41%, which is typical for applications we have tested. Finally, Figure 2 shows that the efficacy of KLEE's optimizations increases with time: as the counter-example cache is filled and query sizes increase, the speed-up from the optimizations also increases.

[Figure 2: the effect of KLEE's solver optimizations over time (average time in seconds vs. number of executed instructions, normalized, for None / Cex. Cache / Independence / All); the optimizations become more effective as the caches fill and queries become more complicated.]

3.4 State scheduling

KLEE selects the state to run at each instruction by interleaving the following two search heuristics.

Random Path Selection maintains a binary tree recording the program path followed for all active states, i.e. the leaves of the tree are the current states and the internal nodes are places where execution forked. States are selected by traversing this tree from the root and randomly selecting the path to follow at branch points. Therefore, when a branch point is reached, the set of states in each subtree has equal probability of being selected, regardless of the size of their subtrees. This strategy has two important properties. First, it favors states high in the branch tree. These states have fewer constraints on their symbolic inputs and so have greater freedom to reach uncovered code. Second, and most importantly, this strategy avoids starvation when some part of the program is rapidly creating new states ("fork bombing"), as happens when a tight loop contains a symbolic condition. Note that simply selecting a state at random has neither property.

Challenges: environment modeling
• If all arguments are concrete, forward the call to the OS directly:

    int fd = open("t.txt", O_RDONLY);

• Otherwise (e.g., a symbolic file name), redirect the call to a model of the environment:

    int fd = open(sym_str, O_RDONLY);

• Example: modeled read() over symbolic files (snippet):

    ssize_t read(int fd, void *buf, size_t count) {
      …
      struct klee_fd *f = &fds[fd];
      …
      /* sym files are fixed size: don't read beyond the end.
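The read() snippet above is only a fragment. As a rough sketch of the dispatch idea it describes (concrete calls forwarded to the real OS, symbolic files modeled with fixed-size symbolic contents), here is a toy Python version using Z3 bit-vectors; the names SymFile and model_read are hypothetical, not KLEE's API:

  # Toy environment-model sketch: concrete descriptors go to the OS, symbolic
  # files return fixed-size symbolic bytes so reads never run past the end.
  import os
  import tempfile
  from z3 import BitVec

  class SymFile:
      """A file whose contents are symbolic but whose size is fixed and known."""
      def __init__(self, name, size=8):
          self.data = [BitVec(f'{name}_byte{k}', 8) for k in range(size)]
          self.offset = 0

  def model_read(fd, count):
      if isinstance(fd, int):                   # concrete descriptor:
          return os.read(fd, count)             # forward to the OS directly
      avail = max(0, len(fd.data) - fd.offset)  # symbolic file: fixed size,
      n = min(count, avail)                     # don't read beyond the end
      chunk = fd.data[fd.offset:fd.offset + n]
      fd.offset += n
      return chunk

  # Concrete call goes to the OS; symbolic call yields Z3 byte variables.
  with tempfile.NamedTemporaryFile(delete=False) as tmp:
      tmp.write(b'hello world')
  fd = os.open(tmp.name, os.O_RDONLY)
  print(model_read(fd, 5))                  # b'hello'
  os.close(fd)
  os.unlink(tmp.name)
  print(model_read(SymFile('t.txt'), 5))    # [t.txt_byte0, ..., t.txt_byte4]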