From: AAAI Technical Report FS-01-04. Compilation copyright © 2001, AAAI (www.aaai.org). All rights reserved.

Local Search and Backtracking vs Non-Systematic Backtracking

Steven Prestwich
Department of Computer Science, University College Cork, Ireland
s.prestwich@cs.ucc.ie

Abstract

This paper addresses the following question: what is the essential difference between stochastic local search (LS) and systematic backtracking (BT) that gives LS superior scalability? One possibility is LS's lack of firm commitment to any variable assignment. Three BT algorithms are modified to have this feature by introducing randomness into the choice of backtracking variable: a forward checker for n-queens, the DSATUR graph colouring algorithm, and a Davis-Logemann-Loveland procedure for satisfiability. In each case the modified algorithm scales like LS and sometimes gives better results. It is argued that randomised backtracking is a form of local search.

Introduction

A variety of algorithms have been devised to solve constraint satisfaction and optimisation problems, two contrasting approaches being systematic backtracking and stochastic local search. Backtrackers are complete and can therefore prove unsolvability or find all solutions. They can also prove the optimality of a solution to an optimisation problem by failing to find a better solution. They often use techniques such as constraint propagation, value and variable ordering heuristics, branch-and-bound and intelligent backtracking. Backtrackers are sometimes called constructive because they construct solutions from partial consistent assignments of domain values to variables. In contrast, local search algorithms are typically non-constructive, searching a space of total but inconsistent assignments. They attempt to minimise constraint violations, and when all violations have been removed a solution has been found.

Neither backtracking nor local search is seen as adequate for all problems. Real-world problems may have a great deal of structure and are often solved most effectively by backtrackers, which are able to exploit structure via constraint propagation. Unfortunately, backtrackers do not always scale well to very large problems, even when augmented with powerful constraint propagation and intelligent backtracking. For large problems it is often more efficient to use local search, which tends to have superior scalability. However, non-constructive local search cannot fully exploit structure, and is quite unsuitable for certain problems. This situation has motivated research into hybrid approaches, with the aim of combining features of both classes of algorithm in order to solve currently intractable problems.

What is the essential difference between local search and backtracking, that sometimes enables local search to scale far better than backtracking? Answering this question may lead to interesting new hybrids. The issue was debated in (Freuder et al. 1995) and some suggestions were: local search's locality, randomness, incompleteness, different search space, and ability to follow gradients. We propose that the essential difference is backtrackers' strong commitment to early variable assignments, whereas local search can in principle (given sufficient noise) reassign any variable at any point during search. Though this seems rather an obvious point, it has led to a new class of algorithms that sometimes out-perform both backtracking and local search.

Non-systematic backtracking

The new approach has already been described in (Prestwich 2000a; Prestwich 2000b; Prestwich 2001a; Prestwich 2001b) but is summarised here. It begins like a standard backtracker by selecting a variable using a heuristic, assigning a value to it, optionally performing constraint propagation, then selecting another variable; on reaching a dead-end (a variable that cannot be assigned any value without constraint violation or, where constraint propagation is used, domain wipe-out) it backtracks and then resumes variable selection. The novel feature is the choice of backtracking variable: the variables selected for unassignment are chosen randomly, or using a heuristic with random tie-breaking. As in Dynamic Backtracking (Ginsberg 1993) only the selected variables are unassigned, without undoing later assignments. Because of this resemblance we call the approach Incomplete Dynamic Backtracking (IDB). The search is no longer complete, so we must sometimes force the unassignment of more than one variable. We do this by adding an integer parameter B >= 1 to the algorithm and unassigning B variables at each dead-end.

A complication arises when we wish to combine IDB with forward checking. Suppose we have reached a dead-end after assigning variables v1 ... vk, and vk+1 ... vn remain unassigned. We would like to unassign some arbitrary variable vu, where 1 <= u <= k, leaving the domains in the state they would have been in had we assigned only v1 ... vu-1, vu+1 ... vk.
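To make the scheme concrete, the IDB loop just described can be sketched for a generic binary CSP as follows. This is a hypothetical illustration, not the author's implementation: the problem representation (a list of variables, their domains, and a pairwise `conflict` test) and the purely random variable and value choices are assumptions made for the example.

```python
import random

def idb(variables, domains, conflict, B=1, max_steps=100000):
    """Incomplete Dynamic Backtracking (sketch).
    conflict(v, x, w, y) -> True iff v=x and w=y violate a constraint."""
    assignment = {}  # consistent partial assignment: variable -> value
    for _ in range(max_steps):
        if len(assignment) == len(variables):
            return assignment  # every variable assigned: solution found
        # select an unassigned variable (here at random; a heuristic
        # would normally be used)
        v = random.choice([w for w in variables if w not in assignment])
        # values consistent with the current partial assignment
        values = [x for x in domains[v]
                  if all(not conflict(v, x, w, y)
                         for w, y in assignment.items())]
        if values:
            assignment[v] = random.choice(values)
        else:
            # dead-end: unassign B randomly chosen assigned variables,
            # without undoing later assignments (the IDB feature)
            for w in random.sample(list(assignment),
                                   min(B, len(assignment))):
                del assignment[w]
    return None  # give up: IDB is incomplete
```

As a usage example, `idb` solves small n-queens instances (rows as variables, columns as values) with the attack relation as the conflict test.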

How can we do this efficiently? One way to characterise forward checking is as follows: a value x is in the domain of a currently unassigned variable v if and only if adding the assignment v = x would not cause a constraint violation. In backtracking algorithms this is often used to update the domains of unassigned variables as assignments are added and removed. We generalise this idea by associating with each value x in the domain of each variable v a conflict count Cv,x. The value of Cv,x is the number of constraints that would be violated if the assignment v = x were added. If Cv,x = 0 then x is currently in the domain of v. We make a further generalisation by maintaining the conflict counts of all variables, assigned and unassigned. The meaning of a conflict count for a currently assigned variable is: Cv,x is the number of constraints that would be violated if the variable v were reassigned to x. Now on assigning or unassigning a variable, we incrementally update conflict counts in all other variable domains. This is clearly more expensive than standard forward checking, which only updates the domains of unassigned variables. But the state of each variable domain is now independent of the order in which assignments were made, and we can unassign variables arbitrarily. The scheme can also be extended to non-binary constraints (see below).

A value ordering heuristic has been found useful in several applications. It reassigns each variable to its previous assigned value where possible, otherwise to a different value. This speeds up the rediscovery of consistent partial assignments. However, to avoid stagnation it attempts to use a different (randomly-chosen) value for just one variable after each dead-end.

Experiments

In this section IDB is applied to three problems: n-queens, graph colouring and satisfiability. The experiments are performed on a 300MHz DEC Alphaserver 1000A 5/300 under Unix. For readers wishing to normalise our execution times to other platforms, the DIMACS (Johnson and Trick 1996) benchmark program dfmax r500.5 takes 46.2 seconds on the Alphaserver.

The n-queens problem

We first evaluate IDB on the n-queens problem. Though fashionable several years ago, n-queens is no longer considered a challenging problem. However, large instances still defeat most backtrackers and it is therefore of interest. Consider a generalised chess board, which is a square divided into n x n smaller squares. Place n queens on it in such a way that no queen attacks any other. A queen attacks another if it is on the same row, column or diagonal (in which case both attack each other). We can model this problem using n variables each with domain Di = {1, ..., n}. A variable vi corresponds to a queen on row i (there is one queen per row), and the assignment vi = j denotes that the queen on row i is placed in column j, where j is in Di. The constraints are vi != vj and |vi - vj| != |i - j| where 1 <= i < j <= n. We must assign a domain value to each variable without violating any constraint.

(Minton et al. 1992) compared the performance of backtracking and local search on n-queens problems up to n = 10^6. They executed each algorithm 100 times for various values of n, with an upper bound of 100n on the number of steps (backtracks or repairs), and reported the mean number of steps and the success rate as a percentage. We reproduce the experiment up to n = 1000, citing their results for the Min-Conflicts hill climbing algorithm (denoted here by MCHC) and a backtracker augmented with the Min-Conflicts heuristic (denoted by MCBT). We compute results for depth-first search (DFS1), DFS1 with forward checking (DFS2), and DFS2 with dynamic variable ordering based on minimum domain size (DFS3). In DFS1 and DFS2 the variable to be assigned is selected randomly. (Our results differ from those of Minton et al., possibly because of algorithmic details such as random tie-breaking during variable ordering.) We also obtain results for these three algorithms with DFS replaced by IDB (denoted by IDB1, IDB2, IDB3); also for IDB3 plus a backtracking heuristic that unassigns variables with maximum domain size¹ (IDB4), and for IDB4 plus the value ordering heuristic described above (IDB5). The IDB parameter is set to B = 1 for n = 1000 and n = 100, and to B = 2 for n = 10 (B = 1 sometimes causes stagnation when n = 10).

            n = 10          n = 100         n = 1000
  alg     steps  succ     steps  succ     steps  succ
  DFS1     81.0  100%      9929     —        —     —
  DFS2     25.4  100%      7128   39%     98097    3%
  DFS3     14.7  100%      1268   92%     77060   24%
  IDB1      112  100%       711  100%      1213  100%
  IDB2     33.0  100%       141  100%       211  100%
  IDB3     23.8  100%      46.3  100%      41.2  100%
  IDB4     13.0  100%       8.7  100%      13.3  100%
  IDB5     12.7  100%       8.0  100%      12.3  100%
  MCHC     57.0  100%      55.6  100%      48.8  100%
  MCBT     46.8  100%      25.0  100%      30.7  100%

Figure 1: Results on n-queens

The results in Figure 1 show that replacing DFS by IDB greatly boosts scalability in three cases: the simple backtracking algorithm, backtracking with forward checking, and the same algorithm with dynamic variable ordering. Even the basic IDB algorithm scales much better than all the DFS algorithms, and IDB3 performs like MCHC. The additional backtracking and value ordering heuristics further boost performance, making IDB the best reported algorithm in terms of backtracks. It also compares well with another hybrid: Weak Commitment Search (Yokoo 1994) requires approximately 35 steps for large n (Pothos and Richards 1995). In terms of execution time IDB is also efficient, each backtrack taking a time linear in the problem size.

It should be noted that IDB is not the only backtracker to perform like local search on n-queens. Similar results were obtained by Minton et al.'s MCBT algorithm (see Figure 1) and others.

¹ Recall that domain information is also maintained for assigned variables.
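As an illustration of the conflict-count bookkeeping described in this section, here is a sketch instantiated for the n-queens model. The class name and interface are invented for the example, but the incremental update on assignment and unassignment follows the description in the text, and it makes the domain state independent of assignment order.

```python
class QueensConflictCounts:
    """Conflict counts C[v][x] for n-queens: the number of currently
    assigned queens that the assignment v = x (row v, column x) would
    conflict with."""

    def __init__(self, n):
        self.n = n
        self.assignment = {}                  # row -> column
        self.C = [[0] * n for _ in range(n)]  # C[row][column]

    def attacks(self, v, x, w, y):
        # same column or same diagonal (rows v and w always differ here)
        return x == y or abs(x - y) == abs(v - w)

    def _update(self, v, x, delta):
        # incrementally adjust counts in every *other* row's domain
        for w in range(self.n):
            if w == v:
                continue
            for y in range(self.n):
                if self.attacks(v, x, w, y):
                    self.C[w][y] += delta

    def assign(self, v, x):
        self.assignment[v] = x
        self._update(v, x, +1)

    def unassign(self, v):
        x = self.assignment.pop(v)
        self._update(v, x, -1)

    def domain(self, v):
        # x is in the (forward-checked) domain of v iff C[v][x] == 0
        return [x for x in range(self.n) if self.C[v][x] == 0]
```

Because the updates are additive, unassigning an arbitrarily chosen queen leaves exactly the counts that would have resulted from never assigning it, which is the property IDB needs.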

These algorithms rely on good value ordering heuristics. In MCBT an initial total assignment I is generated by the Min-Conflicts heuristic and used to guide DFS in two ways. Firstly, variables are selected for assignment on the basis of how many violations they cause in I. Secondly, values are tried in ascending order of number of violations with currently unassigned variables. This informed backtracking algorithm performs almost identically to MCHC on n-queens and is complete. However, MCBT is still prone to the same drawback as most backtrackers: a poor choice of assignment high in the search tree will still take a very long time to recover from. IDB is able to modify earlier choices, as long as the B parameter is sufficiently high, so it can recover from poor early decisions. This difference is not apparent on n-queens, but it will be significant on problems for which value ordering heuristics are of little help.

Graph colouring

Graph colouring is a combinatorial optimisation problem with real-world applications such as timetabling, scheduling, frequency assignment, computer register allocation, printed circuit board testing and pattern matching. A graph G = (V, E) consists of a set V of vertices and a set E of edges between vertices. Two vertices connected by an edge are said to be adjacent. The aim is to assign a colour to each vertex in such a way that no two adjacent vertices have the same colour, using as few colours as possible.

We apply an algorithm similar to IDB5 (see previous section), again with forward checking implemented via conflict counts. However, the largest-domain backtracking heuristic used for n-queens causes the search to stagnate on some colouring problems, so we use a modified heuristic: randomly select a variable with domain size greater than one, if possible; otherwise select randomly. We also use a different variable ordering: the Brélaz heuristic (Brélaz 1979). This selects an unassigned variable with minimum domain size, breaking ties by considering the degree in the uncoloured part of the graph, and breaking further ties randomly. The Brélaz heuristic is used in the well-known DSATUR colouring algorithm (as well as in other constraint applications) and our IDB algorithm is a modified DSATUR. Again like DSATUR, on finding each colouring our algorithm restarts with one less available colour, and reuses the previous assignments as far as possible.

3-colourable random graphs are often used as colouring benchmarks. (Pothos and Richards 1995) compared the performance of MCHC with that of Weak Commitment Search (WCS) on these graphs, with 60 vertices and average degree between 3 and 7. Using a maximum of 2000 steps both algorithms had a lower success rate around the phase transition near degree 5: WCS (without Brélaz heuristics or forward checking) dipped to about 80% while MCHC dipped to less than 5%. A modified MCHC allowing uphill moves performed much like WCS. We generate such graphs in the same way and apply IDB to them under the same conditions: over 1000 runs with tuned values of the parameter B (increasing from 1 to 9 near the phase transition) its success rate dips to 98.5% at degree 4.67. Hence IDB beats MCHC on at least some colouring problems.

Minton et al. found that DSATUR out-performed MCHC on these graphs, and (Jónsson and Ginsberg 1993) found that depth-first search and Dynamic Backtracking both beat MCHC and the GSAT local search algorithm on similar graphs. To compare IDB with DSATUR we therefore need harder benchmarks. We use selected DIMACS graphs and compare IDB with M. Trick's DSATUR implementation², which also performs some pre-processing of graphs by finding a large clique, an enhancement IDB does not have. We use two types of graph: random and geometric. Random graphs have little structure for constraint algorithms to exploit, while geometric graphs tend to have large cliques. Even without the clique enhancement DSATUR works quite well on geometric graphs (Culberson, Beacham, and Papp 1995), so these are a challenging test for IDB.

                 DSATUR           IDB
  graph         col    sec     col    sec    B
  R125.1          5    0.0       5   0.01    1
  R250.1          8    0.0       8    0.1    1
  DSJR500.1      12    0.2      12    0.4    1
  R1000.1        20    1.6      20   1.81    1
  R125.5         36   63.4      36   0.06    1
  R250.5         66    2.9      65   0.33    2
  DSJR500.5     130   17.6     122   1.99    7
  R1000.5       246   75.8     235   8.61    6
  DSJC125.5      20    0.7      18   16.4    1
  DSJC250.5      35    198      32   81.3    1
  DSJC500.5      63    5.8      59   4.41    1
  DSJC1000.5    114    5.0     107   13.2    1

Figure 2: Results on selected DIMACS benchmarks

Figure 2 shows the results. IDB results are means over 10 runs and start with the number of colours shown plus 10. DSATUR's results are obtained from a single run because its heuristics are deterministic. We shall also mention results for five other algorithms in (Joslin and Clements 1999): Squeaky Wheel Optimization, TABU search (both general-purpose combinatorial optimisation algorithms), Iterative Greedy, distributed IMPASSE and parallel IMPASSE (all pure colouring algorithms).

On the sparse geometric graphs ({R,DSJR}*.1) IDB and DSATUR are both very fast, often requiring no backtracks at all. The other five algorithms also find these problems easy. On the denser geometric graphs ({R,DSJR}*.5) IDB consistently finds the same or better colourings in shorter times than DSATUR or any of the other algorithms. On two of the graphs IDB finds a better colouring than any of the other algorithms: the previous best for DSJR500.5 was distributed IMPASSE with 123 in 94.1 seconds, and the best for R1000.5 was Squeaky Wheel Optimization with 239 in 309 seconds (times normalised to our platform).

On the random graphs (DSJC*.5) IDB also finds consistently better colourings than DSATUR. Though DSATUR's reported times are sometimes shorter, it was given 5 minutes to find a better colouring but did not; it tended to find a reasonable colouring quickly, then made little further progress.

² http://mat.gsia.cmu.edu/COLOR/color.html
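The Brélaz selection rule described above can be sketched as follows. This is an illustrative reconstruction (the function name and data structures are assumptions), with remaining domain size standing in for colour saturation, as in the conflict-count representation:

```python
import random

def brelaz_select(vertices, adjacency, colouring, domains):
    """Brélaz heuristic: among uncoloured vertices, pick one with
    minimum remaining domain (i.e. maximum colour saturation),
    breaking ties by degree in the uncoloured part of the graph,
    and further ties randomly."""
    uncoloured = [v for v in vertices if v not in colouring]

    def key(v):
        uncoloured_degree = sum(1 for w in adjacency[v]
                                if w not in colouring)
        # smaller domain first, larger uncoloured degree first,
        # random tie-breaking last
        return (len(domains[v]), -uncoloured_degree, random.random())

    return min(uncoloured, key=key)
```

In the paper's algorithm the `domains` would be derived from the conflict counts (a colour is in the domain while its count is zero); here they are passed in explicitly to keep the sketch self-contained.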

However, IDB is not the best algorithm on the random graphs, beating only DSATUR and TABU and performing roughly like Iterative Greedy.

Satisfiability

The satisfiability (SAT) problem is of both practical and theoretical importance. It was the first problem shown to be NP-complete, has been the subject of a great deal of recent research and has well-known benchmarks and algorithms, making it ideal for evaluating new approaches. The SAT problem is to determine whether a Boolean expression has a satisfying set of truth assignments. The problems are usually expressed in conjunctive normal form: a conjunction of clauses C1 and ... and Cm where each clause C is a disjunction of literals l1 or ... or ln, and each literal l is either a Boolean variable v or its negation ~v.

Many systematic SAT and other constraint algorithms are based on backtracking. An early example is the Davis-Putnam procedure in Loveland's form (Davis, Logemann, and Loveland 1962), using depth-first search with inference rules such as unit propagation. Modern systematic SAT solvers are still based on this scheme and include TABLEAU (Crawford and Auton 1996), RELSAT (Bayardo and Schrag 1997), POSIT (Freeman 1994), SATO (Zhang 1997), GRASP (Silva and Sakallah 1996) and SATZ (Li and Anbulagan 1997). The main difference between these solvers is their variable ordering heuristics; SATZ also has a preprocessing phase and RELSAT uses look-back techniques. There are also several local search SAT algorithms. Early examples are GSAT (Selman, Levesque, and Mitchell 1992) and Gu's algorithms (Gu 1992). GSAT has since been enhanced in various ways (Selman, Kautz, and Cohen 1994), giving several WSAT variants. Other local search SAT algorithms include DLM (Wu and Wah 2000) and GRASP (Resende and Feo 1996). DLM is based on Lagrange multipliers, while GRASP is a greedy, randomised, adaptive algorithm. (There is no relation between the GRASP backtracking and local search algorithms.)

IDB has already been applied to SAT by modifying a simple Davis-Logemann-Loveland procedure. The use of conflict counts is easily adapted to unit resolution in SAT: the value of Cv,x denotes in how many clauses (i) all other literals are false, and (ii) the literal containing v has sign ~x. We now describe new heuristics which improve performance on some problems. The new variable ordering heuristic randomly selects a variable with domain size 1; if there are none then it selects the variable that was unassignable at the last dead-end; when that has been used, it selects the most recently unassigned variable. For backtracking the variable with lowest conflict counts is selected, breaking ties randomly. A probabilistic value ordering rule is used as follows. Each variable records its last assigned value (initially set to a random truth value). For each assignment, with probability 1/u the negated value is preferred, and with probability 1 - 1/u the value itself, where u is the number of currently unassigned variables. Finally, the integer parameter B is replaced by a real parameter p (0 < p < 1). At each dead-end one variable is unassigned as usual, and further variables are iteratively unassigned with probability p.

In (Prestwich 2000b) IDB was shown to scale almost identically to WSAT on hard random 3-SAT problems. In (Prestwich 2000a) it was shown to efficiently solve three classes of problem considered hard for local search (though DLM has recently performed efficiently on these problems (Wu and Wah 2000)): all-interval series, parity learning and some AIM problems; it also beat backtracking, local search and some hybrids on SAT-encoded geometric graph colouring problems. Figure 3 shows the performance of the new SAT variant on benchmarks from three sources: the 2nd DIMACS Implementation Challenge³, the SATLIB problem repository⁴, and the International Competition and Symposium on Satisfiability Testing in Beijing⁵. It has similar performance to the previous implementation on most of the problems already tested, but is several times faster on these problems.

  problem        time      p
  f600           4.35   0.01
  f1000          84.1   0.01
  f2000           686   0.01
  ssa-7552-*     0.31   0.998
  ii16*1        0.305   0.1
  ii32*3         1.17   0.1
  2bitadd_11    0.026   0.998
  2bitadd_12    0.028   0.998
  2bitcomp_5    0.002   0.998
  3bitadd_31     11.0   0.998
  3bitadd_32      8.6   0.998
  bw_large.a    0.425   0.9
  bw_large.b     1.75   0.95
  bw_large.c     48.9   0.98
  bw_large.d      295   0.997
  hanoi4         4156   0.98

Figure 3: IDB on selected SAT benchmarks

Random 3-SAT is a standard benchmark for SAT algorithms, the hard problems being found near the phase transition. Backtracking can efficiently solve such problems only up to a few hundred variables (Crawford and Auton 1996) and we were unable to solve f600 with SATZ, but local search algorithms can solve much larger problems (Selman, Kautz, and Cohen 1994; Wu and Wah 2000). The results, which are means over 20 runs, verify that IDB has the scalability of local search, though WSAT is faster and the latest DLM implementation can solve f2000 in 19.2 seconds on a 400MHz Pentium-II processor.

Circuit fault analysis SAT problems are harder for backtracking than for local search. The IDB results are means over 20 runs for all four DIMACS problems denoted by ssa-7552-*. It has similar performance to WSAT, is slower than DLM, and faster than the GRASP local search algorithm and four backtrackers (SATZ, GRASP, POSIT and RELSAT), ranking joint second of eight algorithms.

³ ftp://dimacs.rutgers.edu/pub/challenge/
⁴ http://aida.intellektik.informatik.th-darmstadt.de/~hoos/SATLIB
⁵ http://www.cirl.uoregon.edu/crawford/beijing
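The p-parameterised backtracking step described above (one forced unassignment, then further unassignments each with probability p) might be sketched as follows. The random choice of victim here is a simplification made for the example; the text selects the variable with lowest conflict counts, with random tie-breaking:

```python
import random

def unassign_at_dead_end(assigned, p, rng=random):
    """At a dead-end, unassign one variable, then keep unassigning
    further variables with probability p each (0 < p < 1).
    `assigned` is a set of currently assigned variables, mutated in
    place; returns the list of unassigned variables."""
    removed = []
    while assigned:
        # simplified: random victim instead of lowest conflict counts
        v = rng.choice(sorted(assigned))
        assigned.remove(v)
        removed.append(v)
        if rng.random() >= p:  # stop with probability 1 - p
            break
    return removed
```

With p near 0 this behaves like B = 1; larger p unassigns longer runs of variables, which is how the table's per-benchmark p values tune the strength of backtracking.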

Circuit synthesis SAT problems are also hard for backtrackers. The IDB results for these problems are means over 100 runs. On the benchmarks ii16*1 and ii32*1 it is slower than DLM and GSAT with random walk but faster than the GRASP local search algorithm, performing like a reasonably efficient local search algorithm. On the benchmarks 2bitadd_11 etc., IDB times are means over 20 runs. It has similar performance to that of GSAT with random walk and is many times faster than the backtrackers SATZ, GRASP, POSIT and RELSAT on the 3bitadd problems. However, WSAT is several times faster again.

Blocks World yields some of the hardest SAT benchmarks (bw_large.*), on which there is no clear dominance between backtracking and local search. Comparing IDB times over 20 runs with published results for the systematic TABLEAU, SATZ, GRASP, POSIT and RELSAT, and the local search algorithm WSAT, we find that on the smallest problem IDB ranks seventh, on the next it ranks sixth, on the next fourth, and on the largest second. This reflects its local search-like scalability.

TABLEAU is able to solve hanoi4 in 3.2 hours on an SGI Challenge (time given in SATLIB), and DLM in a mean of 6515 seconds over 10 runs on a Pentium-III 500MHz with Solaris 7. Hence IDB appears to be the fastest current algorithm on hanoi4.

Like most local search algorithms IDB is incomplete, but (Hoos 1998) defines an analogous property to completeness for randomised algorithms called approximate completeness, also called convergence in the literature: if the probability of finding an existing solution tends to 1 as the run time tends to infinity then the algorithm is convergent. The new SAT heuristics for IDB allow the convergence property to be proved as follows. Any search algorithm that periodically restarts with random variable assignments is clearly convergent, because each restart has a non-zero probability of finding a solution directly. Now IDB has a non-zero probability (p^(n-u-1), where n is the number of SAT variables, u the number of unassigned variables, and p > 0) of restarting, and on reassigning each variable it has a non-zero probability of selecting either the same value as previously used (1 - 1/u) or the negated value (1/u). Therefore IDB may discover a solution immediately following any dead-end, and is convergent. As pointed out by (Morris 1993), convergence does not prevent the time taken to find a solution from being unreasonably long. However, Hoos found that modifying local search algorithms to make them convergent often boosted performance, and we have found the same with IDB.

Discussion

IDB combines backtracker-like constraint handling with local search-like scalability, giving competitive results across a wide variety of problems. Nevertheless, it is not always the fastest approach for any given problem (for example WSAT is faster on many SAT benchmarks) and should be seen as complementary to existing approaches. Its main strength lies in its ability to solve problems that are too large for backtracking yet too structured for local search, as shown on geometric graph colouring. Previous results provide further evidence of this strength: an IDB-modified constraint solver for Golomb rulers out-performed the systematic original, and also a local search algorithm (Prestwich 2001b); an IDB-modified branch-and-bound algorithm for a hard optimisation problem out-performed the systematic original, and all known local search algorithms (Prestwich 2000b); and very competitive results were obtained on DIMACS maximum clique benchmarks using forward checking with IDB (Prestwich 2001b).

Should IDB be classified as a backtracker, as local search, as both or as a hybrid? It may appear to be simply an inferior version of Dynamic Backtracking (DB), sacrificing the important property of completeness to no good purpose. A counter-example to this view is the n-queens problem, on which DB performs like chronological backtracking (Jónsson and Ginsberg 1993), whereas IDB performs like local search. Another is the random 3-SAT problem, on which DB is slower than chronological backtracking (though a modified DB is similar) (Baker 1994) while IDB again performs like local search. We claim that IDB is in fact a local search algorithm, and in previous papers it has been referred to as Constrained Local Search. Our view is that it stochastically explores a space of consistent partial assignments while trying to minimise an objective function: the number of unassigned variables. The forward local moves are variable assignments, while the backward local moves are unassignments (backtracks). However, to some extent the question is academic: even if IDB is not local search, our results show that it captures its essence.

Other researchers have designed algorithms using backtracking techniques but with improved scalability. Iterative Sampling (Langley 1992) restarts a constructive search every time a dead-end is reached. Bounded Backtrack Search (Harvey 1995) is a hybrid of Iterative Sampling and chronological backtracking, alternating a limited amount of chronological backtracking with random restarts. (Gomes, Selman, and Kautz 1998) periodically restart chronological or intelligent backtracking with slightly randomised heuristics. (Ginsberg and McAllester 1994) describe a hybrid of the GSAT local search algorithm and Dynamic Backtracking that increases the flexibility in choice of backtracking variable. They note that to achieve total flexibility while preserving completeness would require exponential memory, and recommend a less flexible version using only polynomial memory. Weak Commitment Search (Yokoo 1994) builds consistent partial assignments, using greedy search (the min-conflict heuristic) to guide value selection. On reaching a dead-end it restarts and uses learning to avoid redundant search. Local Changes (Verfaillie and Schiex 1994) is a complete backtracking algorithm that uses conflict analysis. Limited Discrepancy Search (Harvey 1995) searches the neighbourhood of a consistent partial assignment, trying neighbours in increasing order of distance from the partial assignment. It is shown theoretically, and experimentally on job shop scheduling problems, to be superior to Iterative Sampling and chronological backtracking.
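The probabilistic value ordering that underlies the convergence argument above can be sketched as follows (a hypothetical helper, not the author's code):

```python
import random

def choose_value(last_value, num_unassigned, rng=random):
    """Probabilistic value ordering for the SAT variant: prefer the
    variable's last assigned truth value, but negate it with
    probability 1/u, where u is the number of currently unassigned
    variables."""
    u = max(num_unassigned, 1)
    if rng.random() < 1.0 / u:
        return not last_value   # negated value, probability 1/u
    return last_value           # previous value, probability 1 - 1/u
```

Because both truth values have non-zero probability at every reassignment, every total assignment is reachable after any dead-end, which is what makes the algorithm convergent in Hoos's sense.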

There are also many hybrid approaches. (Schaerf 1997) describes an algorithm that explores the space of all partial assignments (IDB explores only the consistent partial assignments). The Path-Repair Algorithm (Jussien and Lhomme 2000) is a generalisation of Schaerf's approach that includes learning, allowing complete versions to be devised. The two-phase algorithm of (Zhang and Zhang 1996) searches a space of partial assignments, alternating backtracking search with local search. (de Backer et al. 1997) generate partial assignments to key variables by local search, then pass them to a constraint solver that checks consistency. (Pesant and Gendreau 1996) use branch-and-bound to efficiently explore local search neighbourhoods. Large Neighbourhood Search (Shaw 1998) performs local search and uses backtracking to test the legality of moves. (Crawford 1993) uses local search within a complete SAT solver to select the best branching variable.

Future work will include investigating whether IDB can be made complete, thus combining all the advantages of systematic and non-systematic search. This could be achieved by learning techniques, but memory requirements may make this approach impractical for very large problems.

References

Baker, A. B. 1994. The Hazards of Fancy Backtracking. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1:288-293. Seattle, WA: AAAI Press.
Bayardo, R. J. Jr., and Schrag, R. 1997. Using CSP Look-Back Techniques to Solve Real World SAT Instances. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, 203-208. Providence, RI: AAAI Press.
Brélaz, D. 1979. New Methods to Color the Vertices of a Graph. Communications of the ACM 22(4):251-256.
Crawford, J. M. 1993. Solving Satisfiability Problems Using a Combination of Systematic and Local Search. In Second DIMACS Challenge: Cliques, Coloring, and Satisfiability, Rutgers University, NJ.
Crawford, J. M., and Auton, L. D. 1996. Experimental Results on the Crossover Point in Random 3SAT. Artificial Intelligence 81(1-2):31-57.
Culberson, J. C.; Beacham, A.; and Papp, D. 1995. Hiding Our Colors. In Proceedings of the CP'95 Workshop on Studying and Solving Really Hard Problems, Cassis, France.
Davis, M.; Logemann, G.; and Loveland, D. 1962. A Machine Program for Theorem Proving. Communications of the ACM 5:394-397.
de Backer, B.; Furnon, V.; Kilby, P.; Prosser, P.; and Shaw, P. 1997. Local Search in Constraint Programming: Application to the Vehicle Routing Problem. In Constraint Programming 97, Proceedings of the Workshop on Industrial Constraint-Directed Scheduling.
Freeman, J. W. 1994. Improvements to Propositional Satisfiability Search Algorithms. Ph.D. diss., University of Pennsylvania, Philadelphia.
Freuder, E. C.; Dechter, R.; Ginsberg, M. L.; Selman, B.; and Tsang, E. 1995. Systematic Versus Stochastic Constraint Satisfaction. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 2027-2032. Montreal, Quebec: Morgan Kaufmann.
Ginsberg, M. L. 1993. Dynamic Backtracking. Journal of Artificial Intelligence Research 1:25-46.
Ginsberg, M. L., and McAllester, D. A. 1994. GSAT and Dynamic Backtracking. In Proceedings of the Fourth International Conference on Principles of Knowledge Representation and Reasoning, 226-237. Bonn, Germany: Morgan Kaufmann.
Gomes, C.; Selman, B.; and Kautz, H. 1998. Boosting Combinatorial Search Through Randomization. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, 431-437. Madison, WI: AAAI Press.
Gu, J. 1992. Efficient Local Search for Very Large-Scale Satisfiability Problems. SIGART Bulletin 3(1):8-12.
Harvey, W. D. 1995. Nonsystematic Backtracking Search. Ph.D. diss., Stanford University.
Hoos, H. H. 1998. Stochastic Local Search -- Methods, Models, Applications. Ph.D. diss., Dept. of Computer Science, TU Darmstadt.
Johnson, D. S., and Trick, M. A., eds. 1996. Cliques, Coloring and Satisfiability: Second DIMACS Implementation Challenge, DIMACS Series in Discrete Mathematics and Theoretical Computer Science 26. American Mathematical Society.
Jónsson, A. K., and Ginsberg, M. L. 1993. Experimenting with New Systematic and Nonsystematic Search Techniques. In Proceedings of the AAAI Spring Symposium on AI and NP-Hard Problems. Stanford, California: AAAI Press.
Joslin, D. E., and Clements, D. P. 1999. Squeaky Wheel Optimization. Journal of Artificial Intelligence Research 10:353-373.
Jussien, N., and Lhomme, O. 2000. Local Search With Constraint Propagation and Conflict-Based Heuristics. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, 169-174. Austin, TX: AAAI Press.
Langley, P. 1992. Systematic and Nonsystematic Search Strategies. In Proceedings of the First International Conference on Artificial Intelligence Planning Systems, 145-152. University of Maryland: Morgan Kaufmann.
Li, C. M., and Anbulagan. 1997. Look-Ahead Versus Look-Back for Satisfiability Problems. In Proceedings of the Third International Conference on Principles and Practice of Constraint Programming, 341-355. Schloss Hagenburg, Austria: Lecture Notes in Computer Science 1330, Springer-Verlag.
Minton, S.; Johnston, M. D.; Philips, A. B.; and Laird, P. 1992. Minimizing Conflicts: A Heuristic Repair Method for Constraint Satisfaction and Scheduling Problems. Artificial Intelligence 58(1-3):160-205.
Morris, P. 1993. The Breakout Method for Escaping Local Minima. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 40-45. Washington, DC: AAAI Press.

Pesant, G., and Gendreau, M. 1996. A View of Local Search in Constraint Programming. In Proceedings of the Second International Conference on Principles and Practice of Constraint Programming, 353-366. Cambridge, MA: Lecture Notes in Computer Science 1118, Springer-Verlag.
Pothos, D. G., and Richards, E. B. 1995. An Empirical Study of Min-Conflict and Weak-Commitment Search. In Proceedings of the CP'95 Workshop on Studying and Solving Really Hard Problems, 140-146. Cassis, France.
Prestwich, S. D. 2000a. Stochastic Local Search in Constrained Spaces. In Proceedings of Practical Applications of Constraint Technology and Logic Programming, 27-39. Manchester, England: The Practical Applications Company Ltd.
Prestwich, S. D. 2000b. A Hybrid Search Architecture Applied to Hard Random 3-SAT and Low-Autocorrelation Binary Sequences. In Proceedings of the Sixth International Conference on Principles and Practice of Constraint Programming, 337-352. Singapore: Lecture Notes in Computer Science 1894, Springer-Verlag.
Prestwich, S. D. 2001a. Coloration Neighbourhood Search With Forward Checking. Annals of Mathematics and Artificial Intelligence, special issue on Large Scale Combinatorial Optimization and Constraints. Forthcoming.
Prestwich, S. D. 2001b. Trading Completeness for Scalability: Hybrid Search for Cliques and Rulers. In Proceedings of the Third International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, 159-174. Ashford, Kent, England. Forthcoming.
Resende, M. G. C., and Feo, T. 1996. A GRASP for Satisfiability. In (Johnson and Trick 1996), 499-520.
Schaerf, A. 1997. Combining Local Search and Look-Ahead for Scheduling and Constraint Satisfaction Problems. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, 1254-1259. Nagoya, Japan: Morgan Kaufmann.
Selman, B.; Levesque, H.; and Mitchell, D. 1992. A New Method for Solving Hard Satisfiability Problems. In Proceedings of the Tenth National Conference on Artificial Intelligence, 440-446. San Jose, CA: AAAI Press.
Selman, B.; Kautz, H.; and Cohen, B. 1994. Noise Strategies for Improving Local Search. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 337-343. Seattle, WA: AAAI Press.
Shaw, P. 1998. Using Constraint Programming and Local Search Methods to Solve Vehicle Routing Problems. In Proceedings of the Fourth International Conference on Principles and Practice of Constraint Programming, 417-431. Pisa, Italy: Lecture Notes in Computer Science 1520, Springer-Verlag.
Silva, J. P. M., and Sakallah, K. A. 1996. Conflict Analysis in Search Algorithms for Propositional Satisfiability. In Proceedings of the Eighth International Conference on Tools with Artificial Intelligence. Toulouse, France: IEEE Computer Society Press.
Verfaillie, G., and Schiex, T. 1994. Solution Reuse in Dynamic Constraint Satisfaction Problems. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 307-312. Seattle, WA: AAAI Press.
Wu, Z., and Wah, B. W. 2000. An Efficient Global-Search Strategy in Discrete Lagrangian Methods for Solving Hard Satisfiability Problems. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, 310-315. Austin, TX: AAAI Press.
Yokoo, M. 1994. Weak-Commitment Search for Solving Constraint Satisfaction Problems. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 313-318. Seattle, WA: AAAI Press.
Zhang, H. 1997. SATO: An Efficient Propositional Prover. In Proceedings of the Fourteenth International Conference on Automated Deduction, 272-275. Townsville, Australia: Lecture Notes in Computer Science 1249, Springer-Verlag.
Zhang, J., and Zhang, H. 1996. Combining Local Search and Backtracking Techniques for Constraint Satisfaction. In Proceedings of the Thirteenth National Conference on Artificial Intelligence and Eighth Conference on Innovative Applications of Artificial Intelligence, 369-374. Portland, OR: AAAI Press / MIT Press.
