Journal of Artificial Intelligence Research

Dynamic Backtracking

Matthew L. Ginsberg    ginsberg@cs.uoregon.edu

CIRL, University of Oregon

Eugene, OR, USA

Abstract

Because of their occasional need to return to shallow points in a search, existing backtracking methods can sometimes erase meaningful progress toward solving a search problem. In this paper, we present a method by which backtrack points can be moved deeper in the search space, thereby avoiding this difficulty. The technique developed is a variant of dependency-directed backtracking that uses only polynomial space while still providing useful control information and retaining the completeness guarantees provided by earlier approaches.

1. Introduction

Imagine that you are trying to solve some constraint-satisfaction problem, or csp. In the interests of definiteness, I will suppose that the csp in question involves coloring a map of the United States, subject to the restriction that adjacent states be colored differently.

Imagine we begin by coloring the states along the Mississippi, thereby splitting the remaining problem in two. We now begin to color the states in the western half of the country, coloring perhaps half a dozen of them before deciding that we are likely to be able to color the rest. Suppose also that the last state colored was Arizona.

At this point, we change our focus to the eastern half of the country. After all, if we can't color the eastern half because of our coloring choices for the states along the Mississippi, there is no point in wasting time completing the coloring of the western states.

We successfully color the eastern states and then return to the west. Unfortunately, we color New Mexico and Utah and then get stuck, unable to color, say, Nevada. What's more, backtracking doesn't help, at least in the sense that changing the colors for New Mexico and Utah alone does not allow us to proceed farther. Depth-first search would now have us backtrack to the eastern states, trying a new color for, say, New York in the vain hope that this would solve our problems out West.

This is obviously pointless; the blockade along the Mississippi makes it impossible for New York to have any impact on our attempt to color Nevada or other western states. What's more, we are likely to examine every possible coloring of the eastern states before addressing the problem that is actually the source of our difficulties.

The solutions that have been proposed to this involve finding ways to backtrack directly to some state that might actually allow us to make progress, in this case Arizona or earlier. Dependency-directed backtracking (Stallman & Sussman) involves a direct backtrack to the source of the difficulty; backjumping (Gaschnig) avoids the computational overhead of this technique by using syntactic methods to estimate the point to which backtrack is necessary.


In both cases, however, note that although we backtrack to the source of the problem, we backtrack over our successful solution to half of the original problem, discarding our solution to the problem of coloring the states in the East. And once again, the problem is worse than this: after we recolor Arizona, we are in danger of solving the East yet again before realizing that our new choice for Arizona needs to be changed after all. We won't examine every possible coloring of the eastern states, but we are in danger of rediscovering our successful coloring an exponential number of times.

This hardly seems sensible; a human problem solver working on this problem would simply ignore the East if possible, returning directly to Arizona and proceeding. Only if the states along the Mississippi needed new colors would the East be reconsidered, and even then only if no new coloring could be found for the Mississippi that was consistent with the eastern solution.

In this paper, we formalize this technique, presenting a modification to conventional search techniques that is capable of backtracking not only to the most recently expanded node, but also directly to a node elsewhere in the search tree. Because of the dynamic way in which the search is structured, we refer to this technique as dynamic backtracking.

A more specific outline is as follows. We begin in the next section by introducing a variety of notational conventions that allow us to cast both existing work and our new ideas in a uniform computational setting. Section 3 discusses backjumping, an intermediate between simple chronological backtracking and our ideas, which are themselves presented in Section 4. An example of dynamic backtracking in use appears in Section 5, and an experimental analysis of the technique in Section 6. A summary of our results and suggestions for future work are in Section 7. All proofs have been deferred to an appendix in the interests of continuity of exposition.

2. Preliminaries

Definition 2.1 By a problem $(I, V, \Gamma)$ we will mean a set $I$ of variables; for each $i \in I$, there is a set $V_i$ of possible values for the variable $i$. $\Gamma$ is a set of constraints, each a pair $(J, P)$ where $J = (j_1, \ldots, j_k)$ is an ordered subset of $I$ and $P$ is a subset of $V_{j_1} \times \cdots \times V_{j_k}$.

A solution to the csp is a set $\{v_i\}$ of values for each of the variables in $I$, such that $v_i \in V_i$ for each $i$ and, for every constraint $(J, P)$ of the above form in $\Gamma$, $(v_{j_1}, \ldots, v_{j_k}) \in P$.

In the example of the introduction, $I$ is the set of states and $V_i$ is the set of possible colors for the state $i$. For each constraint, the first part of the constraint is a pair of adjacent states, and the second part is a set of allowable color combinations for these states.

Our basic plan in this paper is to present formal versions of the search methods described in the introduction, beginning with simple depth-first search and proceeding to backjumping and dynamic backtracking. As a start, we make the following definition of a partial solution to a csp.

Definition 2.2 Let $(I, V, \Gamma)$ be a csp. By a partial solution to the csp we mean an ordered subset $J \subseteq I$ and an assignment of a value to each variable in $J$.

We will denote a partial solution by a tuple of ordered pairs, where each ordered pair $(i, v)$ assigns the value $v$ to the variable $i$. For a partial solution $P$, we will denote by $\bar P$ the set of variables assigned values by $P$.

Constraint-satisfaction problems are solved in practice by taking partial solutions and extending them by assigning values to new variables. In general, of course, not any value can be assigned to a variable, because some are inconsistent with the constraints. We therefore make the following definition:

Definition 2.3 Given a partial solution $P$ to a csp, an eliminating explanation for a variable $i$ is a pair $(v, S)$ where $v \in V_i$ and $S \subseteq \bar P$. The intended meaning is that $i$ cannot take the value $v$ because of the values already assigned by $P$ to the variables in $S$.

An elimination mechanism $\varepsilon$ for a csp is a function that accepts as arguments a partial solution $P$ and a variable $i \notin \bar P$. The function returns a (possibly empty) set $\varepsilon(P, i)$ of eliminating explanations for $i$.

For a set $E$ of eliminating explanations, we will denote by $\hat E$ the values that have been identified as eliminated, ignoring the reasons given. We therefore denote by $\hat\varepsilon(P, i)$ the set of values eliminated by elements of $\varepsilon(P, i)$.

Note that the above definition is somewhat flexible with regard to the amount of work done by the elimination mechanism: all values that violate completed constraints might be eliminated, or some amount of lookahead might be done. We will, however, make the following assumptions about all elimination mechanisms:

1. They are correct. For a partial solution $P$, if the value $v_i \notin \hat\varepsilon(P, i)$, then every constraint $(S, T)$ in $\Gamma$ with $S \subseteq \bar P \cup \{i\}$ is satisfied by the values in the partial solution and the value $v_i$ for $i$. (These are the constraints that are complete after the value $v_i$ is assigned to $i$.)

2. They are complete. Suppose that $P$ is a partial solution to a csp, and that there is some solution that extends $P$ while assigning the value $v$ to $i$. If $P'$ is an extension of $P$ with $(v, E) \in \varepsilon(P', i)$, then
$$E \cap (\bar{P'} - \bar P) \neq \emptyset.$$
In other words, whenever $P$ can be successfully extended after assigning $v$ to $i$ but $P'$ cannot be, at least one element of $\bar{P'} - \bar P$ is identified as a possible reason for the problem.

3. They are concise. For a partial solution $P$, variable $i$ and eliminated value $v$, there is at most a single element of the form $(v, E) \in \varepsilon(P, i)$. (Only one reason is given why the variable $i$ cannot have the value $v$.)

Lemma Let be a complete elimination mechanism for a csp let P be a partial solu

tion to this csp and let i P Now if P can be successful ly extended to a complete solution

b

after assigning i the value v then v P i

I apologize for the swarm of definitions, but they allow us to give a clean description of depth-first search.

Algorithm 2.5 (Depth-first search) Given as inputs a constraint-satisfaction problem and an elimination mechanism $\varepsilon$:

1. Set $P = \emptyset$. ($P$ is a partial solution to the csp.) Set $E_i = \emptyset$ for each $i \in I$. ($E_i$ is the set of values that have been eliminated for the variable $i$.)

2. If $\bar P = I$, so that $P$ assigns a value to every element in $I$, it is a solution to the original problem; return it. Otherwise, select a variable $i \in I - \bar P$. Set $E_i = \hat\varepsilon(P, i)$, the values that have been eliminated as possible choices for $i$.

3. Set $S = V_i - E_i$, the set of remaining possibilities for $i$. If $S$ is nonempty, choose an element $v \in S$. Add $(i, v)$ to $P$, thereby setting $i$'s value to $v$, and return to step 2.

4. If $S$ is empty, let $(j, v_j)$ be the last entry in $P$; if there is no such entry, return failure. Remove $(j, v_j)$ from $P$, add $v_j$ to $E_j$, set $i = j$ and return to step 3.

We have written the algorithm so that it returns a single answer to the csp; the modification to accumulate all such answers is straightforward.
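A minimal Python sketch of Algorithm 2.5 follows; it is an illustration under the CSP representation assumed above, with a deliberately simple elimination mechanism that removes exactly the values violating completed constraints.

```python
def violated(constraints, assignment):
    """True if some constraint all of whose variables are assigned is unsatisfied."""
    for scope, allowed in constraints:
        if all(x in assignment for x in scope):
            if tuple(assignment[x] for x in scope) not in allowed:
                return True
    return False

def eliminated_values(constraints, assignment, i, values):
    """hat-epsilon(P, i): the values of i that would violate a completed constraint."""
    out = set()
    for v in values:
        trial = dict(assignment)
        trial[i] = v
        if violated(constraints, trial):
            out.add(v)
    return out

def depth_first_search(I, V, constraints):
    """Algorithm 2.5: chronological backtracking driven by the sets E_i."""
    P = []                                   # the partial solution, a list of (i, v)
    E = {}                                   # E_i: values eliminated for variable i
    while True:
        assignment = dict(P)
        if len(P) == len(I):                 # step 2: P assigns every variable
            return assignment
        i = next(x for x in I if x not in assignment)
        E[i] = eliminated_values(constraints, assignment, i, V[i])
        while True:
            S = V[i] - E[i]                  # step 3: remaining possibilities for i
            if S:
                P.append((i, sorted(S)[0]))
                break
            if not P:                        # step 4: nothing left to undo
                return None                  # the csp has no solution
            j, vj = P.pop()                  # chronological backtrack
            E[j].add(vj)                     # vj has now failed for j
            i = j
```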

The problem with Algorithm 2.5 is that it looks very little like conventional depth-first search, since instead of recording the unexpanded children of any particular node, we are keeping track of the failed siblings of that node. But we have the following:

Lemma 2.6 At any point in the execution of Algorithm 2.5, if the last element of the partial solution $P$ assigns a value to the variable $i$, then the unexplored siblings of the current node are those that assign to $i$ the values in $V_i - E_i$.

Proposition 2.7 Algorithm 2.5 is equivalent to depth-first search, and therefore complete.

As we have remarked, the basic difference between Algorithm 2.5 and a more conventional description of depth-first search is the inclusion of the elimination sets $E_i$. The conventional description expects nodes to include pointers back to their parents; the siblings of a given node are found by examining the children of that node's parent. Since we will be reorganizing the space as we search, this is impractical in our framework.

It might seem that a more natural solution to this difficulty would be to record not the values that have been eliminated for a variable $i$, but those that remain to be considered. The technical reason that we have not done this is that it is much easier to maintain elimination information as the search progresses. To understand this at an intuitive level, note that when the search backtracks, the conclusion that has implicitly been drawn is that a particular node fails to expand to a solution, as opposed to a conclusion about the currently unexplored portion of the search space. It should be little surprise that the most efficient way to manipulate this information is by recording it in approximately this form.

3. Backjumping

How are we to describe dependency-directed backtracking or backjumping in this setting? In these cases, we have a partial solution and have been forced to backtrack; these more sophisticated backtracking mechanisms use information about the reason for the failure to identify backtrack points that might allow the problem to be addressed. As a start, we need to modify Algorithm 2.5 to maintain the explanations for the eliminated values.

Algorithm 3.1 Given as inputs a constraint-satisfaction problem and an elimination mechanism $\varepsilon$:

1. Set $P = \emptyset$ and $E_i = \emptyset$ for each $i \in I$. ($E_i$ is a set of eliminating explanations for $i$.)

2. If $\bar P = I$, return $P$. Otherwise, select a variable $i \in I - \bar P$. Set $E_i = \varepsilon(P, i)$.

3. Set $S = V_i - \hat E_i$. If $S$ is nonempty, choose an element $v \in S$. Add $(i, v)$ to $P$ and return to step 2.

4. If $S$ is empty, let $(j, v_j)$ be the last entry in $P$; if there is no such entry, return failure. Remove $(j, v_j)$ from $P$. We must have $\hat E_i = V_i$, so that every value for $i$ has been eliminated; let $E$ be the set of all variables appearing in the explanations for each eliminated value. Add $(v_j, E - \{j\})$ to $E_j$, set $i = j$, and return to step 3.

Lemma 3.2 Let $P$ be a partial solution obtained during the execution of Algorithm 3.1, and let $i \in \bar P$ be a variable assigned a value by $P$. Now if $P' \subseteq P$ can be successfully extended to a complete solution after assigning $i$ the value $v$, but $(v, E) \in E_i$ for some explanation $E$, we must have
$$E \cap (\bar P - \bar{P'}) \neq \emptyset.$$

In other words, the assignment of a value to some variable in $\bar P - \bar{P'}$ is correctly identified as the source of the problem.

Note that in step 4 of the algorithm, we could have added $(v_j, E \cap \bar P)$ instead of $(v_j, E - \{j\})$ to $E_j$; either way, the idea is to remove from $E$ any variables that are no longer assigned values by $P$.

In backjumping, we now simply change our backtrack method: instead of removing a single entry from $P$ and returning to the variable assigned a value prior to the problematic variable $i$, we return to a variable that has actually had an impact on $i$. In other words, we return to some variable in the set $E$.

Algorithm 3.3 (Backjumping) Given as inputs a constraint-satisfaction problem and an elimination mechanism $\varepsilon$:

1. Set $P = \emptyset$ and $E_i = \emptyset$ for each $i \in I$.

2. If $\bar P = I$, return $P$. Otherwise, select a variable $i \in I - \bar P$. Set $E_i = \varepsilon(P, i)$.

3. Set $S = V_i - \hat E_i$. If $S$ is nonempty, choose an element $v \in S$. Add $(i, v)$ to $P$ and return to step 2.

4. If $S$ is empty, we must have $\hat E_i = V_i$. Let $E$ be the set of all variables appearing in the explanations for each eliminated value.

5. If $E = \emptyset$, return failure. Otherwise, let $(j, v_j)$ be the last entry in $P$ such that $j \in E$. Remove from $P$ this entry and any entry following it. Add $(v_j, E \cap \bar P)$ to $E_j$, set $i = j$, and return to step 3.

In step 5 we add $(v_j, E \cap \bar P)$ to $E_j$, removing from $E$ any variables that are no longer assigned values by $P$.

Proposition 3.4 Backjumping is complete and always expands fewer nodes than does depth-first search.
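The following Python sketch shows how Algorithm 3.3 might look in code; it is an illustration only, and it assumes an elimination mechanism passed in as a function `eliminate(assignment, i, values)` that returns a dictionary mapping each eliminated value of $i$ to the set of assigned variables blamed for it.

```python
def backjumping(I, V, eliminate):
    """A sketch of Algorithm 3.3 (backjumping)."""
    P = []                                       # partial solution, a list of (i, v)
    E = {}                                       # E_i: dict value -> explanation set
    while True:
        assignment = dict(P)
        if len(P) == len(I):                     # step 2: a complete solution
            return assignment
        i = next(x for x in I if x not in assignment)
        E[i] = eliminate(assignment, i, V[i])
        while True:
            S = V[i] - set(E[i])                 # step 3: remaining possibilities
            if S:
                P.append((i, sorted(S)[0]))
                break
            # step 4: every value of i is eliminated; gather the culprit variables
            culprits = set().union(*E[i].values()) if E[i] else set()
            if not culprits:                     # step 5: nothing to blame
                return None
            while P[-1][0] not in culprits:      # explanations only mention assigned
                P.pop()                          # variables, so this loop terminates
            j, vj = P.pop()                      # erase j and everything after it
            live = {k for k, _ in P}
            E[j][vj] = culprits & live           # add (v_j, E ∩ Pbar) to E_j
            i = j
```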

Let us have a look at this in our map-coloring example. If we have a partial coloring $P$ and are looking at a specific state $i$, suppose that we denote by $C$ the set of colors that are obviously illegal for $i$ because they conflict with a color already assigned to one of $i$'s neighbors.

One possible elimination mechanism returns as $\varepsilon(P, i)$ a list of $(c, \bar P)$ for each color $c \in C$ that has been used to color a neighbor of $i$. This reproduces depth-first search, since we gradually try all possible colors but have no idea what went wrong when we need to backtrack, since every colored state is included in $\bar P$. A far more sensible choice would take $\varepsilon(P, i)$ to be a list of $(c, \{n\})$, where $n$ is a neighbor that is already colored $c$. This would ensure that we backjump to a neighbor of $i$ if no coloring for $i$ can be found.

If this causes us to backjump to another state $j$, we will add $i$'s neighbors to the eliminating explanation for $j$'s original color, so that if we need to backtrack still further, we consider neighbors of either $i$ or $j$. This is as it should be, since changing the color of one of $i$'s other neighbors might allow us to solve the coloring problem by reverting to our original choice of color for the state $j$.
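As a sketch of the second, more sensible mechanism just described (again illustrative, not the paper's code), the following returns a single-neighbor explanation for each conflicting color:

```python
def neighbor_elimination(adjacencies):
    """Make an elimination mechanism for map coloring: color c is eliminated for
    region i, blamed on a single already-colored neighbor n that is colored c."""
    def eliminate(assignment, i, values):
        explanations = {}
        for n in adjacencies[i]:
            c = assignment.get(n)                  # None if n is not yet colored
            if c in values and c not in explanations:
                explanations[c] = {n}              # one concise reason per color
        return explanations
    return eliminate

# e.g. backjumping(I, V, neighbor_elimination(adjacencies)) with the earlier sketches
```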

We also have:

Proposition 3.5 The amount of space needed by backjumping is $o(iv)$, where $i = |I|$ is the number of variables in the problem and $v$ is the number of values for that variable with the largest value set $V_i$.

This result contrasts sharply with an approach to csps that relies on truth-maintenance techniques to maintain a list of nogoods (de Kleer). There, the number of nogoods found can grow linearly with the time taken for the analysis, and this will typically be exponential in the size of the problem. Backjumping avoids this problem by resetting the set $E_i$ of eliminating explanations in step 2 of Algorithm 3.3.

The description that we have given is quite similar to that developed in (Bruynooghe). The explanations there are somewhat coarser than ours, listing all of the variables that have been involved in any eliminating explanation for a particular variable in the csp, but the idea is essentially the same. Bruynooghe's eliminating explanations can be stored in $o(i)$ space instead of $o(iv)$, but the associated loss of information makes the technique less effective in practice. This earlier work is also a description of backjumping only, since intermediate information is erased as the search proceeds.

4. Dynamic backtracking

We finally turn to new results. The basic problem with Algorithm 3.3 is not that it backjumps to the wrong place, but that it needlessly erases a great deal of the work that has been done thus far. At the very least, we can retain the values selected for variables that are backjumped over, in some sense moving the backjump variable to the end of the partial solution in order to replace its value without modifying the values of the variables that followed it.

There is an additional modification that will probably be clearest if we return to the example of the introduction. Suppose that in this example we color only some of the eastern states before returning to the western half of the country. We reorder the variables in order to backtrack to Arizona, and eventually succeed in coloring the West without disturbing the colors used in the East.

Unfortunately, when we return East, backtracking is required, and we find ourselves needing to change the coloring on some of the eastern states with which we dealt earlier. The ideas that we have presented will allow us to avoid erasing our solution to the problems out West, but if the search through the eastern states is to be efficient, we will need to retain the information we have about the portion of the East's search space that has been eliminated. After all, if we have determined that New York cannot be colored yellow, our changes in the West will not reverse this conclusion; the Mississippi really does isolate one section of the country from the other.

The machinery needed to capture this sort of reasoning is already in place. When we backjump over a variable $k$, we should retain not only the choice of value for $k$, but also $k$'s elimination set. We do, however, need to remove from this elimination set any entry that involves the eventual backtrack variable $j$, since these entries are no longer valid; they depend on the assumption that $j$ takes its old value, and this assumption is now false.

Algorithm 4.1 (Dynamic backtracking I) Given as inputs a constraint-satisfaction problem and an elimination mechanism $\varepsilon$:

1. Set $P = \emptyset$ and $E_i = \emptyset$ for each $i \in I$.

2. If $\bar P = I$, return $P$. Otherwise, select a variable $i \in I - \bar P$. Set $E_i = E_i \cup \varepsilon(P, i)$.

3. Set $S = V_i - \hat E_i$. If $S$ is nonempty, choose an element $v \in S$. Add $(i, v)$ to $P$ and return to step 2.

4. If $S$ is empty, we must have $\hat E_i = V_i$; let $E$ be the set of all variables appearing in the explanations for each eliminated value.

5. If $E = \emptyset$, return failure. Otherwise, let $(j, v_j)$ be the last entry in $P$ such that $j \in E$. Remove $(j, v_j)$ from $P$, and for each variable $k$ assigned a value after $j$, remove from $E_k$ any eliminating explanation that involves $j$. Set
$$E_j = E_j \cup \varepsilon(P, j) \cup \{(v_j, E \cap \bar P)\},$$
so that $v_j$ is eliminated as a value for $j$ because of the values taken by variables in $E \cap \bar P$. (The inclusion of the term $\varepsilon(P, j)$ incorporates new information from variables that have been assigned values since the original assignment of $v_j$ to $j$.) Now set $i = j$ and return to step 3.

Theorem 4.2 Dynamic backtracking always terminates and is complete. It continues to satisfy Proposition 3.5, and can be expected to expand fewer nodes than backjumping, provided that the goal nodes are distributed randomly in the search space.

The essential difference between dynamic and dependency-directed backtracking is that the structure of our eliminating explanations means that we only save nogood information based on the current values of assigned variables; if a nogood depends on outdated information, we drop it. By doing this, we avoid the need to retain an exponential amount of nogood information. What makes this technique valuable is that, as stated in the theorem, termination is still guaranteed.

There is one trivial modification that we can make to Algorithm 4.1 that is quite useful in practice. After removing the current value for the backtrack variable $j$, Algorithm 4.1 immediately replaces it with another. But there is no real reason to do this; we could instead pick a value for an entirely different variable.

Algorithm 4.3 (Dynamic backtracking) Given as inputs a constraint-satisfaction problem and an elimination mechanism $\varepsilon$:

1. Set $P = \emptyset$ and $E_i = \emptyset$ for each $i \in I$.

2. If $\bar P = I$, return $P$. Otherwise, select a variable $i \in I - \bar P$. Set $E_i = E_i \cup \varepsilon(P, i)$.

3. Set $S = V_i - \hat E_i$. If $S$ is nonempty, choose an element $v \in S$. Add $(i, v)$ to $P$ and return to step 2.

4. If $S$ is empty, we must have $\hat E_i = V_i$; let $E$ be the set of all variables appearing in the explanations for each eliminated value.

5. If $E = \emptyset$, return failure. Otherwise, let $(j, v_j)$ be the last entry in $P$ that binds a variable appearing in $E$. Remove $(j, v_j)$ from $P$, and for each variable $k$ assigned a value after $j$, remove from $E_k$ any eliminating explanation that involves $j$. Add $(v_j, E \cap \bar P)$ to $E_j$ and return to step 2.

5. An example

In order to make Algorithm 4.3 a bit clearer, suppose that we consider a small map-coloring problem in detail. The map is shown in Figure 1 and consists of five countries: Albania, Bulgaria, Czechoslovakia, Denmark and England. We will assume (wrongly) that the countries border each other as shown in the figure, where countries are denoted by nodes and border one another if and only if there is an arc connecting them.

In coloring the map, we can use the three colors red, yellow and blue. We will typically abbreviate the country names to single letters in the obvious way.

We begin our search with Albania, deciding (say) to color it red. When we now look at Bulgaria, no colors are eliminated, because Albania and Bulgaria do not share a border; we decide to color Bulgaria yellow. This is a mistake.

We now go on to consider Czechoslovakia; since it borders Albania, the color red is eliminated. We decide to color Czechoslovakia blue, and the situation is now this:

Figure 1: A small map-coloring problem.

    country          color    red    yellow   blue
    Albania          red
    Bulgaria         yellow
    Czechoslovakia   blue     {A}
    Denmark
    England

For each country, we indicate its current color and the eliminating explanations that mean it cannot be colored each of the three colors, when such explanations exist. We now look at Denmark.

Denmark cannot be colored red because of its border with Albania, and cannot be colored yellow because of its border with Bulgaria; it must therefore be colored blue. But now England cannot be colored any color at all, because of its borders with Albania, Bulgaria and Denmark, and we therefore need to backtrack to one of these three countries. At this point, the elimination lists are as follows:

    country          color    red    yellow   blue
    Albania          red
    Bulgaria         yellow
    Czechoslovakia   blue     {A}
    Denmark          blue     {A}    {B}
    England                   {A}    {B}      {D}

We backtrack to Denmark, because it is the most recent of the three possibilities, and begin by removing any eliminating explanation involving Denmark from the above table, to get:

    country          color    red    yellow   blue
    Albania          red
    Bulgaria         yellow
    Czechoslovakia   blue     {A}
    Denmark                   {A}    {B}
    England                   {A}    {B}

Next, we add to Denmark's elimination list the pair

    (blue, {A, B})

This indicates, correctly, that because of the current colors for Albania and Bulgaria, Denmark cannot be colored blue (because of the subsequent dead end at England). Since every color is now eliminated, we must backtrack to a country in the set {A, B}. Changing Czechoslovakia's color won't help, and we must deal with Bulgaria instead. The elimination lists are now:

    country          color    red    yellow   blue
    Albania          red
    Bulgaria
    Czechoslovakia   blue     {A}
    Denmark                   {A}    {B}      {A,B}
    England                   {A}    {B}

We remove the eliminating explanations involving Bulgaria, and also add to Bulgaria's elimination list the pair

    (yellow, {A})

indicating, correctly, that Bulgaria cannot be colored yellow because of the current choice of color (red) for Albania. The situation is now:

    country          color    red    yellow   blue
    Albania          red
    Czechoslovakia   blue     {A}
    Bulgaria                         {A}
    Denmark                   {A}
    England                   {A}

We have moved Bulgaria past Czechoslovakia to reflect the search reordering in the algorithm. We can now complete the problem by coloring Bulgaria red, Denmark either yellow or blue, and England the color not used for Denmark.

This example is almost trivially simple, of course; the thing to note is that when we changed the color for Bulgaria, we retained both the blue color for Czechoslovakia and the information indicating that none of Czechoslovakia, Denmark and England could be red. In more complex examples this information may be very hard-won, and retaining it may save us a great deal of subsequent search effort.
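Readers following along in code can solve this same five-country problem by wiring together the illustrative helpers sketched earlier; `map_coloring_csp`, `neighbor_elimination` and `dynamic_backtracking` below are those hypothetical functions, not part of the paper.

```python
# The borders implied by Figure 1 and the elimination tables above.
borders = {"Albania":        ["Czechoslovakia", "Denmark", "England"],
           "Bulgaria":       ["Denmark", "England"],
           "Czechoslovakia": ["Albania"],
           "Denmark":        ["Albania", "Bulgaria", "England"],
           "England":        ["Albania", "Bulgaria", "Denmark"]}

I, V, _ = map_coloring_csp(borders, {"red", "yellow", "blue"})
solution = dynamic_backtracking(I, V, neighbor_elimination(borders))
print(solution)        # a legal three-coloring of the five countries
```

The sketch's consistency checking is carried entirely by the elimination mechanism, which is why the explicit constraint list is not passed in.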

Another feature of this specific example (and of the example of the introduction as well) is that the computational benefits of dynamic backtracking are a consequence of the automatic realization that the problem splits into disjoint subproblems. Other authors have also discussed the idea of applying divide-and-conquer techniques to csps (Seidel; Zabih), but their methods suffer from the disadvantage that they constrain the order in which unassigned variables are assigned values, perhaps at odds with the common heuristic of assigning values first to those variables that are most tightly constrained. Dynamic backtracking can also be expected to be of use in situations where the problem in question does not split into two or more disjoint subproblems.

6. Experimentation

Dynamic backtracking has been incorporated into the crossword-puzzle generation program described in (Ginsberg, Frank, Halpin, & Torrance), and leads to significant performance improvements in that restricted domain. More specifically, the method was tested on the problem of generating puzzles of a range of sizes; each puzzle was attempted repeatedly using both dynamic backtracking and simple backjumping. The dictionary was shuffled between solution attempts, and a fixed maximum number of backtracks was permitted before the program was deemed to have failed.

In both cases, the algorithms were extended to include iterative broadening (Ginsberg & Harvey), the cheapest-first heuristic, and forward checking. Cheapest-first has also been called "most constrained first" and selects for instantiation that variable with the fewest number of remaining possibilities, i.e., that variable for which it is cheapest to enumerate the possible values (Smith & Genesereth). Forward checking prunes the set of possibilities for crossing words whenever a new word is entered, and constitutes our experimental choice of elimination mechanism: at any point, words for which there is no legal crossing word are eliminated. This ensures that no word will be entered into the crossword if the word has no potential crossing words at some point. The cheapest-first heuristic would identify the problem at the next step in the search, but forward checking reduces the number of backtracks substantially. The least-constraining heuristic (Ginsberg et al.) was not used; this heuristic suggests that each word slot be filled with the word that minimally constrains the subsequent search. The heuristic was not used because it would invalidate the technique of shuffling the dictionary between solution attempts in order to gather useful statistics.
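As an aside, the cheapest-first rule slots directly into the variable-selection step of the sketches given earlier; the few lines below are an illustrative rendering, not the experimental code.

```python
def cheapest_first(I, V, E, assignment):
    """Select the unassigned variable with the fewest remaining values, i.e. the
    variable that is cheapest to enumerate (most constrained first)."""
    unassigned = [x for x in I if x not in assignment]
    return min(unassigned, key=lambda x: len(V[x] - set(E.get(x, {}))))
```

In the dynamic-backtracking sketch of Section 4, this call would replace the line that picks the next variable in a fixed order.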

The table in Figure 2 indicates the number of successful solution attempts, out of the total number attempted, for each of the two methods on each of the crossword frames. Dynamic backtracking is more successful in six cases and less successful in none.

With regard to the number of nodes expanded by the two methods, consider the data presented in Figure 3, where we graph the average number of backtracks needed by the two methods. Although initially comparable, dynamic backtracking provides increasing computational savings as the problems become more difficult. A somewhat broader set of experiments is described in (Jonsson & Ginsberg), and leads to similar conclusions.

There are some examples in (Jonsson & Ginsberg) where dynamic backtracking leads to performance degradation, however; a typical case appears in Figure 4.

[Footnote: I am indebted to David McAllester for these observations.]

[Footnote: Not every run is plotted in Figure 3, because no point is shown where backjumping was unable to solve the problem.]

[Footnote: The worst performance degradation observed was a small constant factor.]

Figure 2: Number of problems solved successfully (for each frame, the counts for dynamic backtracking and for backjumping).

Figure 3: Number of backtracks needed (dynamic backtracking vs. backjumping).

Figure 4: A difficult problem for dynamic backtracking. Two regions are shown, together with the countries A and B.

In this figure, we first color A, then B, then the countries in the first region, and then get stuck in the second region. We now presumably backtrack directly to B, leaving the coloring of the first region alone. But this may well be a mistake: the colors in the first region will restrict our choices for B, perhaps making the subproblem consisting of A, B and the second region more difficult than it might be. If the first region were easy to color, we would have been better off erasing it, even though we didn't need to.

This analysis suggests that dependency-directed backtracking should also fare worse on those coloring problems where dynamic backtracking has trouble, and we are currently extending the experiments of (Jonsson & Ginsberg) to confirm this. If this conjecture is borne out, a variety of solutions come to mind. We might, for example, record how many backtracks are made to a node such as B in the above figure, and then use this to determine that flexibility at B is more important than retaining the choices made in the first region. The difficulty of finding a coloring for that region can also be determined from the number of backtracks involved in the search.

7. Summary

7.1 Why it works

There are two separate ideas that we have exploited in the development of Algorithm 4.3 and the others leading up to it. The first, and easily the most important, is the notion that it is possible to modify the variable order on the fly, in a way that allows us to retain the results of earlier work when backtracking to a variable that was assigned a value early in the search.

This reordering should not be confused with the work of authors who have suggested a dynamic choice among the variables that remain to be assigned values (Dechter & Meiri; Ginsberg et al.; Purdom, Brown, & Robertson; Zabih & McAllester); we are instead reordering the variables that have been assigned values in the search thus far.

Another way to look at this idea is that we have found a way to erase the value given to a variable directly, as opposed to backtracking to it. This idea has also been explored by Minton et al. (Minton, Johnston, Philips, & Laird) and by Selman et al. (Selman, Levesque, & Mitchell); these authors also directly replace values assigned to variables in satisfiability problems. Unfortunately, the heuristic repair method used is incomplete, because no dependency information is retained from one state of the problem solver to the next.

There is a third way to view this as well. The space that we are examining is really a graph as opposed to a tree; we reach the same point by coloring Albania blue and then Bulgaria red as if we color them in the opposite order. When we decide to backjump from a particular node in the search space, we know that we need to back up until some particular property of that node ceases to hold, and the key idea is that by backtracking along a path other than the one by which the node was generated, we may be able to backtrack only slightly when we would otherwise need to retreat a great deal. This observation is interesting because it may well apply to problems other than csps. Unfortunately, it is not clear how to guarantee completeness for a search that discovers a node using one path and backtracks using another.

The other idea is less novel. As we have already remarked, our use of eliminating explanations is quite similar to the use of nogoods in the atms community; the principal difference is that we attach the explanations to the variables they impact and drop them when they cease to be relevant. (They might become relevant again later, of course.) This avoids the prohibitive space requirements of systems that permanently cache the results of their nogood calculations; this observation, too, may be extensible beyond the domain of csps specifically. Again, there are other ways to view this: Gaschnig's notion of backmarking (Gaschnig) records similar information about the reason that particular portions of a search space are known not to contain solutions.

7.2 Future work

There are a variety of ways in which the techniques we have presented can be extended; in this section we sketch a few of the more obvious ones.

7.2.1 Backtracking to older culprits

One extension to our work involves lifting the restriction in Algorithm 4.3 that the variable erased always be the most recently assigned member of the set $E$.

In general, we cannot do this while retaining the completeness of the search. Consider the following example.

Imagine that our csp involves three variables $x$, $y$ and $z$, each of which can take the value 1 or 2. Further suppose that this csp has no solutions, in that after we pick any two values for $x$ and for $y$, we realize that there is no suitable choice for $z$.

We begin by taking $x = y = 1$; when we realize the need to backtrack, we introduce the nogood
$$x = 1 \rightarrow y \neq 1 \qquad (1)$$
and replace the value for $y$ with $y = 2$.

This fails too, but now suppose that we were to decide to backtrack to $x$, introducing the new nogood
$$y = 2 \rightarrow x \neq 1 \qquad (2)$$
We change $x$'s value to 2 and erase (1).

This also fails. We decide that $y$ is the problem and change its value to 1, introducing the nogood
$$x = 2 \rightarrow y \neq 2 \qquad (3)$$
but erasing (2). And when this fails, we are in danger of returning to $x = y = 1$, which we eliminated at the beginning of the example. This loop may cause a modified version of the dynamic backtracking algorithm to fail to terminate.

In terms of the proof of Theorem 4.2, the nogoods discovered already include information about all assigned variables, so there is no difference between these nogoods and their modified counterparts. When we drop (1) in favor of (2), we are no longer in a position to recover it.

We can deal with this by placing conditions on the variables to which we choose to backtrack; the conditions need to be defined so that the proof of Theorem 4.2 continues to hold. Experimentation indicates that loops of the form we have described are extremely rare in practice; it may also be possible to detect them directly, and thereby retain more substantial freedom in the choice of backtrack point.

This freedom of backtrack raises an important question that has not yet been addressed in the literature: when backtracking to avoid a difficulty of some sort, to where should one backtrack?

Previous work has been constrained to backtrack no further than the most recent choice that might impact the problem in question; any other decision would be both incomplete and inefficient. Although an extension of Algorithm 4.3 need not operate under this restriction, we have given no indication of how the backtrack point should be selected.

There are several easily identified factors that can be expected to bear on this choice. The first is that there remains a reason to expect backtracking to chronologically recent choices to be the most effective: these choices can be expected to have contributed to the fewest eliminating explanations, and there is obvious advantage to retaining as many eliminating explanations as possible from one point in the search to the next. It is possible, however, to simply identify the backtrack point that affects the fewest eliminating explanations and to use that.

Alternatively, it might be important to backtrack to the choice point for which there will be as many new choices as possible; as an extreme example, if there is a variable $i$ for which every value other than its current one has already been eliminated for other reasons, backtracking to $i$ is guaranteed to generate another backtrack immediately, and should probably be avoided if possible.

[Footnote: Another solution appears in (McAllester).]

Finally, there is some measure of the directness with which a variable bears on a problem. If we are unable to find a value for a particular variable $i$, it is probably sensible to backtrack to a second variable that shares a constraint with $i$ itself, as opposed to some variable that affects $i$ only indirectly.

How are these competing considerations to be weighed? I have no idea. But the framework we have developed is interesting because it allows us to work on this question. In more basic terms, we can now debug partial solutions to csps directly, moving laterally through the search space in an attempt to remain as close to a solution as possible. This sort of lateral movement seems central to human solution of difficult search problems, and it is encouraging to begin to understand it in a formal way.

7.2.2 Dependency pruning

It is often the case that when one value for a variable is eliminated while solving a csp, others are eliminated as well. As an example, in solving a scheduling problem, a particular choice of time (say $t$) may be eliminated for a task $A$ because there then isn't enough time between $A$ and a subsequent task $B$; in this case, all later times can obviously be eliminated for $A$ as well.

Formalizing this can be subtle; after all, a later time for $A$ isn't uniformly worse than an earlier time, because there may be other tasks that need to precede $A$, and making $A$ later makes that part of the schedule easier. It's the problem with $B$ alone that forces $A$ to be earlier; once again, the analysis depends on the ability to maintain dependency information as the search proceeds.

We can formalize this as follows. Given a csp $(I, V, \Gamma)$, suppose that the value $v$ has been assigned to some $i \in I$. Now we can construct a new csp $(I', V', \Gamma')$ involving the remaining variables $I' = I - \{i\}$, where the new set $V'$ need not mention the possible values $V_i$ for $i$, and where $\Gamma'$ is generated from $\Gamma$ by modifying the constraints to indicate that $i$ has been assigned the value $v$. We also make the following definition:

Definition 7.1 Given a csp, suppose that $i$ is a variable that has two possible values $u$ and $v$. We will say that $v$ is stricter than $u$ if every constraint in the csp induced by assigning $u$ to $i$ is also a constraint in the csp induced by assigning $i$ the value $v$.

The point, of course, is that if $v$ is stricter than $u$, there is no point in trying a solution involving $v$ once $u$ has been eliminated. After all, finding such a solution would involve satisfying all of the constraints in the $v$ restriction; these are a superset of those in the $u$ restriction, and we were unable to satisfy the constraints in the $u$ restriction originally.

The example with which we began this section now generalizes to the following:

Proposition 7.2 Suppose that a csp involves a set $S$ of variables, and that we have a partial solution that assigns values to the variables in some subset $P \subseteq S$. Suppose further that if we extend this partial solution by assigning the value $u$ to a variable $i \notin P$, there is no further extension to a solution of the entire csp. Now consider the csp involving the variables in $S - P$ that is induced by the choices of values for variables in $P$. If $v$ is stricter than $u$ as a choice of value for $i$ in this problem, the original csp has no solution that both assigns $v$ to $i$ and extends the given partial solution on $P$.
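One way to operationalize the strictness test of Definition 7.1 is to compare, constraint by constraint, the value combinations left to the other variables; the sketch below is an assumption about how one might implement the test (it is not the paper's code), for constraints represented as (scope, allowed-tuples) pairs.

```python
def stricter(constraints, i, v, u):
    """True if assigning i = v leaves the other variables no more freedom than
    assigning i = u does, for every constraint that mentions i."""
    for scope, allowed in constraints:
        if i not in scope:
            continue                                 # unaffected by the choice for i
        pos = scope.index(i)
        allowed_v = {t[:pos] + t[pos + 1:] for t in allowed if t[pos] == v}
        allowed_u = {t[:pos] + t[pos + 1:] for t in allowed if t[pos] == u}
        if not allowed_v <= allowed_u:               # v must permit no more than u
            return False
    return True
```

With such a test in hand, Proposition 7.2 licenses skipping any value $v$ found to be stricter than an already-eliminated value $u$, measured against the relevant constraints of the induced subproblem.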

This proposition isn't quite enough: in the earlier example, a later time $t'$ for $A$ will not be stricter than $t$ if there is any task that needs to be scheduled before $A$ is. We need to record the fact that $B$, which is no longer assigned a value, is the source of the difficulty. To do this, we need to augment the dependency information with which we are working.

More precisely, when we say that a set of variables $\{x_i\}$ eliminates a value $v$ for a variable $x$, we mean that our search to date has allowed us to conclude that
$$x_1 = v_1 \wedge \cdots \wedge x_k = v_k \rightarrow x \neq v \qquad (4)$$
where the $v_i$ are the current choices for the $x_i$. We can obviously rewrite this as
$$x_1 = v_1 \wedge \cdots \wedge x_k = v_k \wedge x = v \rightarrow F \qquad (5)$$
where $F$ indicates that the csp in question has no solution.

Let's be more specific still, indicating exactly which csp has no solution:
$$x_1 = v_1 \wedge \cdots \wedge x_k = v_k \wedge x = v \rightarrow F(I) \qquad (6)$$
where $I$ is the set of variables in the complete csp.

Now we can address the example with which we began this section: the csp that is known to fail in an expression such as (6) is not the entire problem, but only a subset of it. In the example we are considering, the subproblem involves only the two tasks $A$ and $B$. In general, we can augment our nogoods to include information about the subproblems on which they fail, and then measure strictness with respect to these restricted subproblems only. In our example, this will indeed allow us to eliminate $t'$ from consideration as a possible time for $A$.

The additional information stored with the nogoods doubles their size (we have to store a second subset of the variables in the csp), and the variable sets involved can be manipulated easily as the search proceeds. The cost involved in employing this technique is therefore that of the strictness computation. This may be substantial given the data structures currently used to represent csps (which typically support the need to check if a constraint has been violated, but little more), but it seems likely that compile-time modifications to these data structures can be used to make the strictness question easier to answer. In scheduling problems, preliminary experimental work shows that the idea is an important one; here, too, there is much to be done.

The basic lesson of dynamic backtracking is that by retaining only those nogoods that are still relevant given the partial solution with which we are working, the storage difficulties encountered by full dependency-directed methods can be alleviated. This is what makes all of the ideas we have proposed possible: erasing values, selecting alternate backtrack points, and dependency pruning. There are surely many other effective uses for a practical dependency-maintenance system as well.

Acknowledgements

This work has been supported by the Air Force Office of Scientific Research and by DARPA/Rome Labs. I would like to thank Rina Dechter, Mark Fox, Don Geddis, Will Harvey, Vipin Kumar, Scott Roy and Narinder Singh for helpful comments on these ideas. Ari Jonsson and David McAllester provided me invaluable assistance with the experimentation and proofs, respectively.

Appendix A. Proofs

Lemma Let be a complete elimination mechanism for a csp let P be a partial solution

to this csp and let i P Now if P can be successful ly extended to a complete solution after

b

assigning i the value v then v P i

Pro of Supp ose otherwise so that v E P i It follows directly from the completeness

of that

E P P

a contradiction

Lemma At any point in the execution of Algorithm if the last element of the partial

solution P assigns a value to the variable i then the unexplored siblings of the current node

are those that assign to i the values in V E

i i

Pro of We rst note that when we decide to assign a value to a new variable i in step

b

of the algorithm we take E P i so that V E is the set of allowed values for this

i i i

variable The lemma therefore holds in this case The fact that it continues to hold through

each rep etition of the lo op in steps and is now a simple induction at each p oint we

add to E the no de that has just failed as a p ossible value to b e assigned to i

i

Proposition 2.7 Algorithm 2.5 is equivalent to depth-first search, and therefore complete.

Proof. This is an easy consequence of the lemma; partial solutions correspond to nodes in the search space.

Lemma 3.2 Let $P$ be a partial solution obtained during the execution of Algorithm 3.1, and let $i \in \bar P$ be a variable assigned a value by $P$. Now if $P' \subseteq P$ can be successfully extended to a complete solution after assigning $i$ the value $v$, but $(v, E) \in E_i$ for some explanation $E$, we must have
$$E \cap (\bar P - \bar{P'}) \neq \emptyset.$$

Proof. As in the proof of Lemma 2.6, we show that no step of Algorithm 3.1 can cause the lemma to become false.

That the lemma holds after step 2, where the search is extended to consider a new variable, is an immediate consequence of the assumption that the elimination mechanism is complete.

In step 4, when we add $(v_j, E - \{j\})$ to the set of eliminating explanations for $j$, we are simply recording the fact that the search for a solution with $j$ set to $v_j$ failed because we were unable to extend the solution to $i$. It is a consequence of the inductive hypothesis that as long as no variable in $E - \{j\}$ changes value, this conclusion will remain valid.

Proposition 3.4 Backjumping is complete and always expands fewer nodes than does depth-first search.

Proof. That fewer nodes are examined is clear; for completeness, it follows from Lemma 3.2 that the backtrack to some element of $E$ in step 5 will always be necessary if a solution is to be found.

Proposition 3.5 The amount of space needed by backjumping is $o(iv)$, where $i = |I|$ is the number of variables in the problem and $v$ is the number of values for that variable with the largest value set $V_i$.

Proof. The amount of space needed is dominated by the storage requirements of the elimination sets $E_j$; there are $i$ of these. Each one might refer to each of the possible values for a particular variable $j$; the space needed to store the reason that a value of $j$ is eliminated is at most $|I|$, since the reason is simply a list of variables that have been assigned values. There will never be two eliminating explanations for any single value, since $\varepsilon$ is concise and we never rebind a variable to a value that has been eliminated.

Theorem 4.2 Dynamic backtracking always terminates and is complete. It continues to satisfy Proposition 3.5, and can be expected to expand fewer nodes than backjumping, provided that the goal nodes are distributed randomly in the search space.

Proof. There are four things we need to show: that dynamic backtracking needs $o(iv)$ space, that it is complete, that it can be expected to expand fewer nodes than backjumping, and that it terminates. We prove things in this order.

Space. This is clear; the amount of space needed continues to be bounded by the structure of the eliminating explanations.

Completeness. This is also clear, since by Lemma 3.2, all of the eliminating explanations retained in the algorithm are obviously still valid. The new explanations added in step 5 are also obviously correct, since they indicate that $j$ cannot take the value $v_j$ (as in backjumping), and that $j$ also cannot take any values that are eliminated by the variables being backjumped over.

Efficiency. To see that we expect to expand fewer nodes, suppose that the subproblem involving only the variables being jumped over has $s$ solutions in total, one of which is given by the existing variable assignments. Assuming that the solutions are distributed randomly in the search space, there is at least a $1/s$ chance that this particular solution leads to a solution of the entire csp; if so, the reordered search, which considers this solution earlier than the others, will save the expense of either assigning new values to these variables or repeating the search that led to the existing choices. The reordered search will also benefit from the information in the nogoods that have been retained for the variables being jumped over.

Termination. This is the most difficult part of the proof.

As we work through the algorithm, we will be generating and then discarding a variety of eliminating explanations. Suppose that $e$ is such an explanation, saying that $j$ cannot take the value $v_j$ because of the values currently taken by the variables in some set $e_V$. We will denote the variables in $e_V$ by $x_1, \ldots, x_k$ and their current values by $v_1, \ldots, v_k$. In declarative terms, the eliminating explanation is telling us that
$$x_1 = v_1 \wedge \cdots \wedge x_k = v_k \rightarrow j \neq v_j. \qquad (7)$$

Dependency-directed backtracking would have us accumulate all of these nogoods; dynamic backtracking allows us to drop any particular instance of (7) for which the antecedent is no longer valid.

The reason that dependency-directed backtracking is guaranteed to terminate is that the set of accumulated nogoods eliminates a monotonically increasing amount of the search space. Each nogood eliminates a new section of the search space, because the nature of the search process is such that any node examined is consistent with the nogoods that have been accumulated thus far; the process is monotonic because all nogoods are retained throughout the search. These arguments cannot be applied to dynamic backtracking, since nogoods are forgotten as the search proceeds. But we can make an analogous argument.

To do this, suppose that when we discover a nogood like (7), we record with it all of the variables that precede the variable $j$ in the partial order, together with the values currently assigned to these variables. Thus an eliminating explanation becomes essentially a nogood $n$ of the form (7), together with a set $S$ of variable-value pairs.

We now define a mapping $(n, S) \mapsto n_S$ that changes the antecedent of (7) to include assumptions about all the variables bound in $S$, so that if $S = \{(s_i, v_i)\}$,
$$n_S: \quad s_1 = v_1 \wedge \cdots \wedge s_l = v_l \rightarrow j \neq v_j. \qquad (8)$$
At any point in the execution of the algorithm, we denote by $N$ the conjunction of the modified nogoods of the form (8).

We now make the following claims:

1. For any eliminating explanation $(n, S)$, $n \models n_S$, so that $n_S$ is valid for the problem at hand.

2. For any new eliminating explanation $(n, S)$, $n_S$ is not a consequence of $N$.

3. The deductive consequences of $N$ grow monotonically as the dynamic backtracking algorithm proceeds.

The theorem will follow from these three observations, since we will know that $N$ is a valid set of conclusions for our search problem, and that we are once again making monotonic progress toward eliminating the entire search space and concluding that the problem is unsolvable.

That $n_S$ is a consequence of $(n, S)$ is clear, since the modification used to obtain (8) from (7) involves strengthening the antecedent of (7). It is also clear that $n_S$ is not a consequence of the nogoods already obtained, since we have added to the antecedent only conditions that hold for the node of the search space currently under examination; if $n_S$ were a consequence of the nogoods we had obtained thus far, this node would not be being considered.

The last observation depends on the following lemma:

Lemma A.1 Suppose that $x$ is a variable assigned a value by our partial solution, and that $x$ appears in the antecedent of the nogood $n$ in the pair $(n, S)$. Then if $S'$ is the set of variables assigned values no later than $x$, $S' \subseteq S$.

Proof. Consider a $y \in S'$ and suppose that it were not in $S$. We cannot have $y = x$, since $y$ would then be mentioned in the nogood $n$ and therefore in $S$. So we can suppose that $y$ is actually assigned a value earlier than $x$ is. Now when $(n, S)$ was added to the set of eliminating explanations, it must have been the case that $x$ was assigned a value (since it appears in the antecedent of $n$) but that $y$ was not. But we also know that there was a later time when $y$ was assigned a value but $x$ was not, since $y$ precedes $x$ in the current partial solution. This means that $x$ must have changed value at some point after $(n, S)$ was added to the set of eliminating explanations, but $(n, S)$ would have been deleted when this happened. This contradiction completes the proof.

Returning to the proof of the theorem, suppose that we eventually drop $(n, S)$ from our collection of nogoods, and that when we do so, the new nogood being added is $(n', S')$. It follows from the lemma that $S' \subseteq S$. Since $x_i = v_i$ is a clause in the antecedent of $n_S$, where $x_i$ is the variable whose value $v_i$ is being erased, it follows that $n'_{S'}$ will imply the negation of the antecedent of $n_S$, and will therefore imply $n_S$ itself. Although we drop $n_S$ when we drop the nogood $(n, S)$, $n_S$ continues to be entailed by the modified set $N$, the consequences of which are seen to be growing monotonically.

References

Bruynooghe, M. Solving combinatorial search problems by intelligent backtracking. Information Processing Letters.

de Kleer, J. An assumption-based truth maintenance system. Artificial Intelligence.

Dechter, R., & Meiri, I. Experimental evaluation of preprocessing techniques in constraint satisfaction problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence.

Gaschnig, J. Performance measurement and analysis of certain search algorithms. Tech. rep., Carnegie-Mellon University.

Ginsberg, M. L., Frank, M., Halpin, M. P., & Torrance, M. C. Search lessons learned from crossword puzzles. In Proceedings of the Eighth National Conference on Artificial Intelligence.

Ginsberg, M. L., & Harvey, W. D. Iterative broadening. Artificial Intelligence.

Jonsson, A. K., & Ginsberg, M. L. Experimenting with new systematic and nonsystematic search techniques. In Proceedings of the AAAI Spring Symposium on AI and NP-Hard Problems, Stanford, California.

McAllester, D. A. Partial order backtracking. Journal of Artificial Intelligence Research. Submitted.

Minton, S., Johnston, M. D., Philips, A. B., & Laird, P. Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method. In Proceedings of the Eighth National Conference on Artificial Intelligence.

Purdom, P., Brown, C., & Robertson, E. Backtracking with multi-level dynamic search rearrangement. Acta Informatica.

Seidel, R. A new method for solving constraint satisfaction problems. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence.

Selman, B., Levesque, H., & Mitchell, D. A new method for solving hard satisfiability problems. In Proceedings of the Tenth National Conference on Artificial Intelligence.

Smith, D. E., & Genesereth, M. R. Ordering conjunctive queries. Artificial Intelligence.

Stallman, R. M., & Sussman, G. J. Forward reasoning and dependency-directed backtracking in a system for computer-aided circuit analysis. Artificial Intelligence.

Zabih, R. Some applications of graph bandwidth to constraint satisfaction problems. In Proceedings of the Eighth National Conference on Artificial Intelligence.

Zabih, R., & McAllester, D. A. A rearrangement search strategy for determining propositional satisfiability. In Proceedings of the Seventh National Conference on Artificial Intelligence.