Toward a Model for Backtracking and Dynamic Programming

Michael Alekhnovich*  Allan Borodin†  Joshua Buresh-Oppenheim‡  Russell Impagliazzo‡§¶  Avner Magen†¶  Toniann Pitassi†

Abstract

We consider a model (BT) for backtracking algorithms. Our model generalizes both the priority model of Borodin, Nielsen and Rackoff, as well as a simple dynamic programming model due to Woeginger, and hence spans a wide spectrum of algorithms. After witnessing the strength of the model, we then show its limitations by providing lower bounds for algorithms in this model for several classical problems such as interval scheduling, knapsack and satisfiability.

1. Introduction

Proving unconditional lower bounds for computing explicit functions remains one of the most challenging problems in computational complexity. Since 1949, when Shannon showed that a random function has large circuit complexity [29], little progress has been made toward proving lower bounds for the size of unrestricted Boolean circuits that compute explicit functions. One explanation for this phenomenon was given by the Natural Proofs approach of Razborov and Rudich [27], who showed that most of the existing lower bound techniques are incapable of proving such lower bounds. One way to investigate the complexity of explicit functions in spite of these difficulties is to study reductions between problems, e.g. to identify a canonical problem (like an NP-complete problem) and show that a given computational task is "not easier" than solving this problem. Another approach is to restrict the model so as to avoid the inherent Natural Proofs limitations, while preserving a model strong enough to incorporate many "natural" algorithms.

This second direction has recently attracted the attention of many researchers. Khanna, Motwani, Sudan and Vazirani [25] formalize various types of local search paradigms, and in doing so, provide a more precise understanding of local search algorithms. Woeginger [30] defines a class of simple dynamic programming algorithms and provides conditions under which a dynamic programming solution can be used to derive an FPTAS for an optimization problem. Borodin, Nielsen and Rackoff [11] introduce priority algorithms as a model of greedy-like algorithms. Arora, Bollobás and Lovász [8] study wide classes of LP formulations, and prove integrality gaps for vertex cover within these classes. The most popular methods for solving SAT are DPLL algorithms, a family of backtracking algorithms whose complexity has been characterized in terms of resolution proof complexity (see for example [16, 17, 14, 21]). Finally, Chvátal [13] proves a lower bound for Knapsack in an algorithmic model that involves elements of branch-and-bound and dynamic programming.

We continue in this direction by presenting a hierarchy of models for backtracking algorithms for general search and optimization problems. Many well-known algorithms and algorithmic techniques can be simulated within these models, both those that are usually considered backtracking and some that would normally be classified as greedy or dynamic programming. We prove several upper and lower bounds on the capabilities of algorithms in this model, in some cases proving that the known algorithms are essentially the best possible within the model.

The starting point for the BT model is the priority algorithm model [11]. We assume that the input is represented as a set of data items, where each data item is a small piece of information about the problem; it may be a time interval representing a job to be scheduled, a vertex with its list of

the neighbours in a graph, or a propositional variable with all clauses containing it in a CNF. Priority algorithms consider one item at a time and maintain a single partial solution (based on the items considered thus far) that they continue to extend. What is the order in which items are considered? A fixed order orders the items up front according to some criterion (e.g., in the case of knapsack, sort the items by their weight-to-value ratio). A more general (adaptive order) approach changes the ordering according to the items seen so far. For example, in the greedy set cover algorithm, in every iteration we order the sets according to the number of yet uncovered elements they contain (the distinction between fixed and adaptive orderings has also been recently studied in [19]). Rather than imposing complexity constraints on the allowable orders, we simply require them to be oblivious to the actual content of those input items not yet considered. By introducing branching, a backtracking algorithm (BT) can pursue a number of different partial solutions. Given a specific input, a BT algorithm then induces a computation tree. Of course, it is possible to solve any properly formulated search or optimization problem in this manner: simply branch on every possible decision for every input item. In other words, there is a tradeoff between the quality of a solution and the complexity of the BT algorithm. We view the maximum width of a BT program as the number of partial solutions that need to be maintained in parallel in the worst case. As we will see, this extension allows us to model the simple dynamic programming framework of Woeginger [30]. This branching extension can be applied to either the fixed or adaptive order (fixed BT and adaptive BT), and in either case each branch (corresponding to a partial solution) considers the items in the same order. For example, various DP-based optimal and approximate algorithms for the knapsack problem can be seen as fixed or adaptive BT algorithms. In order to model the power of backtracking programs (say, as in DPLL algorithms for SAT), we need to extend the model further.¹ In fully adaptive BT we allow each branch to choose its own ordering of input items. Furthermore, we need to allow algorithms to prioritize (using a depth first traversal of the induced computation tree) the order in which the different partial solutions are pursued. In this setting, we can consider the number of nodes traversed in the computation tree before a solution is found (which may be smaller than the tree's width).

Our results. The problems we consider are all well-studied; namely, interval scheduling, knapsack, and satisfiability.² For m-machine interval scheduling we show a tight Ω(n^m)-width lower bound (for optimality) in the adaptive BT model, an inapproximability result in the fixed BT model, and an approximability separation between width-1 BT and width-2 BT in the adaptive model. For knapsack, we show an exponential lower bound (for optimality) on the width of adaptive BT algorithms, and for achieving an FPTAS in the adaptive model we show upper and lower bounds polynomial in 1/ε. For SAT, we show that 2-SAT is solved by a linear-width adaptive BT but needs exponential width for any fixed-order BT, and also that MAX2SAT cannot be efficiently approximated by any fixed BT algorithm. We then show that 3-SAT requires exponential width and exponential depth first size in the fully adaptive BT model. This lower bound in turn gives us an exponential bound on the width of fully adaptive BT algorithms for knapsack by "BT reduction" (but gives no inapproximation result). Wherever proofs are omitted, please see the full version of the paper [3].

2. The Backtracking Model and its Relatives

Let D be an arbitrary data domain that contains objects called data items. Let H be a set representing the set of allowable decisions for a data item. For example, for the knapsack problem, a natural choice for D would be the set of all pairs (x, p) where x is a weight and p is a profit; the natural choice for H is {0, 1}, where 0 is the decision to reject an item and 1 is the decision to accept an item.

A backtracking search/optimization problem P is specified by a pair (D, f_P), where D is the underlying data domain and f_P is a family of objective functions f_P^n(D_1, ..., D_n, a_1, ..., a_n) → ℝ, where D_1, ..., D_n is a set of variables that range over D, and a_1, ..., a_n is a set of variables that range over H. On input I = D_1, ..., D_n ∈ D^n, the goal is to assign each a_i a value in H so as to maximize (or minimize) f_P^n. A search problem is the special case where f_P^n outputs either 1 or 0.

For any domain S we write O(S) for the set of all orderings of the elements of S. We are now ready to define a backtracking algorithm for a backtracking problem.

Definition 1. A backtracking algorithm A for problem P = (D, f_P) consists of the ordering functions

    r_k : D^k × H^k → O(D)

and the choice functions

    c_k : D^{k+1} × H^k → O(H ∪ {⊥}),

where 0 ≤ k ≤ n − 1.

We separate the following three classes of BT algorithms:

• Fixed algorithms: r_k does not depend upon any of its arguments.

• Adaptive algorithms: r_k depends on D_1, ..., D_k but not on a_1, ..., a_k.

• Fully adaptive algorithms: r_k depends on both D_1, ..., D_k and a_1, ..., a_k.

* Institute for Advanced Study, Princeton, US. Supported by CCR grant CCR-0324906.
† Department of Computer Science, University of Toronto, bor,avner,[email protected].
‡ CSE Department, University of California San Diego, bureshop,[email protected].
§ Research partially supported by NSF Award CCR-0098197.
¶ Some of this research was performed at the Institute for Advanced Study in Princeton, NJ, supported by the State of New Jersey.
¹ The BT model encompasses DPLL in many situations where access to the input is limited. If access is unlimited, then proving superpolynomial lower bounds for DPLL amounts to proving P ≠ NP.
² All of our lower bound results will apply to non-uniform BT algorithms that know n, the number of input items; hence, more formally, the ordering and choice functions should be denoted r_k^n and c_k^n. A discussion regarding "precomputed" information can be found in [11].

Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity (CCC'05) 1093-0159/05 $20.00 © 2005 IEEE
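To make the ordering and choice functions of Definition 1 concrete, the following sketch (ours, not from the paper; the names `run_bt`, `fixed_order` and `branch_both` are hypothetical) expands a computation level by level, instantiated for knapsack with data items (weight, profit) and decisions H = {0, 1}. Branching on both decisions for every item realizes the trivial full-width BT; a priority (width-1) algorithm would instead return a single decision from the choice function.

```python
def run_bt(items, order_fn, choice_fn):
    """Expand a BT computation tree level by level.

    order_fn(seen, decisions, remaining) -> next data item to reveal
    choice_fn(seen, item, decisions)     -> tuple of decisions in H to
        branch on (an empty tuple aborts the branch, i.e. the symbol ⊥)
    Returns (leaves, width), where width is the maximum number of
    partial solutions alive at any depth.  Items are assumed distinct.
    """
    level = [((), ())]                 # the root: one empty partial solution
    width = 1
    for _ in range(len(items)):
        nxt = []
        for seen, decs in level:
            remaining = [it for it in items if it not in seen]
            item = order_fn(seen, decs, remaining)
            for c in choice_fn(seen, item, decs):
                nxt.append((seen + (item,), decs + (c,)))
        level = nxt
        width = max(width, len(level))
    return level, width

# Knapsack instantiation: D = (weight, profit) pairs, H = {0, 1}.
def fixed_order(seen, decs, remaining):
    # Equivalent to committing up front to the profit-to-weight-sorted
    # order, so this is a "fixed" ordering in the sense of Definition 1.
    return max(remaining, key=lambda it: it[1] / it[0])

def branch_both(seen, item, decs):
    return (1, 0)                      # explore both accept and reject

items = [(3, 5), (4, 6), (2, 3)]
capacity = 6
leaves, width = run_bt(items, fixed_order, branch_both)
best = max(sum(p for (w, p), d in zip(s, ds) if d)
           for s, ds in leaves
           if sum(w for (w, p), d in zip(s, ds) if d) <= capacity)
```

On this toy instance the full branching rule yields width 2^3 = 8, illustrating the tradeoff discussed in the introduction: exhaustive branching always solves the problem, but at maximal width.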

The idea of the above specification of A is as follows. Initially, at the root, the set of actual data items is some unknown set I of items from D. At each node of the tree, a subset of actual data items, D_1, ..., D_k ⊆ I, has been revealed, and decisions a_1, ..., a_k have been made about each of these items in turn. At the next step, the backtracking algorithm (possibly) re-orders the set of all possible data items (not just I) using r_k. Then, as long as there are still items from I left to be discovered, the next item (according to this ordering) is revealed. When this new item, D_{k+1} ∈ I, has been revealed, a set of possibilities is explored on this item, as specified by c_k. Namely, the algorithm can try any subset of choices from H on this new data item, including the choice to abort (⊥). This is described more formally by the notion of a computation tree T_A(I) of program A on input I, as defined below. We say that a rooted tree is oriented if it has an ordering on its leaves from left to right.

Definition 2. Assume that P is a BT problem and A is a BT algorithm for P. For any instance I = (D_1, ..., D_n) ∈ D^n, we define the computation tree T_A(I) as an oriented rooted tree in the following recursive way.

• Each node v of depth k in the tree is labelled by a tuple (D_1^v, ..., D_k^v, a_1^v, ..., a_k^v).

• The root node has the empty label.

• For every node v of depth k < n with a label (D_1^v, ..., D_k^v, a_1^v, ..., a_k^v), let D_{k+1}^v be the data item in I \ {D_1^v, ..., D_k^v} that goes first in the list r_k(D_1^v, ..., D_k^v, a_1^v, ..., a_k^v). Assume that the output of c_k(D_1^v, ..., D_{k+1}^v, a_1^v, ..., a_k^v) has the form (c_1, ..., c_d, ⊥, ...), where c_i ∈ H. If d = 0 then v has no children. Otherwise it has d child nodes v_1, ..., v_d that go from left to right and have labels (D_1^v, ..., D_{k+1}^v, a_1^v, ..., a_k^v, c_1), ..., (D_1^v, ..., D_{k+1}^v, a_1^v, ..., a_k^v, c_d), respectively.

Each leaf node t of depth n contains a permuted sequence of the data items I (permuted by the ordering functions r_k used on the path ending at t) with the corresponding decisions in H (determined by the choice functions on this path). For a search problem we say that a leaf t is a solution for I iff f_P(D_1^t, ..., D_n^t, a_1^t, ..., a_n^t) = 1, where a_i^t is the decision for D_i^t. For an optimization problem, every leaf determines a solution and a value for the objective function on the instance I. (We can define the semantics so that the value of the objective function is 0 for a maximization problem and ∞ for a minimization problem if the solution is not feasible.)

Definition 3. We say that A is a correct algorithm for a BT search problem P iff for any YES instance I, T_A(I) contains at least one solution. For an optimization problem, the value A(I) is the value of the leaf that optimally or best approximates the value of the objective function on the instance I.

• For an algorithm A we define the width of the computation W_A(I) as the maximum, over all depth levels of T_A(I), of the number of nodes at that level.

• We define the depth first search size S_A(I) as the number of tree nodes that lie to the left of the leftmost solution of T_A(I).

Proposition 4. For any A and I, S_A(I) ≤ n · W_A(I).

Definition 5. For any A and any n, define W_A(n), the width of A on instances of size n, as max_{|I|=n} W_A(I). Define S_A(n) analogously.

The size S_A(I) corresponds to the running time of the depth first search on T_A(I). We will be mainly interested in the width of T_A(I) for two reasons. First, it has a natural combinatorial meaning: the maximum number of partial solutions that we maintain in parallel during the execution. Second, it gives a universal upper bound on the running time of any search style.

While the Fixed and Adaptive classes are ostensibly less powerful than Fully Adaptive algorithms, they remain quite powerful. For example, the width-1 algorithms in these classes are precisely the fixed and adaptive priority algorithms, respectively, which capture many well known greedy algorithms. In addition, we will see that they can simulate large classes of dynamic programming algorithms; for example, Fixed BT algorithms can simulate Woeginger's DP-simple algorithms [30].

The reader may notice some similarities between the BT model and the online setting. Like online algorithms, the input is not known to the algorithm in advance, but is viewed as an input stream. However, there are two notable differences: first, the ordering is given here by the algorithm and not by the adversary, and second, BT algorithms are allowed to branch, or try more than one possibility.³

A note on computational complexity: We do not impose any computational restrictions on the functions r_k and c_k, such as computability in polynomial time. This is because all lower bounds in this model come from information theoretic constraints and hold for any (even non-computable) r_k and c_k. However, if these functions are polytime computable, then there exists an efficient algorithm that solves the problem in time S_A(I) · n^{O(1)}. In particular, all upper bounds presented in this paper correspond to algorithms which are efficiently computable. Another curious aspect of the model is that one has to choose the input representation carefully in order to limit the information in

³ Recently, a version of the online model in which many partial solutions may be constructed was studied by Halldorsson et al. [22]. Their online model is a special case of a fixed order BT algorithm.

each data item, because once a BT algorithm has seen all of the input (or can infer it), it can immediately solve the problem. Hence, we should emphasize that there are unreasonable input models that will render the model useless; for example, if each item is a node in a graph together with a list of its neighbours and their neighbours, then it contains enough information that the ordering function can summon the largest clique as its first items, making an NP-hard problem solvable by a width-1 BT algorithm. In our discussion, we use input representations which seem to us the most natural.

2.1. BT as an Extension of Dynamic Programming and other Algorithm Models

How does our model compare to other models? As noted above, the width-1 BT algorithms are exactly the priority algorithms, so many greedy algorithms fit within the framework. Examples include Kruskal's and Prim's algorithms for minimum spanning tree, Dijkstra's shortest path algorithm, and Johnson's greedy 2-approximation for vertex cover.

Secondly, and this is also one of the main motivations of this work, the fixed-order model captures an important class of dynamic programming algorithms defined by Woeginger [30] as simple dynamic-programming or DP-simple. Many algorithms we call "DP algorithms" follow the schema formalized by Woeginger: given an ordering of the input items, in the i-th phase the algorithm considers the i-th input item D_i and produces a set S_i of solutions to the problem with input D_1, ..., D_i. Every solution in S_i must extend a solution in S_{i−1}. Knapsack (with small integer input parameters) and interval scheduling with m machines are two well studied problems which have efficient DP-simple algorithms.

In the simulation of these algorithms by a fixed-order BT algorithm, S_i corresponds to the i-th level of the BT tree. Indeed, each node in level i is a partial solution involving the first i items and, clearly, every node in level i + 1 represents a partial solution that extends one from level i. The only subtle point is that DP-simple algorithms get to look at all of S_i and decide which partial solutions to extend, whereas, when a branch of the BT tree decides how to extend itself, it sees only its own history and does not communicate with the other branches. However, since all parallel branches of a fixed (or adaptive) BT algorithm view the same input, and the computational power of the functions is unlimited, each branch can simulate all other branches. For concreteness, we consider the simulation of the DP algorithm that solves interval scheduling on one machine. Recall, this algorithm orders intervals by their ending time (earliest first). It then calculates T_i, the set of intervals among the first i which gives maximal profit and which schedules the i-th interval; of course T_i extends T_j for some j < i. We can now think of a BT algorithm which in the i-th level has partial solutions corresponding to T_0, T_1, ..., T_i. To calculate the partial solutions for the first i + 1 intervals, we take T_{i+1}, extending one of the T_j's, and also take T_0, T_1, ..., T_i so as to extend the corresponding partial solutions with a 'reject' decision on the (i + 1)-st interval.

Note that for most dynamic programming algorithms, the number of solutions maintained is determined by an array where each axis has length at most n. Thus, the size of S_i typically grows as some polynomial n^d. In this case, we call d the dimension of the algorithm. Note that we have d ≥ log W(n) / log n, so a lower bound on width yields a lower bound on this dimension.

While powerful, there are also some restrictions of the model that seem to indicate that we cannot simulate all (intuitively understood as) backtracking or branch-and-bound algorithms. That is, our decision to abort a run can depend only on the partial instance, whereas many branch-and-bound methods use a global pruning criterion such as the value of an LP relaxation. These types of algorithms are incomparable with our model. Since locality is the only restriction we put on computation, it seems difficult to come up with a meaningful extension of the model that would include branch-and-bound and not become trivially powerful.

2.2. General Lower bound strategy

Since most of our lower bounds are for the fixed and adaptive models, we present a general framework for achieving these lower bounds. The fully adaptive lower bound for SAT is more specialized.

Below is a 2-player game for proving these lower bounds for adaptive BT. This is similar to the lower bound techniques for priority algorithms from [11, 18]. The main difference is that there is a set of partial solutions rather than a single partial solution.

The game is between the Solver and the Adversary. Initially, the Adversary presents to the algorithm some finite set of possible input items, P_0; the partial instance PI_0 is empty, and T_0 is the set consisting of the null partial solution. The game consists of a series of phases. At any phase i, there is a set of possible data items P_i, a partial instance PI_i and a set T_i of partial solutions for PI_i. In phase i, i ≥ 1, the Solver picks any data item a ∈ P_{i−1}, adds a to PI_{i−1} to obtain PI_i, and chooses a set T_i of partial solutions, each of which must extend a solution in T_{i−1}. The Adversary then removes a and possibly some further items to obtain P_i.

This continues until P_i is empty. The Solver wins if PI_n is not a valid instance, or if |T_i| ≤ w for all 1 ≤ i ≤ n and T_n contains a valid solution, optimal solution, or approximately optimal solution for PI_n (if we are trying for a search algorithm, exact optimization algorithm, or approximation algorithm, respectively). Otherwise, the Adversary wins.
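As a concrete instance of the DP-simple schema of Section 2.1 (our illustration, not code from the paper; the name `interval_scheduling` is hypothetical), the one-machine weighted interval scheduling algorithm orders intervals by finishing time and, in phase i, extends earlier partial solutions either by rejecting interval i or by accepting it on top of the best compatible earlier solution T_j:

```python
import bisect

def interval_scheduling(intervals):
    """Maximum-profit disjoint selection on one machine, DP-simple style.

    intervals: iterable of (start, end, profit) triples.
    best[i] holds the optimum over the first i intervals in end-time order,
    so phase i only extends solutions produced by earlier phases.
    """
    ivs = sorted(intervals, key=lambda t: t[1])   # order by finishing time
    ends = [e for _, e, _ in ivs]
    best = [0] * (len(ivs) + 1)
    for i, (s, e, p) in enumerate(ivs, start=1):
        # number of earlier intervals ending by time s (all compatible)
        j = bisect.bisect_right(ends, s, 0, i - 1)
        best[i] = max(best[i - 1],                # reject interval i
                      p + best[j])                # accept: extend best[j]
    return best[-1]
```

Keeping only the values best[0..n] (rather than the full solution sets S_i) makes this the one-dimensional case of the array dimension discussed above; the m-machine table T[i_1, ..., i_m] generalizes it to dimension m.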

Any BT algorithm of width w gives a strategy for the Solver in the above game. Thus, a strategy for the Adversary gives a lower bound on BT algorithms.

Our Adversary strategies will usually have the following form. The number of rounds, n, will be fixed in advance. The Adversary will arrange that, for many partial solutions p ∈ T_i, there is an extension of PI_i to an instance such that all valid/optimal/approximately optimal solutions to that instance are extensions of p. We'll call such a partial solution p indispensable, since if p ∉ T_i, the Adversary can set the remaining input to that extension, ensuring victory. Since all such partial solutions are indispensable, either the above strategy works, or the Solver keeps many partial solutions in T_i.

For the fixed BT game, the Solver must be committed, throughout the entire game, to an order that is chosen up front. This sometimes (see also [10], Theorem 3) calls for the following lower bound approach. The Adversary supplies an initial input set and claims that, regardless of the ordering of elements in that set, some combinatorial property must hold for a subset (that depends on the ordering) of the initial set. This is analogous to the Ramsey phenomenon in graphs; i.e., in every colouring we can say something (e.g. the existence of a clique/anticlique of a certain size) about one of the colour classes. Restricted to such a (now ordered) subset, the game proceeds with the guarantee of that property. Such an approach is obviously not applicable to adaptive or fully-adaptive algorithms.

3. Interval scheduling

Interval selection is the classical problem of selecting, among a set of intervals associated with profits, a subset of pairwise disjoint intervals so as to maximize their total profit. This can be thought of as scheduling a set of jobs with time-intervals on one machine. When there is more than one machine, the task is to schedule jobs to machines so that the jobs scheduled on any particular machine are disjoint; here too, the goal is to maximize the overall profit of the scheduled jobs.

When all the profits are the same, a straight-forward greedy algorithm (in the sense of [11]) solves the problem. For arbitrary profits the problem is solvable by a simple dynamic programming algorithm of dimension m, and hence runtime O(n^m). The way to do this is to order intervals in increasing order of their finishing points, and then compute an m-dimensional table where T[i_1, i_2, ..., i_m] is the maximum profit possible when no intervals later (in the ordering) than i_j are scheduled on machine j; it is not hard to see that entries in this table can be computed by a simple function of the previous entries.

As mentioned earlier, such an algorithm gives rise to an O(n^m)-width, fixed-order BT algorithm. A completely different approach that uses flows achieves a running time of O(n² log n) ([7]). An obvious question, then, is whether Dynamic Programming, which might seem like the natural approach, is really inferior to other approaches. Perhaps there is a more sophisticated way to get a Dynamic Programming algorithm that achieves a running time which is at least as good as the flow algorithm. In this section we prove that there is no better simple Dynamic Programming algorithm than the obvious one, and, however elegant, the DP approach is inferior here (for m ≥ 3).

It has been shown in [11] that no constant approximation ratio is achievable by priority algorithms. Our main result in this section is to show that any adaptive BT, even for the special case of proportional profit interval scheduling, requires width Ω(n^m); thus in particular any simple-DP algorithm requires at least m dimensions.

We first prove an inapproximability result for the more limited variant of BT, i.e. the fixed model.

Theorem 6. A fixed-ordering BT of width (k − 1)/2 for interval scheduling with proportional profit on m machines and n intervals cannot achieve a better approximation ratio than 1 − 1/(2m(2k + 1)) + δ, for any δ > 0.

Proof. We begin with the special case m = 1. The set of possible inputs are intervals in [0, 1] of the form [i/N, j/N], where i, j are integers and N is a function of k which will be fixed later. We start with some definitions. A set of three intervals of the form [0, a], [a, b], [b, 1], 0 ≤ a ≤ b ≤ 1, is called a complete triplet. An interval of the form [0, a] is called a zero interval, and an interval of the form [b, 1] is called a one interval. Let Z be the set of zero-intervals whose right endpoint is at most 1/2, and let O be the set of one-intervals whose left endpoint is at least 1/2. We say that a set of complete triplets is unsettled with respect to an ordering of all of the underlying intervals if either all zero-intervals are before all one-intervals, or vice versa.

We notice that for any ordering of the above intervals, and for every k such that N ≥ 2(2k − 1), there is a set of k complete triplets which is unsettled. Let σ be the sequence induced by the ordering on Z ∪ O. Each of Z and O has size at least 2k − 1. If we look at the first 2k − 1 elements of σ, the majority of them are (wlog) from Z. Select k of these zero-intervals and select k one-intervals from the later elements of σ, and match the two sets in any way such that if [0, 1/2] and [1/2, 1] are both selected, then they are matched to each other. This matching, along with the k distinct middle intervals (one of which may be empty) needed to connect each pair of the matching, constitutes a set of k unsettled complete triplets.

Now, consider a BT program of width (k − 1)/2 and let N = 2(2k + 1) + 1, so as to guarantee there are k unsettled complete triplets. Throw out all intervals not involved in these triplets. Assume wlog that all of the zero-intervals come before the one-intervals. Since no two zero-intervals can be accepted simultaneously, and since the width is (k − 1)/2, there is

Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity (CCC’05) 1093-0159/05 $20.00 © 2005 IEEE a zero-interval that is not accepted on any path. The adver- so as to necessitate a large set of intervals in any optimal sary will remove all one intervals except the one belonging solution. to the same triplet as the missing zero interval. With this in- Before we turn to the lower bound we remark (without

put it is easy to get a solution with profit ½ by simply picking proof) that an adaptive BT algorithm of width 2 supplies the complete triplet. But with the decisions made thus far a better approximation than priority algorithms (width 1);

it is obviously impossible to get such a profit, and since the namely we get an approximation ratio of 1/2, which beats

½ next best solution has profit at most ½ , we can bound the tight approximation result for priority algorithms of 1/3. the approximation ratio. This is not quite enough, however,

Theorem 7. The width of an optimal adaptive BT for inter-

·½µ·½ since here we have only ¾´ , rather than , input

val scheduling with proportional profits on machines and ¡

items. To handle this we include in the set of input items

ª´µ

intervals is .

´¾´ ·½µ·½µ Æ “dummy” intervals of length for

arbitrarily small Æ . These dummy intervals can contribute Proof. The set of data items are the intervals of size at most

at most Æ to the non optimal solution, which gives an inap-

¼ ½

½ in with endpoints of the form for any

½· Æ ½ ½·Æ ½¾´¾ ·½µ proximation bound of ¾

. We will associate a graph with the set of revealed ¼

for any Æ . intervals, where the endpoints of the intervals are vertices ½

The same approach as above works for machines. in the graph, and the intervals themselves are the edges. We

That is, if is large enough so that we have unsettled

¡ also define a point to be zero connected (one connected)

triplets, then must be at least in order to get opti-

if is connected to 0 (respectively, 1) in the graph by edges

mality. Therefore, given width ,let be minimal such ¡

going left (right). An interval is zero (one) connected if

½ that . Then we achieve profit at most

both its endpoints are zero (one) connected; it is generic if

· Æ ½ ´¾ ·½µ

(or if we add dummy inter- neither of its endpoints are.

·Æ ½

vals) and our approximation ratio is at most 

The adversary applies the following two rules for elimi-

½

Ñ

½ ½

·½µ Æ ½´¾´¾ ·½µµ  ½·Æ ½¾´¾

½· . nating intervals for phases (until intervals have

 ¼ ½ been revealed). Initially, we have a set of points .

Remark 1. Certain known algorithms (see [20, 23]), which should intuitively be called greedy, allow semi-revocable decisions. We can consider this additional strength in the context of BT algorithms. This means that at any point we can revoke previous accept decisions. We only insist that any partial solution is feasible (e.g., for the interval scheduling problem, we never allow a solution with overlapping intervals). This extension only applies to packing problems; that is, to problems where changing accept decisions to rejections does not make a feasible solution infeasible. It is not hard to see that the proof of Theorem 6 also applies to the case where we allow revocable acceptances. Setting the parameters appropriately in Theorem 6 slightly improves an inapproximation bound of [23], although the bound in [23] applies to adaptive orderings.

3.1. Interval Scheduling in the Adaptive Model

The following theorem shows that adaptivity still does not allow a BT algorithm to get a substantial reduction in width compared to the Simple DP setting. Specifically, we show that for a constant number of machines m, width Ω((n/m)^m) is required to solve the problem exactly. However, it is quite clear that the way to achieve a lower bound must change dramatically. We cannot hope, for example, to get a superconstant lower bound if the adversary offers an input which contains a set of three intervals covering [0, 1], as is the case with the lower bound of Theorem 6. This intuitively leads to a setting in which the intervals are small.

Throughout the algorithm's progress, we will add to a set B the endpoints of the revealed intervals. At each stage,

1. Cancel all unseen intervals both of whose endpoints are in B.

2. Cancel all intervals ending (starting) in b if b is zero connected (one connected) and we have already seen 2 intervals ending (starting) in b.

We denote by G_i the set of revealed intervals after i phases of the game. Clearly G_{i-1} ⊆ G_i. We are interested in G, the set of revealed intervals at the end of the game. The union of all intervals in G is strictly contained in [0, 1], as all the intervals are small. Notice that the graph associated with G is acyclic, because each interval contributes at least one endpoint not previously seen. Further, when an interval is revealed, it is either generic and stays that way, or it is zero (one) connected. Let G_0 be the set of zero connected intervals in G, let G_1 be the set of all one connected intervals in G, and let G_g be the remaining intervals in G. Note that since G does not cover the entire interval, no point/interval can be both zero and one connected, and thus every interval in G lies in at most one of the sets G_g, G_0, G_1.

We will now define three types of indispensable partial solutions: those that involve only generic intervals, those that involve only zero connected intervals, and those that involve only one connected intervals. We will argue that there must be a large number of indispensable solutions of at least one of the three types.
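The bookkeeping in the adversary's game above is mechanical; as a small illustration (the interval representation and function names are ours, not the paper's), rule 1 can be written as a filter over the unseen intervals:

```python
def cancel_rule_1(unseen, B):
    """Rule 1 of the adversary's game: cancel every unseen interval
    both of whose endpoints already lie in the set B of seen endpoints."""
    return [(l, r) for (l, r) in unseen if l not in B or r not in B]

def reveal(interval, B):
    """Revealing an interval adds both of its endpoints to B."""
    B.update(interval)
```

Rule 2 would additionally need the zero/one-connectedness of each endpoint, which depends on the history of the game rather than on B alone.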

Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity (CCC’05) 1093-0159/05 $20.00 © 2005 IEEE

A set S = {I_{i_1}, I_{i_2}, …, I_{i_m}} of m generic intervals is indispensable (with respect to P) if there is a valid set of future inputs S_P that extends S to a complete solution, but does not extend any other S′ ≠ S to a complete solution.

Claim 8. All choices of S = {I_{i_1}, I_{i_2}, …, I_{i_m}} of m generic intervals are indispensable. Moreover, |S_P| = O(mn).

We will now discuss the zero connected case; the one connected case is analogous. A partial solution assigning intervals to machines is projected to a multiset of m points by taking the rightmost point covered by each machine. For example, if intervals I_1 and I_2 (with right endpoints r_1 < r_2) are scheduled on machine 1, and interval I_3 (with right endpoint r_3) is scheduled on machine 2, then this solution projects to R = {r_2, r_3}. A multiset R of zero connected points/intervals is indispensable if, for some partial solution P that projects to R, there is a set of future inputs, R_P, that extends P to an optimal solution, but does not extend any solution that does not project to R. We think of the projection operation as defining an equivalence relation over all partial solutions, and we will show that each equivalence class induced by this relation is indispensable. This implies that at least one partial solution from each class must be maintained by the algorithm, and thus we will show a lower bound equal to the number of equivalence classes.

Claim 9. Every independent set R = {r_1, r_2, …, r_m} of zero connected points/intervals is indispensable. Moreover, |R_P| = O(mn). Similarly, every independent set L = {l_1, l_2, …, l_m} of one connected points/intervals is indispensable, and |L_P| = O(mn).

We can now complete the proof of our theorem. Let k_g = |G_g|, k_0 = |G_0| and k_1 = |G_1|. We will first argue that one of k_g, 2k_0, 2k_1 is at least (n − 1)/3. Whenever an interval s is revealed, it must be in G_g, G_0 or G_1, unless one of the following two cases occurs: (i) r is zero connected but s is not, or (ii) r is one connected but s is not. Case (i) can occur only once for each revealed member of G_0 (since each zero connected point can have at most two intervals going left, due to the elimination rules), and case (ii) can occur only once for each revealed member of G_1. Hence k_g + 2k_0 + 2k_1 ≥ n − 1, so one of them is at least (n − 1)/3. If k_g ≥ (n − 1)/3, then from Claim 8 we need ((n−1)/3 choose m) = Ω((n/m)^m) active solutions after the first n − 1 rounds. Otherwise, if 2k_0 ≥ (n − 1)/3, then since we are dealing with acyclic graphs, we can extract an independent set from G_0 of size at least (n − 1)/12. Thus, by Claim 9, we need to maintain all the possible projections of this set onto the machines; there are Ω((n/m)^m) of these. Again, the one connected case is analogous. Since the number of input intervals altogether is N = O(mn), we get a lower bound of Ω((N/m²)^m), expressed in terms of the input size N.
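The projection operation used in the argument above is simple enough to state in code; the sketch below (the data representation is ours) maps a partial schedule to its multiset of rightmost covered points:

```python
from collections import Counter

def project(schedule):
    """Project a partial solution to a multiset of points: for each machine,
    take the rightmost point covered by the intervals scheduled on it.
    schedule: dict machine -> nonempty list of (left, right) intervals."""
    return Counter(max(r for (_, r) in ivs) for ivs in schedule.values())
```

Two partial solutions are equivalent, in the sense used above, exactly when `project` returns the same multiset for both.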

4. The Knapsack problem

The knapsack problem takes as input n non-negative integer pairs (x_1, p_1), …, (x_n, p_n), denoting the weights and profits of n items, and another number N, and returns a subset S ⊆ [n] that maximizes Σ_{i∈S} p_i subject to Σ_{i∈S} x_i ≤ N. There are well-known simple-DP algorithms solving the knapsack problem in time polynomial in n and N, or in time polynomial in n and Π = max_i p_i. In this section, we prove that it is not possible to solve the problem with an adaptive BT algorithm whose width is subexponential in n (and does not depend on N or Π). Further, we provide an almost tight bound for the width needed for an adaptive BT that approximates the optimum to within 1 − ε. We present an upper bound (due to Marchetti-Spaccamela) of (1/ε)², based on a modification of the algorithms of Ibarra and Kim [24] and Lawler [26]. The lower bound of (1/ε)^{1/3.17} uses the exponential lower bound for the exact problem. We note that both our lower bounds in this section hold for the simple knapsack problem, where for each item the profit is equal to the weight.
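For reference, the simple-DP upper bound mentioned above (time polynomial in n and N) is just the textbook table over capacities; a minimal sketch:

```python
def knapsack_dp(items, N):
    """Pseudo-polynomial knapsack DP: best[c] = max profit within capacity c.
    items: list of (weight, profit) pairs; runs in O(n * N) time."""
    best = [0] * (N + 1)
    for w, p in items:
        for c in range(N, w - 1, -1):   # reverse order: each item used at most once
            best[c] = max(best[c], best[c - w] + p)
    return best[N]
```

The lower bounds below are interesting precisely because this table has poly(n, N) size, so any BT hardness must force N to be exponential in n.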


Theorem 10. The width of an optimal adaptive BT for the simple knapsack problem is at least Ω(2^{n/2}/√n).

Proof. We are tempted to try to argue that, having seen only part of the input, all possible subsets of the current input must be maintained as partial solutions, or else an adversary has the power to present a set of remaining input items that will lead to an optimal solution extending exactly a partial solution the algorithm failed to maintain. For an online algorithm, when the order is adversarial, such a simple argument can easily be made to work. However, the ordering (and, more so, the adaptive ordering) power of the algorithm requires a more subtle approach.

Let N be some large number which will be fixed later. (Since a simple-DP of size poly(n, N) exists, it is clear that N must be exponential in n.) Our initial set of items are the integers in an interval I = (0, cN/n] for an appropriate constant c. Take the first n/2 items, and following each one, apply the following "general-position" rule to remove certain items from future consideration: remove all items that are the difference of the sums of two subsets already seen; also remove all items that complete any subset to exactly N (i.e., all items with value N − Σ_{i∈S} a_i, where a_1, …, a_k are the numbers considered so far, and S is any subset). These rules guarantee that at any point, no two subsets will generate the same sum, and that no subset will sum to N. Also notice that this eliminates at most 3^n numbers, so we never exhaust the range from which we can pick the next input, provided that 3^n is small compared to N/n.

Call the set of numbers seen so far P, and consider any subset Q contained in P of size n/4. Our goal is to show that Q is indispensable; that is, we want to construct a set R of size n/2, consisting of numbers in the feasible input, with the following properties:

1. P ∪ R does not contain two subsets that have the same sum.

2. Σ_{a∈R} a + Σ_{a∈Q} a = N.

The above properties indeed imply that Q is indispensable, since obviously there is then a unique solution with optimal value N and, in order to get it, Q is the subset that must be chosen among the elements of P. We thus get a lower bound on the width which is the number of subsets of size n/4 in P; namely, (n/2 choose n/4) = Ω(2^{n/2}/√n).

How do we construct the set R? We need it to sum to N − Σ_{a∈Q} a while preserving property 1. The elements in R must be among the numbers in I that were not eliminated thus far. If R is to sum to N − Σ_{a∈Q} a, then the average of the numbers in R should be v = (N − Σ_{a∈Q} a)/(n/2). This is good news, since v is not close to the extreme values of I, owing to the fact that the cardinality of R is bigger than that of Q. We now need to worry about avoiding all the points that were eliminated in the past, and the ones that must be eliminated from now on to maintain property 1. The total number of such points, U, is at most the number of ways of choosing two disjoint subsets out of a set of n elements, namely U ≤ 3^n. We later make sure that U is small compared to N/n. We first pick n/2 − 2 elements in I that (i) avoid all points that need to be eliminated, and (ii) sum to a number w close to (n/2 − 2)v. This can be done by iteratively picking numbers bigger/smaller than v, according to whether the numbers picked so far average to below/above v. To complete, we need to pick two points w_1, w_2 ∈ I that sum to v̂ = N − Σ_{a∈Q} a − w, and so that w_1, w_2 are not the difference of the sums of two subsets of the items picked so far. Assume for simplicity that v̂ is even. Of the 2U + 1 pairs (v̂/2 − j, v̂/2 + j), where j = 1, …, 2U + 1, at least one pair satisfies all the above conditions. All that is left to check is that we never violated the range condition, i.e., that we always chose items in I. The smallest and biggest numbers we might take differ from v̂/2 by at most 3U + 1, so these numbers are in the valid range as long as 3U + 1 is small compared to N/n. Since U ≤ 3^n, we get that taking N larger than, say, n·3^{n+2} suffices.

A more careful analysis of the preceding proof yields the following width-approximability tradeoff.

Theorem 11. Knapsack can be (1 + ε)-approximated by a width-O((1/ε)²) adaptive BT. At least width Ω((1/ε)^{1/3.17}) is needed for such an approximation, even for the simple knapsack problem.

Proof. We only show the inapproximability result here. We take the existing lower bound for the exact problem and convert it to a width lower bound for getting a 1 − ε approximation. Recall that the resolution parameter N in that proof had to be about n·3^n for getting a width lower bound of 2^{n/2}/√n. For a given width γ, we might hope to lower the necessary resolution in order to achieve an inapproximability result. We consider a Knapsack instance with u items that requires exponential width (as is implied by Theorem 10), and set N, the parameter for the range of the numbers, to about u·3^u. If γ is such that γ ≤ 2^{u/2}/√u, then this problem cannot be solved optimally by a width-γ BT algorithm. Recall that the optimum is N and the next best solution is N − 1, so the best possible approximation we can get is

(N − 1)/N = 1 − 1/(u·3^u) = 1 − O(γ^{−2 log 3}).

Therefore, width Ω((1/ε)^{1/(2 log 3)}) = Ω((1/ε)^{1/3.17}) is required to get a 1 − ε approximation. To make the lower bound work for any number of items, we simply add 0-items to the adversarial input.

Remark 2. The proofs of Theorems 10 and 11 can be extended so as to allow revocable acceptances, with slightly worse parameters. Recall that in Theorem 10 we look at n/2 elements of the range and then show that all (n/2 choose n/4) subsets of size n/4 are indispensable. We can modify the proof so that this range is (αN/n, βN/n) for suitable constants α < β; we look at the first n/2 items and, similarly to the arguments in Theorem 10, show that all subsets of size n/(2β) are indispensable. In the semi-revocable model it is no longer the case that this supplies a width lower bound of (n/2 choose n/(2β)); instead, we should look for a family F of feasible sets such that any of the indispensable sets of size n/(2β) is contained in some F ∈ F. But, and this is the crucial point, feasible sets must now be of size at most n/α (every item exceeds αN/n), and so every F contains at most (n/α choose n/(2β)) of the indispensable sets, and a counting argument immediately shows that |F| = 2^{Ω(n)}.

5. Satisfiability

The search problem associated with SAT is as follows: given a boolean conjunctive-normal-form formula F(x_1, …, x_n), output a satisfying assignment if one exists. There are several ways to represent data items for the SAT problem, differing in the amount of information contained in the data items. The simplest, weak data item contains a variable name together with the names of the clauses in which it appears, and whether the variable occurs positively or negatively in each such clause. For example, the data item ⟨x_i; (C_j, +), (C_k, −)⟩ means that x_i occurs positively in clause C_j and negatively in clause C_k, and that these are the only occurrences of x_i in the formula. The decision is whether to set x_i to 0 or to 1.


We also define a strong model, in which a data item fully specifies all the clauses that contain a given variable. Thus the data item is ⟨x_i; C_{j_1}, …, C_{j_k}⟩, where the C_{j_l} are a complete description of the clauses containing x_i.

In general, we would like to prove upper bounds for the weak data type and lower bounds for the strong data type. We will show that 2SAT (for the strong data type) requires exponential time in the fixed BT model, but has a simple linear-time algorithm in the adaptive BT model (for the weak data type). Thus, we obtain an exponential separation between the fixed and adaptive BT models. Next, we give exponential lower bounds in the fully adaptive model for 3SAT (strong data type).

5.1. 2-Satisfiability in the Fixed Model

The optimization problem associated with SAT is called MAXSAT and is the following: find an assignment to the variables of a CNF that maximizes the number of satisfied clauses. In this section we show that the fixed BT model cannot approximate MAX2SAT (and hence solve SAT) efficiently.

Theorem 12. For any ε > 0, there exists a δ > 0 such that for all sufficiently large n, any fixed BT algorithm for solving MAX2SAT on n variables requires width 2^{δn} to achieve a (21/22 + ε)-approximation. This lower bound holds for the strong data type for SAT.

A similar idea to the 2SAT inapproximation can be used to show that Vertex Cover (where the items are nodes with their adjacency lists) requires exponential width to obtain a (2 − δ)-approximation, for any δ > 0.

5.2. 2-Satisfiability in the Adaptive Model

In this section, we show that allowing an adaptive variable ordering avoids the exponential blow-up in the number of possible assignments that need to be maintained. That is, we give a linear-width BT algorithm for 2SAT in the adaptive model.

Theorem 13. There is a width-O(n) adaptive BT algorithm for 2SAT on n variables. Further, this upper bound holds for the weak data type for SAT.

Proof. (sketch) Consider the standard digraph associated with a 2SAT instance. Recall that the standard algorithm for the problem goes via finding the strongly connected components of this graph. This does not fit immediately into the BT model, since here, whenever we observe a variable, we must extend partial solutions by determining its value. The algorithm we present uses the simple observation that a path in the graph, such as x_1 → x_2 → x_3 → ⋯ → x_k, has only linearly many satisfying assignments; namely, the variables along the path must be set to 0 up to a certain point, and to 1 from that point on, which means at most k + 1 possible valid assignments to the literals involved.

Using an adaptive ordering, we can "grow" such a path as follows. Suppose we start with x_1. The algorithm then chooses a new variable x_2 that appears in a clause (x̄_1 ∨ x_2), if there is one (that is, it looks for an edge x_1 → x_2). Then it continues to look for a path x_1 → x_2 → x_3, and so on. As long as this is possible, we only need to maintain a linear number of solutions. When the path is not extendable in this fashion, a few different cases are possible. We sketch two. If we get a path x_1 → x̄_2 → x_3 → x_2, then (since x̄_2 implies x_2) we can safely set x_2 to 1, "prune" it from the path, and continue. If the path is x_1 → x_2 → x_3 → x_2, then we know that x_2 = x_3, and we introduce a new variable x_23 that must be set to this common value, and continue with the path x_1 → x_23.
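For contrast with the BT algorithm sketched above, the classical strongly-connected-components algorithm for 2SAT (which, as noted, does not fit the BT model directly) can be sketched as follows; the literal encoding and names are ours:

```python
def solve_2sat(n, clauses):
    """Classical 2SAT via SCCs of the implication graph (Kosaraju).
    Variables are 1..n; a literal is +v or -v; clauses are (lit, lit).
    Returns a list of booleans, or None if unsatisfiable.
    Recursive DFS: intended for small illustrative instances."""
    def idx(lit):                      # literal -> vertex in 0..2n-1
        return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)
    g = [[] for _ in range(2 * n)]
    for a, b in clauses:               # (a or b): not-a -> b, not-b -> a
        g[idx(a) ^ 1].append(idx(b))
        g[idx(b) ^ 1].append(idx(a))
    order, seen = [], [False] * (2 * n)
    def dfs1(u):
        seen[u] = True
        for w in g[u]:
            if not seen[w]:
                dfs1(w)
        order.append(u)
    for u in range(2 * n):
        if not seen[u]:
            dfs1(u)
    gr = [[] for _ in range(2 * n)]    # reversed graph
    for u in range(2 * n):
        for w in g[u]:
            gr[w].append(u)
    comp = [-1] * (2 * n)
    def dfs2(u, c):
        comp[u] = c
        for w in gr[u]:
            if comp[w] < 0:
                dfs2(w, c)
    c = 0
    for u in reversed(order):          # components numbered in topological order
        if comp[u] < 0:
            dfs2(u, c)
            c += 1
    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n)):
        return None                    # x and not-x in the same SCC
    return [comp[2 * v] > comp[2 * v + 1] for v in range(n)]
```

A variable is set to true exactly when its positive literal's component comes later in the topological order than its negation's, which is the standard rule for this construction.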

6. 3-Satisfiability in the Fully Adaptive Model

In this section we prove the following theorem:

Theorem 14. Any fully adaptive BT algorithm for 3SAT on n variables requires width 2^{Ω(n)} and depth-first size 2^{Ω(n)}. This lower bound holds for the strong data type for SAT.

Consider the framework we have been using so far to prove lower bounds for adaptive BT algorithms. We generally start with a rich universe of items, and then the adversary prunes this universe at each level of the algorithm's tree. In other words, the items that the algorithm has seen so far tell us which of the remaining items to prune. In the fully adaptive model, each path in the algorithm's tree sees the items in a different order. One possible lower bound strategy is to have a separate adversary for each path in the tree, but then we could never be sure that these adversaries prune the universe consistently: maybe one would prune an item that has already been seen on a different path. The other immediate possibility is to look at the union of all items seen on all paths up until the current level of the tree, and use those to prune the remaining items. The trouble with this is that after depth O(log n), the algorithm could potentially have seen every input item, so we cannot continue this game for very long.

Instead of using an adversary argument, then, we fix a distribution on inputs and argue that, with non-negligible probability, the algorithm needs an exponentially large tree to solve a random input. More specifically, we don't look at the algorithm as a whole, but at the individual potential paths. If the algorithm never terminated any of its paths, there would be 2^n of them. If we could show that a typical such potential path cannot afford to terminate before a certain depth, because it might be the algorithm's only hope for finding a solution, then we would have a lower bound.


The keys to proving this are (1) using instances with unique solutions, so that no currently viable path can count on another path to find the solution, and (2) bounding what a path can know about the instance at a certain depth, and arguing that, for many instances, many paths' partial solutions look like they can be extended to a full solution.

The second part is accomplished by stating a nice, succinct condition that implies that a particular partial solution "looks like" it can be extended to a total solution.

The inputs we use are (CNFs that encode) full rank linear systems Ax = b over Z_2, where A is an n × n matrix, b is an n × 1 vector, and x is an n × 1 vector of variables. More specifically, we fix a matrix A with appropriate properties (as outlined below) and use the uniform distribution over the vectors b. The full-rank property of A will guarantee that the system has a unique solution. One of the other properties of A will be that each row has at most 3 ones. Therefore, each row of Ax = b is a constraint on x that can be represented by 4 3-clauses. Since the algorithm knows A in advance, the strong data type for a variable x_j, say, boils down to revealing those entries of b that correspond to rows of A containing a one in column j. Note that these formulas are efficiently solvable by Gaussian elimination; thus they separate backtracking and dynamic programming from more global algebraic methods.⁴

⁴ We would like to note that if one adds a little bit of random noise to the linear mapping (so that the goal becomes to find a solution that satisfies almost all equations), then there is no efficient algorithm known for this task. It was conjectured by the first author [1] that the resulting mapping may be a pseudorandom generator against P.

6.1. Linear systems over expanders

Below we present the machinery developed in [4] and [2]. These concepts provide a convenient way to analyze a linear system when the underlying matrix is a good expander; that is, when the bipartite graph encoded by the matrix is a good expander.

Definition 15. Let A be an m × n 0/1-matrix. We denote the i-th row of A by A_i and identify it with the set {j : A_{ij} = 1}; the cardinality of this set is denoted |A_i|. We extend the notation to A_I = ∪_{i∈I} A_i for a set of rows I ⊆ [m].

There are two notions of expanders: expanders and boundary expanders. The latter notion is stronger, as it requires the existence of unique neighbors. However, every strong expander is also a good boundary expander.

Definition 16. For a set of rows I of an m × n matrix A, we define its boundary ∂I as the set of all j ∈ [n] (called boundary elements) such that there exists exactly one row i ∈ I that contains j. We say that A is an (r, s, c)-boundary expander if

1. |A_i| ≤ s for all i ∈ [m], and

2. for all I, |I| ≤ r implies |∂I| ≥ c·|I|.

Matrix A is an (r, s, c)-expander if condition 2 is replaced by

2′. for all I, |I| ≤ r implies |A_I| ≥ c·|I|.

It is easy to check that any (r, s, c)-expander is an (r, s, 2c − s)-boundary expander.

Definition 17 ([5]). Let A ∈ {0,1}^{m×n}. For a set of columns J ⊆ [n], define the following inference relation ⊢_J on sets of rows of A:

I ⊢_J I_1  iff  |I_1| ≤ r/2 and ∂(I_1) ⊆ A_I ∪ J.  (1)

Let the closure Cl(J) of J be the union of all sets of rows which can be inferred, via ⊢_J, transitively from the empty set.

Notice that Cl(J) is a set of rows, not necessarily bounded in size by r/2, whose boundary is contained in J ∪ A_{Cl(J)}. It is not hard to imagine that, in general, it could be the whole set of rows. If J is small and we are dealing with a boundary expander, however, it never gets the opportunity to grow very much:

Lemma 18 ([5]). Let A be an (r, 3, c′)-boundary expander. For any set of columns J with |J| ≤ c′r/2, we have |Cl(J)| ≤ 2|J|/c′.

As we proceed down a path in the BT tree, we discover bits of b and we set values for variables in x. We will use this idea of closure as an indication of the extendability of the current partial assignment to the variables. This is because, if J is the set of variables for which we have assigned values, then the most difficult subsystem to satisfy is the set of rows in Cl(J). Intuitively, think of the opposite case: if a set of rows has an unset boundary variable for each row, then it is certainly possible to satisfy that set of rows by setting those boundary variables appropriately after setting all the other variables involved. More formally:

Lemma 19. Let A ∈ {0,1}^{m×n} be an (r, 3, c′)-boundary expander, and let b ∈ {0,1}^m. Let ρ be a partial restriction on x, and denote I = Cl(Vars(ρ)), where Vars(ρ) denotes the set of variables given values by ρ. Assume that there exists a solution of the system A_I x = b_I that extends ρ. Then, for any set of rows I′ of size |I′| ≤ r/2, there exists a solution of the system A_{I′} x = b_{I′} that extends ρ.

In addition, we will need the fact that there are many ways to satisfy moderate-sized subsystems of the instance:

Lemma 20 ([4]). Assume that an m × n matrix A is an (r, 3, c)-expander, X = {x_{i_1}, …, x_{i_k}} is a set of variables, b ∈ {0,1}^m, and L = {ℓ_1, …, ℓ_t} is a tuple of linear equations from the system Ax = b, where t ≤ r. Denote by L̂ the set of assignments to the variables in X that can be extended to the remaining variables so as to satisfy L. If L̂ is not empty, then it is an affine subspace of {0,1}^X of dimension greater than (1/2 − 1/(2(2c − 3)))·k.
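The closure of Definition 17 is a fixed point, and on tiny instances it can be computed by brute force; the sketch below is our own (each row A_i is represented by its set of columns, and r is passed explicitly):

```python
from itertools import combinations

def boundary(A, rows):
    """The boundary of a row set: columns occurring in exactly one of its rows."""
    count = {}
    for i in rows:
        for j in A[i]:
            count[j] = count.get(j, 0) + 1
    return {j for j, c in count.items() if c == 1}

def closure(A, J, r):
    """Cl(J) per Definition 17: repeatedly add any row set I1 with
    |I1| <= r/2 whose boundary lies in A_I ∪ J, until a fixed point.
    Exponential enumeration; an illustration only, not an efficient algorithm."""
    I, J = set(), set(J)
    changed = True
    while changed:
        changed = False
        avail = J.union(*(A[i] for i in I))
        for k in range(1, r // 2 + 1):
            for I1 in combinations(range(len(A)), k):
                if not set(I1) <= I and boundary(A, I1) <= avail:
                    I |= set(I1)
                    changed = True
    return I
```

For example, with A = [{0,1}, {1,2}] and J = {0,1}, row 0 is inferable (its boundary {0,1} lies in J) while row 1 is not, since column 2 never becomes available.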


From now on, we fix A to be the matrix guaranteed by the following theorem, with c equal to, say, 2.9. This construction is an improvement upon that in [4].

Theorem 21. For any constant c < 3, there exist a constant K and an ε > 0, and a family {A_n} of n × n matrices, s.t.

• A_n has full rank.

• A_n is an (εn, 3, c)-expander.

• Every column of A_n contains at most K ones.

Let c′ = 2c − 3 and r = εn, so A is an (r, 3, c′)-boundary expander.

6.2. The lower bound

In what follows, the behavior of BT algorithms will be analyzed with pairs of restrictions (ρ_x, ρ_y). Intuitively, ρ_x corresponds to the current assignment on x, and ρ_y corresponds to the known partial information about b. We regard a set of assigned variables Vars(ρ_x) as a set of columns of A, and Vars(ρ_y) as a set of rows I = {i : ρ_y(b_i) ∈ {0, 1}}.

Definition 22. We call a pair of partial assignments (ρ_x, ρ_y) consistent w.r.t. A if ρ_x can be extended to a global assignment x ∈ {0,1}^n and ρ_y can be extended to a b ∈ {0,1}^n such that Ax = b.

We begin by viewing a BT algorithm for this problem as playing a game with the object of finding a solution to Ax = b for an unknown b. While there are good strategies to win this game quickly, we show that any BT algorithm is a very bad player.

Definition 23 (backtrack game on a linear system). A strategy P is a function that maps a pair of partial assignments to a decision: reveal a bit b_i, or guess a bit x_j. For a given strategy P and a linear system Ax = b, we define a backtrack game tree T_P(Ax = b) in the following recursive way.

• Every node v of T_P(Ax = b) contains a pair of partial assignments (ρ_x^v, ρ_y^v). The assignment ρ_y^v is always consistent with the value of b, and the pair (ρ_x^v, ρ_y^v) is always consistent w.r.t. A.

• The root node contains the empty partial assignments.

• (Revealing move). If v is a node containing the pair (ρ_x^v, ρ_y^v), and P(ρ_x^v, ρ_y^v) = "reveal b_i", then v has a unique child u with ρ_y^u = ρ_y^v ∪ {b_i} and ρ_x^u = ρ_x^v.

• (Guessing move). If v is a node containing the pair (ρ_x^v, ρ_y^v), and P(ρ_x^v, ρ_y^v) = "guess x_j", then denote ρ_x^{v(left)} = ρ_x^v ∪ {x_j = 0} and ρ_x^{v(right)} = ρ_x^v ∪ {x_j = 1}; v will have 0, 1 or 2 successors, in accordance with the number of consistent pairs.

Assume that b is a uniformly chosen random vector. Then T_P(Ax = b) is a random tree, the size of which we need to estimate.

Given a backtracking algorithm 𝒜, we construct a strategy P_𝒜 for the BT game, and then proceed to prove a lower bound on the size of T_{P_𝒜}(Ax = b), which will imply a lower bound on the size of 𝒜's tree. While it is difficult to estimate exactly what the algorithm knows about b at any point (mainly because it may have inferred that some items are not part of the input), the corresponding strategy will stay ahead of the algorithm in terms of knowledge of b, and it will be very easy to track its knowledge of b. Nodes of 𝒜's tree are translated into sequences of steps of P_𝒜; we start with the root node of 𝒜's tree, which is mapped to the root node of T_{P_𝒜}(Ax = b).

Now, let u be a node in 𝒜's tree s.t. the ordering at this point is a sequence of items D_1^u, D_2^u, …. Assume that we have defined a mapping of u into a node v in T_{P_𝒜}(Ax = b). Possibly D_1^u is already contradictory with (ρ_x^v, ρ_y^v), in that it contradicts ρ_y^v or it represents a variable already set in ρ_x^v; in this case, we know that D_1^u is not in the input. We therefore assume wlog that D_1^u is not contradictory. Now, let I be the set of bits associated with the clauses of D_1^u. Notice that the bits b_i, i ∈ I, are the ones that determine whether D_1^u is one of the input items associated with its variable. Also, since every variable may participate in at most K linear equations, |I| ≤ K. The BT game now reveals all the currently unrevealed bits b_i, i ∈ I. Let v′ be a new node with ρ_y^{v′} extending ρ_y^v on I, and ρ_x^{v′} = ρ_x^v. At this point we make a data-check, which can have the following two possible outcomes.

• A negative item-check: ρ_y^{v′} contradicts D_1^u, and hence D_1^u is not in the input. In this case, the next data item in the list is considered, using the above set of rules.

• A positive item-check: ρ_y^{v′} does not contradict D_1^u, in which case it might be one of the input items. The strategy will proceed regardless. Suppose D_1^u concerns the variable x_j, and denote I′ = Cl(Vars(ρ_x^v) ∪ {x_j}). If b_{I′} contains yet unknown bits that are not set in ρ_y^{v′}, then the corresponding revealing steps take place, so that b_{I′} is exposed in a vertex v′′. After that, a guessing move takes place w.r.t. the variable x_j. This results in a splitting of the vertex v′′ into two vertices v_left and v_right, according to the value of x_j. The translation of one step of u is finished, and v_left, v_right grow according to the two children of u in 𝒜's tree, except that if either v_left or v_right contains an inconsistent pair (ρ_x, ρ_y), then the corresponding branch(es) are terminated.
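As noted at the start of Section 6, the systems Ax = b in this game are trivial for global algebraic methods; a minimal Gaussian-elimination-over-GF(2) sketch (ours) makes the contrast concrete:

```python
def solve_gf2(A, b):
    """Gaussian elimination over GF(2). A: list of 0/1 rows, b: 0/1 list.
    Returns one solution x (free variables set to 0), or None if inconsistent.
    For the full-rank square systems used here, the solution is unique."""
    n, ncols = len(A), len(A[0])
    rows = [(r[:], bi) for r, bi in zip(A, b)]
    where = [-1] * ncols               # where[col] = pivot row for col
    rank = 0
    for col in range(ncols):
        piv = next((i for i in range(rank, n) if rows[i][0][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(n):             # eliminate col from every other row
            if i != rank and rows[i][0][col]:
                rows[i] = ([x ^ y for x, y in zip(rows[i][0], rows[rank][0])],
                           rows[i][1] ^ rows[rank][1])
        where[col] = rank
        rank += 1
    if any(rows[i][1] for i in range(rank, n)):
        return None                    # a zero row with nonzero right-hand side
    x = [0] * ncols
    for col, rrow in enumerate(where):
        if rrow != -1:
            x[col] = rows[rrow][1]
    return x
```

The point of the game, of course, is that a BT strategy cannot perform this kind of global cancellation: it only reveals bits of b and guesses bits of x one at a time.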


Think of P_𝒜 as defining a subtree of 𝒜's tree which keeps only those paths which are "highly extendible": that is, they maintain not only the invariant that ρ_x is consistent with what the algorithm might know about b, but also that it is consistent with the equations in its closure.

Lemma 24. P_𝒜 is a well-defined BT game strategy, and there exists an injective function φ that maps guessing nodes of T_{P_𝒜}(Ax = b) to nodes of 𝒜's tree.

Proof. The map φ is defined recursively, in the natural way, w.r.t. the execution of 𝒜. Further, one can inductively show the following semantics: at any node u, the amount of information known about b is no more than ρ_y^v, where v is the corresponding node in the game. This immediately shows that whenever 𝒜 aborts one or two possible extensions of the node u, the corresponding node v in T_{P_𝒜} contains an inconsistent pair (ρ_x, ρ_y), since otherwise some consistent extension of ρ_y could be the unique solution, and this would lead to a failure of 𝒜 to solve the problem.

Lemma 24 says that it is enough to bound the size or width of the tree resulting from the strategy P_𝒜 in order to get a similar bound on the size/width of 𝒜's tree.

Lemma 25. For the strategy P_𝒜, all leaf nodes in T_{P_𝒜}(Ax = b) have depth at least r/2.

Proof. Assume, for the sake of contradiction, that there exists a path of length less than r/2, and let v be the last node on this path. By the definition of a BT game strategy, v contains a pair of restrictions (ρ_x^v, ρ_y^v) that is consistent w.r.t. A. The only possibility for a path to be terminated is when a new bit is revealed and the new pair of restrictions is inconsistent. Let I = Cl(Vars(ρ_x^v)). By the construction of P_𝒜, ρ_y^v gives values to the bits b_I. This implies that ρ_x^v may be extended to satisfy A_I x = b_I. By Lemma 19, ρ_x^v may then be extended to satisfy A_{I′} x = b_{I′} for any set of rows I′ with |I′| ≤ r/2; we thus have a contradiction.

For a strategy P, denote by T̂_P(Ax = b) the subtree of T_P(Ax = b) that consists of all nodes of depth less than or equal to r/2.

Definition 26. Assume that T is a tree. A random path p(T) goes from the root to one of the leaves of T in the following way: the first node of p is the root of T; at every step, the next node in the path is chosen uniformly at random from the set of all children of the current node.
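Definition 26 is straightforward to implement; the toy sketch below (tree encoding ours) samples a random root-to-leaf path and counts its branching nodes, the quantity tracked in the next two lemmas:

```python
import random

def random_path(children, root):
    """Definition 26: start at the root; at each step move to a
    uniformly random child, stopping at a leaf.
    children: dict node -> list of child nodes (leaves map to [])."""
    path = [root]
    while children.get(path[-1]):
        path.append(random.choice(children[path[-1]]))
    return path

def branching_nodes(children, path):
    """Number of nodes on the path that have more than one child."""
    return sum(1 for v in path if len(children.get(v, [])) > 1)
```

The counting argument of Lemma 28 below says, in these terms, that a tree in which a random path typically sees many branching nodes must itself be large.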

Æ

¾

We now consider a random path in the random tree Proof. By Lemma 27 with probability ½ a random

¾ ¯Ö

path in Ì contains at least branching points. By aver-

Ì ´Ü  µ

È

È . Our plan is to prove that w.h.p. this path has a lot of branching nodes (i.e. nodes that have more than aging, this implies that

one child).

ÈÖ ÈÖ  ´Ì ´Ü  µµ



contains at most

È

ƾ ƾ

ª´ µ

¯Ö   ¾   ¾ 

¾

Lemma 27. With probability ½ (over random branching points

´Ü  µµ   ´Ì

choices of and ), a random path È con-

However a simple counting argument shows that if a ran-

Ö µ

tains ª´ guessing nodes with both children present. ¯Ö

dom path in tree Ì has branching points with probability

 Ì ´Ü  µ ½ ¾ Ì ¾ ¾

Proof. Consider a random path in È .By then has size at least nodes. Indeed, let us

¾ Ú Ì ¼ ½

Lemma 25 it has at least Ö edges. We first prove that give each node in an identifier which is string of

Ö µ ¯Ö ¯Ö

there are ª´ guessing nodes along this path with high length that contains the direction on the first branching

 Ú probability. Assume that there are at most Ö ,otherwise point on the way from the root to . An identifier is good iff
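The random walk of Definition 26 and the counting argument above can be sanity-checked on a toy tree. The Python sketch below (ours, not from the paper; the helper names are ours) samples the random path and confirms, on a complete binary tree, that when every root-to-leaf path meets d branching nodes the tree has at least 2^d nodes, in line with the identifier argument in the proof of Lemma 28.

```python
import random

# A tree is a list of child subtrees; a leaf is the empty list.
def random_path(tree, rng):
    """Walk from the root, choosing a uniformly random child at each
    step (Definition 26); return the number of branching nodes seen."""
    branching = 0
    node = tree
    while node:                      # node still has children
        if len(node) > 1:
            branching += 1
        node = rng.choice(node)
    return branching

def tree_size(tree):
    return 1 + sum(tree_size(c) for c in tree)

def complete(d):
    """Complete binary tree of depth d: every root-to-leaf path
    passes through exactly d branching nodes."""
    return [] if d == 0 else [complete(d - 1), complete(d - 1)]

d = 6
t = complete(d)
rng = random.Random(0)
assert all(random_path(t, rng) == d for _ in range(100))
assert tree_size(t) == 2 ** (d + 1) - 1   # 127 nodes for d = 6
assert tree_size(t) >= 2 ** d             # many branchings force a big tree
```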

Lemma 28 along with Lemma 24 imply the following theorem:

Theorem 29. Let U = {0, 1}^n. For any BT algorithm A, with probability 1 − o(1), W_A(x, π) ≥ 2^{Ω(n)}.

We do not need much additional work to estimate the depth-first size. Consider the first branching point of T(x, π); it corresponds to the choice of the value of the first variable x_i. Assume that A recommends considering the assignment x_i = 0 first. However, with probability 1/2, x_i = 1, and the proof of Theorem 29 implies that the subtree corresponding to x_i = 0 then has exponential size. Thus with probability 1/2 − o(1), S_A ≥ 2^{Ω(n)}.

6.3. Free Branching

Here we state a result showing that the 3SAT lower bound extends to an even stronger model than fully adaptive BT. We will need this stronger lower bound below, where we prove a lower bound for knapsack in the fully adaptive model by a reduction from 3SAT.

For any BT problem P over domain D, augment D by including an infinite number of dummy data items R_1, R_2, ..., and let these dummy items be implicitly included in every instance I. Decisions about these items do not affect the semantics of a solution (they are ignored by P). They do, however, allow any fully adaptive algorithm A for P to branch without making a decision about the real data items, so that it can, in particular, consider multiple orderings on the remainder of the items (this ability is not useful in fixed or adaptive BT). One can modify the proof and get the following strengthening of Theorem 29.

Theorem 30. For any fully adaptive BT algorithm with free branching, A, there exists at least one x for which W_A(x, π) ≥ 2^{Ω(n)}.

6.4. BT Reductions

Given the 3SAT lower bound of Theorem 30 and utilizing an appropriate reduction from 3SAT to the SUBSET-SUM search problem, we obtain the same exponential lower bounds for SUBSET-SUM and hence for the simple knapsack problem.

Theorem 31. Any fully adaptive BT algorithm for simple knapsack on n items requires width 2^{Ω(n)}.

While this theorem extends Theorem 10, it does not seem to offer any inapproximation ratio as in Theorem 11, nor does it apply to the semi-revocable model.

Proof. The theorem follows from the standard polytime reduction (see, for example, [15]) from 3SAT to SUBSET-SUM, which turns out to be a "BT-reduction". Let (x_1, …) be a variable item in the weak or strong data type. Given m 3-clauses on n variables, the reduction creates a set of 2n + 2m base-6 numbers, each with n + m digits. Namely, for each propositional variable the reduction creates two knapsack items (one corresponding to positive occurrences and one to negative occurrences of the variable), and two knapsack items corresponding to each clause. (These clause items will correspond to dummy variables in the SAT domain.) Any BT algorithm (with or without free choices) for solving the knapsack problem will induce a BT algorithm (with free choices) for solving 3SAT. Namely, any decision on either of the literal knapsack items for a particular variable gives us a decision for that variable, because an optimal solution to knapsack is one in which exactly one of the items corresponding to the positive and negative literals is taken. For a node in a path of a BT tree that represents the first time an item of that variable appears, we convert the decision to a corresponding truth value for that variable. For the second occurrence in a branch of an item corresponding to a certain variable, we extend the branch (if possible) only where the decision is consistent. Decisions for the clause items will correspond to free decisions. We emphasize that the ordering for the 3SAT instance is well defined, and any optimal solution to knapsack (i.e., one that achieves the target) is a branch that is converted to a branch representing a solution for the 3SAT formula.

Given the specific example above, it is not difficult to define the concept of a BT-reduction between problems which preserves the property of being efficiently computable by a BT algorithm. Intuitively, the main criterion for such a reduction is that it establish a correspondence between items in an instance of the first problem and items in the induced instance of the second problem; in other words, it should be very local. We then need a way to map the decisions of the second problem back to decisions for the first problem. One could formalize this notion at various levels of generality. In order to avoid unnecessary details, we simply say that a problem P_1 reduces to a problem P_2 if there is a mapping from the items of P_1 to sets of items in P_2, and a mapping from decisions on an item of P_2 to decisions in P_1. We can order the items of P_1 given an ordering on those of P_2, provided that the P_2-subsets corresponding to different items of P_1 are disjoint. Then, given any decision on the first item in one of these subsets, we use the decision mapping to decide about the corresponding P_1 item. Given a tree in P_2, the above setting allows us to define an associated tree in P_1.

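The reduction invoked in the proof of Theorem 31 is the standard textbook construction; the sketch below is our own Python rendering (the name `subset_sum_instance` and the clause encoding as tuples of signed integers are ours, and we use base-10 digits rather than the base-6 encoding mentioned above so that column sums can never carry). Each variable yields one item per literal, each clause yields two slack items, and a subset hits the target with a 1 in every variable digit and a 4 in every clause digit iff the formula is satisfiable.

```python
from itertools import combinations

def subset_sum_instance(n, clauses):
    """CLRS-style reduction from 3SAT to SUBSET-SUM in base 10.
    clauses: tuples of nonzero ints, where +i means x_i and -i means not x_i.
    Returns (items, target): 2n + 2m integers with n variable digits
    followed by m clause digits."""
    m = len(clauses)

    def num(digits):                       # most significant digit first
        val = 0
        for d in digits:
            val = val * 10 + d
        return val

    items = []
    for i in range(1, n + 1):
        for lit in (i, -i):                # one item for x_i, one for not x_i
            digits = [1 if k == i else 0 for k in range(1, n + 1)]
            digits += [1 if lit in c else 0 for c in clauses]
            items.append(num(digits))
    for j in range(m):                     # two slack items per clause
        for s in (1, 2):
            d = [0] * (n + m)
            d[n + j] = s
            items.append(num(d))
    target = num([1] * n + [4] * m)        # pick one literal per variable,
    return items, target                   # pad each clause digit up to 4

# (x1 or x2 or not x3) and (not x1 or not x2 or x3) is satisfiable,
# so some subset of the items must reach the target.
items, target = subset_sum_instance(3, [(1, 2, -3), (-1, -2, 3)])
assert any(sum(s) == target
           for r in range(len(items) + 1)
           for s in combinations(items, r))
```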

Finally, for the reduction to work, we have to require that those branches of the tree of P_2 that represent a solution (an optimal solution in the case of an optimization problem) can be associated with solution branches in the corresponding tree in P_1. The one additional point is that, if the induced instance of P_2 contains items that don't correspond to any particular P_1-item (as is the case with the clause items above), then we need to introduce dummy items into P_1's computation. We omit the formal proof that if P_1 reduces to P_2 in the above manner, then the required width/size for P_1 is at most that of P_2.

7. Open Questions

There are many open questions regarding our BT models. Could we show, for example, that the known greedy 2 − o(1) approximation algorithms for vertex cover are the best we can do using a polynomial-width BT algorithm? We show a 4/3 − o(1) inapproximation in the fixed BT model. [18, 9] show an inapproximation result for priority algorithms, and [18] proves a 2 − o(1) inapproximation result for weighted vertex cover in priority algorithms. Such a lower bound would be analogous to the work of [8] in that it would focus on a broad model of computation that contains a 2-approximation and show that the model cannot go much beyond that.

For interval scheduling, can the adaptive BT lower bound be extended to the fully adaptive model? For proportional profit on one machine, we are able to show that a width-2 adaptive BT can achieve a better approximation ratio than a priority algorithm. While we know that for one machine an optimal solution requires width Ω(n), the tradeoff between width and the approximation ratio is not at all understood. For example, what is the best approximation ratio for a width-3 BT? We also do not know if an O(1)-width adaptive BT can achieve an O(1)-approximation ratio for interval scheduling with arbitrary profits. Also, for any of the problems already studied in the priority framework (e.g., [11, 18, 6, 28]), it would be interesting to consider constant- or (in some cases) poly-width BT algorithms.

Will the BT framework lead us to new algorithms (or at least modified interpretations of old algorithms)? Small examples in this direction are the width-2 approximation for interval selection, the linear-width algorithm for 2SAT, and the FPTAS for knapsack presented in this paper.

Finally, while we have shown that the BT model has strong connections to dynamic programming and backtracking, can it be extended to capture other common types of algorithms? For example, we show that BT captures simple dynamic programming (where we consider the input items one by one), but what about other dynamic programming algorithms, such as the longest-common-subsequence or string-alignment algorithms? Also, it seems natural to augment BT with randomness: each decision would be taken with a certain probability, and one would study tradeoffs between the expected width and the probability of obtaining a solution. Finally, [12] recently defined a model that enhances BT by making use of memoization. How much can this improve on the complexity of BT algorithms?

8. Acknowledgments

We thank Spyros Angelopoulos, Paul Beame, Sashka Davis, Jeff Edmonds, and Charles Rackoff for their very helpful comments. A special thanks goes to Alberto Marchetti-Spaccamela for contributing the upper bound of Theorem 11.

References

[1] M. Alekhnovich. More on average case vs approximation complexity. In Proc. 44th Ann. Symp. on Foundations of Computer Science, 2003.
[2] M. Alekhnovich. Lower bounds for k-DNF resolution on random 3CNF. Manuscript, 2004.
[3] M. Alekhnovich, A. Borodin, J. Buresh-Oppenheim, R. Impagliazzo, A. Magen, and T. Pitassi. Toward a Model for Backtracking and Dynamic Programming. http://www.cs.toronto.edu/~avner/papers/model-bt.pdf, 2004.
[4] M. Alekhnovich, E. Hirsch, and D. Itsykson. Exponential lower bounds for the running time of DPLL algorithms on satisfiable formulas. In Automata, Languages and Programming: 31st International Colloquium, ICALP04, 2004.
[5] M. Alekhnovich and A. Razborov. Lower bounds for the polynomial calculus: non-binomial case. In Proc. 42nd Ann. Symp. on Foundations of Computer Science. IEEE Computer Society, 2001.
[6] S. Angelopoulos and A. Borodin. On the power of priority algorithms for facility location and set cover. In Proceedings of the 5th International Workshop on Approximation Algorithms for Combinatorial Optimization, volume 2462 of Lecture Notes in Computer Science, pages 26–39. Springer-Verlag, 2002.
[7] E. M. Arkin and E. L. Silverberg. Scheduling jobs with fixed start and end times. Disc. Appl. Math, 18:1–8, 1987.
[8] S. Arora, B. Bollobás, and L. Lovász. Proving integrality gaps without knowing the linear program. In Proceedings of the 43rd Annual IEEE Conference on Foundations of Computer Science, pages 313–322, 2002.
[9] A. Borodin, J. Boyar, and K. S. Larsen. Priority algorithms for graph optimization problems. In Second Workshop on Approximation and Online Algorithms, volume 3351 of Lecture Notes in Computer Science, pages 126–139. Springer-Verlag, 2005.
[10] A. Borodin, D. Cashman, and A. Magen. How well can primal-dual and local-ratio algorithms perform? Manuscript, 2005.
[11] A. Borodin, M. Nielsen, and C. Rackoff. (Incremental) priority algorithms. Algorithmica, 37:295–326, 2003.
[12] J. Buresh-Oppenheim, S. Davis, and R. Impagliazzo. A formal model of dynamic programming algorithms. Manuscript in preparation, 2004.
[13] V. Chvátal. Hard knapsack problems. Operations Research, 28(6):1402–1411, 1980.
[14] S. Cook and D. Mitchell. Finding hard instances of the satisfiability problem: A survey. In DIMACS Series in Theoretical Computer Science, 1997.
[15] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms, Second Edition (page 1015). MIT Press, Cambridge, Mass., 2001.
[16] M. Davis, G. Logemann, and D. Loveland. A machine program for theorem proving. Commun. ACM, 5:394–397, 1962.
[17] M. Davis and H. Putnam. A computing procedure for quantification theory. J. ACM, 7:201–215, 1960.
[18] S. Davis and R. Impagliazzo. Models of greedy algorithms for graph problems. In Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms, 2004.
[19] B. Dean, M. Goemans, and J. Vondrák. Approximating the stochastic knapsack problem: The benefit of adaptivity. In Proc. 45th Ann. Symp. on Foundations of Computer Science, 2004.
[20] T. Erlebach and F. Spieksma. Interval selection: Applications, algorithms, and lower bounds. Technical Report 152, Computer Engineering and Networks Laboratory, ETH, Oct. 2002.
[21] J. Gu, P. W. Purdom, J. Franco, and B. J. Wah. Algorithms for the satisfiability (SAT) problem: A survey. In Satisfiability (SAT) Problem, DIMACS, pages 19–151. American Mathematical Society, 1997.
[22] M. Halldorsson, K. Iwama, S. Miyazaki, and S. Taketomi. Online independent sets. Theoretical Computer Science, pages 953–962, 2002.
[23] S. Horn. One-pass algorithms with revocable acceptances for job interval selection. MSc thesis, University of Toronto, 2004.
[24] O. Ibarra and C. Kim. Fast approximation algorithms for the knapsack and sum of subset problems. JACM, 1975.
[25] S. Khanna, R. Motwani, M. Sudan, and U. Vazirani. On syntactic versus computational views of approximability. SIAM Journal on Computing, 28:164–191, 1998.
[26] E. L. Lawler. Fast approximation algorithms for knapsack problems. In Proc. 18th Ann. Symp. on Foundations of Computer Science, Long Beach, CA, 1977. IEEE Computer Society.
[27] A. Razborov and S. Rudich. Natural proofs. J. Comput. Syst. Sci., 55(1):24–35, 1997.
[28] O. Regev. Priority algorithms for makespan minimization in the subset model. Information Processing Letters, 84(3):153–157, September 2002.
[29] C. E. Shannon. The synthesis of two-terminal switching circuits. Bell Systems Tech. J., 28:59–98, 1949.
[30] G. Woeginger. When does a dynamic programming formulation guarantee the existence of a fully polynomial time approximation scheme (FPTAS)? INFORMS Journal on Computing, 12:57–75, 2000.

Proceedings of the Twentieth Annual IEEE Conference on Computational Complexity (CCC’05) 1093-0159/05 $20.00 © 2005 IEEE