
Appendix A THE COMPLEXITY OF PROBLEMS

A.1 Preliminaries

Many scheduling problems are combinatorial in nature: problems where we seek the optimum from a very large but finite number of solutions. Sometimes such problems can be solved quickly and efficiently, but often the best solution procedures available are slow and tedious. It therefore becomes important to assess how well a proposed procedure will perform. The theory of computational complexity addresses this issue. The seminal papers of complexity theory date from the early 70's (e.g., Cook, 1971 and Karp, 1972). Today, it is a wide field encompassing many subfields. For a formal treatment, the interested reader may wish to consult Papadimitriou (1994).

As we shall see, the theory partitions all realistic problems into two groups: the "easy" and the "hard" to solve, depending on how complex (hence how fast or slow) the computational procedure for that problem is. The theory defines still other classes, but all except the most artificial mathematical constructs fall into these two. It should be noted that "easy" or "hard" does not simply mean quickly or slowly solved. Sometimes, for small problem instances, "hard" problems may be more quickly solved than "easy" ones. As we shall see, the difficulty of a problem is measured not by the absolute time needed to solve it, but by the rate at which the time grows as the problem size increases.

To this point, we have not used the accepted terminology; we introduce it now. A problem is a well-defined question to which an unambiguous answer exists. Solving the problem means answering the question. The problem is stated in terms of several parameters, numerical quantities which are left unspecified but are understood to be predetermined. They make up the data of the problem. An instance of a problem gives specified values to each parameter. A combinatorial optimization problem, whether maximization or minimization, has for each instance a finite number of candidate solutions from which the answer, or optimal solution, is selected. The choice is based on a real-valued objective function which assigns a value to each candidate solution.

A decision or recognition problem has only two possible answers, "yes" or "no". An example of an optimization problem is a linear program, which asks "what is the greatest value of cx subject to Ax ≤ b?", where bold characters denote n-dimensional vectors (lower case) or n × n matrices (upper case). To make this a combinatorial optimization problem, we might make the variable x bounded and integer-valued so that the number of candidate solutions is finite. A decision problem is "does there exist a solution to the linear program with cx ≥ k?"

To develop complexity theory, it is convenient to state all problems as decision problems. An optimization (say, maximization) problem can always be replaced by a sequence of problems of determining the existence of solutions with values exceeding k1, k2, .... An algorithm is a step-by-step procedure which provides a solution to a given problem; that is, to all instances of the problem. We are interested in how fast an algorithm is. We now introduce a measure of algorithmic speed: the time complexity function.
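Before moving on, the replacement of an optimization problem by a sequence of decision problems can be made concrete with a small sketch. The following Python fragment is our illustration, not the authors': it assumes integer objective values in a known range [lo, hi] and a hypothetical oracle decide(k) answering "does a solution with value ≥ k exist?". Binary search on k then locates the optimum with only O(log(hi − lo)) oracle calls.

```python
def maximize_via_decisions(decide, lo, hi):
    """Recover the maximal objective value from a yes/no oracle.

    decide(k) -- hypothetical oracle: True iff some solution has value >= k
    lo, hi    -- known integer bounds, with decide(lo) == True
    """
    # Invariant: decide(lo) is True; values above hi are known infeasible.
    while lo < hi:
        mid = (lo + hi + 1) // 2   # round up so the search terminates
        if decide(mid):
            lo = mid               # a solution with value >= mid exists
        else:
            hi = mid - 1           # no such solution; shrink from above
    return lo

# Toy usage: the "optimization problem" max x subject to 3x <= 20, x integer.
print(maximize_via_decisions(lambda k: 3 * k <= 20, 0, 100))  # -> 6
```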

A.2 Polynomial versus Exponential Algorithms

Note that we always think of solving problems using a computer. Thus, an algorithm is a piece of computer code. Similarly, the size of a problem instance is technically the number of characters needed to specify the data, or the length of the input needed by the program. For a decision problem, an algorithm receives as input any string of characters, and produces as output either "yes" or "no" or "this string is not a problem instance." An algorithm solves the instance or string in time k if it requires k basic operations (e.g., add, subtract, delete, compare, etc.) to reach one of the three conclusions and stop.

It is customary to use as a surrogate for instance size any number that is roughly proportional to the true value. We shall use the positive integer n to represent the size of a problem instance. In scheduling, this usually represents the number of jobs to be scheduled. In summary, for a decision problem Π:

Definition A.1 The Time Complexity Function (TCF) of algorithm A is:

T_A(n) = maximal time for A to solve any string of length n.

In what follows, the big oh notation introduced by Hardy and Wright (1979) will be used when expressing the time complexity function. We say that, for two real-valued functions f and g, f(n) is O(g(n)), or f(n) is of the same order as g(n), if |f(n)| ≤ k·|g(n)| for all n ≥ 0 and some k > 0. An efficient, polynomially bounded, polynomial time, or simply polynomial algorithm is one which solves a problem instance in time bounded by a power of the instance size. Formally:

Definition A.2 An algorithm A is polynomial time if there exists a polynomial p such that

T_A(n) ≤ p(n), ∀n ∈ Z⁺ ≡ {1, 2, ...}.

More specifically, an algorithm is polynomial of degree c, or has complexity O(n^c), or runs in O(n^c) time if, for some k > 0, the algorithm never takes longer than kn^c (the TCF) to solve an instance of size n.

Definition A.3 The collection P comprises all problems for which a polynomial time algorithm exists.

Problems which belong to P are the ones we referred to earlier as "easy". All other algorithms are called exponential time or just exponential, and problems for which nothing quicker exists are "hard". Although not all algorithms in this class have TCF's that are technically exponential functions, we may think of a typical one as running in O(c^p(n)) time for some polynomial p(n). Other examples of exponential rates of growth are n^n and n!.

We can now see how, as suggested earlier, the terms "hard" and "easy" are somewhat misleading, even though exponential TCFs clearly lead to far more rapid growth in solution times. Suppose an "easy" problem has an algorithm with running time bounded by, say, kn^5. Such a TCF may not be exponential, but it may well be considered pretty rapidly growing. Furthermore, some algorithms take a long time to solve even small problems (large k), and hence are unsatisfactory in practice even if the time grows slowly. On the other hand, an algorithm for which the TCF is exponential is not always useless in practice. The concept of the TCF is a worst case estimate, so complexity is only an upper bound on the amount of time required by an algorithm. This is a conservative measure and usually useful, but it is too pessimistic for some popular algorithms. The simplex method for linear programming, for example, has a TCF that is O(2^m) where m is the number of constraints, but it has been shown (see Nemhauser et al., 1989) that for the average case the complexity is only O(nm) where n is the number of variables. Thus, the algorithm is actually very fast for most problems encountered. Despite these caveats, exponential algorithms generally have running times that tend to increase at an exponential rate and often seem to "explode" when a certain problem size is exceeded. Polynomial time algorithms usually turn out to be of low degree (O(n^3) or better), run pretty efficiently, and are considered desirable.
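The "explosion" is easy to see numerically. The short Python sketch below is our illustration, with an arbitrary one-operation-per-nanosecond assumption; it tabulates n^3 against 2^n for growing n, showing why the rate of growth, not the absolute time, is the right measure of difficulty.

```python
# Compare a polynomial TCF (n^3) with an exponential one (2^n),
# assuming, purely for illustration, one basic operation per nanosecond.
NANOS_PER_SEC = 1_000_000_000

for n in (10, 20, 30, 40, 50, 60):
    poly = n ** 3          # operations for an O(n^3) algorithm
    expo = 2 ** n          # operations for an O(2^n) algorithm
    print(f"n={n:2d}  n^3={poly:>9,}  2^n={expo:>22,}  "
          f"(~{expo / NANOS_PER_SEC:,.0f} s for 2^n)")
```

At n = 60 the exponential algorithm needs about 1.15 × 10^9 seconds, over thirty years, while the cubic one is still far below a millisecond.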

A.3 Reducibility

A problem can be placed in P as soon as a polynomial time algorithm is found for it. Sometimes, rather than finding such an algorithm, we may place it in P by showing that it is "equivalent" to another problem which is already known to be in P. We explain what we mean by equivalence between problems with the following definitions.

Definition A.4 A problem Π′ is polynomially reducible, or simply reducible, to a problem Π (Π′ ∝ Π) if, for any instance I′ of Π′, an instance I of Π can be constructed in polynomially bounded time, such that, given the solution S_I to I, the solution S_{I′} to I′ can be found in polynomial time.

We call the construction of the I that corresponds to I′ a polynomial transformation of I′ into I. Later, we will briefly mention a more general type of reducibility, in which the polynomial time requirements for constructing I and finding S_{I′} are relaxed. Until then, reduction will mean polynomial reduction.

Definition A.5 Two problems are equivalent if each is reducible (or simply reduces) to the other.

Since reduction, and hence equivalence, are clearly transitive properties, we can define equivalence classes of problems, where all problems in the same equivalence class are reducible (or equivalent) to each other. Consider polynomial problems. Clearly, for two equivalent problems, if one is known to be polynomial, the other must be, too. Also, if two problems are each known to be polynomial, they are equivalent. This is because any problem Π′ ∈ P is reducible to any other problem Π ∈ P in the following trivial sense. For any instance I′ of Π′, we can pick any instance of Π, ignore its solution, and find the solution to I′ directly. We conclude that P is an equivalence class. We state a third simple result for polynomial problems as a theorem.

Theorem A.1 If Π ∈ P, then Π′ ∝ Π ⇒ Π′ ∈ P.

Proof: Given any instance I′ of Π′, one can find an instance I of Π by applying a polynomial time transformation to I′. Since Π ∈ P, there is a polynomial time algorithm that solves I. Hence, using the transformation followed by the algorithm, I′ can be solved in polynomial time. □

Normally, to "reduce" means to "make simpler". Not so here. Keep in mind that if Π′ reduces to Π (Π′ ∝ Π) then, unless they are equivalent, Π is the more difficult problem. We can say that Π′ is a special case of Π.

A.4 Classification of Hard Problems

In practice, we do not usually use reduction to show a problem is polynomial. We are more likely to start optimistically looking for an efficient algorithm directly, which may be easier than seeking another problem known to be polynomial, for which we can find an appropriate transformation. But suppose we cannot find either an efficient algorithm or a suitable transformation. We begin to suspect that our problem is not "easy" (i.e., is not a member of P). How can we establish that it is in fact "hard"?

We start by defining a larger class of problems, which includes P and also all the difficult problems we may ever encounter. To describe it, consider any combinatorial decision problem. For a typical instance, there may be a very large number of possible solutions which may have to be searched. Picture a candidate solution as a set of values assigned to the variables x = (x_1, ..., x_n). The question may be "for a given vector c, is there a feasible solution x such that cx ≤ B?" and the algorithm may search the solutions until it finds one satisfying the inequality (whereupon it stops with the answer "yes") or exhausts all solutions (and stops at "no"). This may well be a big job. But suppose we are told "the answer is 'yes', and here is a solution x that satisfies the inequality". We feel we must at least verify this, but that is trivial. Intuitively, even for the hardest problems, the amount of work to check that a given candidate solution confirms the answer "yes" should be small, even for very large instances. We will now define our "hard" problems as those which, though hard to solve, are easy to verify, where as usual "easy" means taking a time which grows only polynomially with instance size. To formalize this, let:

V_A(n) = maximal time for A to verify that a given solution establishes the answer "yes" for any instance of length n.

Definition A.6 An algorithm Ã is nondeterministic polynomial time if there exists a polynomial p such that for every input of length n with answer "yes",

V_Ã(n) ≤ p(n).

Definition A.7 The collection NP comprises all problems for which a nondeterministic polynomial algorithm exists.

It may be noted that a problem in NP is solvable by searching a decision tree of polynomially bounded depth, since verifying a solution is equivalent to tracing one path through the tree. From this, it is easy to see that P ⊆ NP. Strangely, complexity theorists have been unable to show that P ⊂ NP; it remains possible that all the problems in NP could actually be solved by polynomial algorithms, so that P = NP. However, since so many brilliant researchers have worked on so many difficult problems in NP for so many years without success, this is regarded as being very unlikely.

Assuming P ≠ NP, as we shall hereafter, it can be shown that the problems in NP include an infinite number of equivalence classes, which can be ranked in order of increasing difficulty; where an equivalence class C is more difficult than another class C′ if, for every problem Π ∈ C and every Π′ ∈ C′, Π′ ∝ Π but not Π ∝ Π′. There also exist problems that cannot be compared: neither Π ∝ Π′ nor Π′ ∝ Π. Fortunately, however, all problems that arise naturally have always been found to lie in one of two equivalence classes: the "easy" problems in P, and the "hard" ones, which we now define. The class of NP-hard problems (NPH) is a collection of problems with the property that every problem in NP can be reduced to the problems in this class. More formally,

Definition A.8 NPH = {Π : ∀Π′ ∈ NP, Π′ ∝ Π}.

Thus each problem in NPH is at least as hard as any problem in NP. We know that some problems in NPH are themselves in NP, though some are not. Those that are include the toughest problems in NP, and form the class of NP-complete problems (NPC). That is,

Definition A.9 NPC = {Π : (Π ∈ NP) and (∀Π′ ∈ NP, Π′ ∝ Π)}.

The problems in NPC form an equivalence class. This is so because all problems in NP reduce to them; hence, since they are all in NP, they reduce to each other. The class NPC includes the most difficult problems in NP. As we mentioned earlier, by a surprising but happy chance, all the problems we ever encounter outside the most abstract mathematical artifacts turn out to belong to either P or NPC.

When tackling a new problem Π, we naturally wonder whether it belongs to P or NPC: is it "easy" or "hard"? As we said, to show that the problem belongs to P, we usually try to find a polynomial time algorithm, though we could seek to reduce it to a problem known to be polynomial. If we are unable to show that the problem is in P, the next step generally is to attempt to show that it lies in NPC; if we can do so, we are justified in not developing an efficient algorithm. To show that our problem Π is hard, we look for a problem, call it Π′, that has already been proven hard and can be reduced to our problem. That is, for any instance of the hard problem, we can efficiently construct an instance of our problem such that knowing the answer to our problem will immediately tell us the answer to the hard problem. Effectively, the hard problem Π′ is a special case of our problem Π. Now, if our problem were easy, the hard problem would be easy. But it is not. So our problem must be hard, too. This logic is summarized in the following theorem, which should be clear enough to require no proof.

Theorem A.2 ∀Π, Π′ ∈ NP, (Π′ ∈ NPC) and (Π′ ∝ Π) ⇒ Π ∈ NPC.

Thus, we need to find a problem Π′ ∈ NPC and show Π′ ∝ Π, thereby demonstrating that Π is at least as hard as any problem in NPC. To facilitate this, we need a list of problems known to be in NPC. Several hundred are listed in Garey and Johnson (1979) in a dozen categories such as Graph Theory, Mathematical Programming, Sequencing and Scheduling, Number Theory, etc., and more are being added all the time. Even given an ample selection, a good deal of tenacity and ingenuity are usually needed to pick one with appropriate similarities to ours and to fill in the precise details of the transformation. In the next section, we describe the basic technique for theorem proving in complexity theory, and conclude with an illustrative example.

A.5 Strong NP-Completeness

We now introduce one of the various ways NP-complete problems can be classified into smaller subclasses, the only one we will use in this monograph: the partitioning of the class NPC into the two sets, ordinary and strongly NP-complete problems. For a detailed description of these classes see Garey and Johnson (1979). In practical terms, an ordinary NP-complete problem can be solved using implicit enumeration algorithms like dynamic programming. In this case, the time complexity of the algorithm is not polynomial in the length of the input data, but it is polynomial in the size of these data. For instance, Partition is an NP-complete problem (to be defined shortly, in Sect. A.7), for which the input data are n positive integers v_i (i = 1, 2, ..., n). Let V be the size of this data: V = Σ_i v_i.
Partition is solvable by dynamic programming in O(nV) time (see Martello and Toth, 1990). Evidently, this complexity is polynomial in V. To see that this complexity bound is not polynomial in the length of the data, consider the binary encoding scheme. In this scheme each v_i can be represented by a string of length O(log v_i), and hence v_1, ..., v_n can be described by a string of length O(Σ_i log v_i), which is no greater than O(n log V). We see that the time complexity O(nV) of the dynamic program (DP) is polynomial in the size V of the data, but not polynomial in the length of the input data, O(n log V). When the complexity of an algorithm is polynomial in the size of the data, but not in the length of the input, we refer to it as a pseudo-polynomial algorithm. An NP-complete problem solvable by a pseudo-polynomial algorithm is called ordinary NP-complete. Otherwise, the problem is strongly NP-complete.
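Since the O(nV) bound is central to the ordinary/strong distinction, a concrete sketch may help. The Python below is our minimal illustration of a subset-sum style DP for Partition (Martello and Toth, 1990, treat the problem in depth; the function name and details are ours). It marks which sums in {0, ..., V/2} are achievable by some subset, in O(nV) time and space, and answers "yes" exactly when V is even and V/2 is achievable.

```python
def partition(values):
    """Decide the Partition problem by dynamic programming in O(nV) time.

    values -- list of n positive integers v_1, ..., v_n
    Returns True iff the values split into two subsets of equal sum.
    """
    V = sum(values)
    if V % 2 != 0:            # an odd total can never be split evenly
        return False
    target = V // 2
    # reachable[s] is True iff some subset of the values seen so far sums to s
    reachable = [False] * (target + 1)
    reachable[0] = True       # the empty subset sums to 0
    for v in values:
        # Scan downward so each value is used at most once.
        for s in range(target, v - 1, -1):
            if reachable[s - v]:
                reachable[s] = True
    return reachable[target]

print(partition([3, 1, 1, 2, 2, 1]))  # True: {3, 2} versus {1, 1, 2, 1}
print(partition([2, 3, 4]))           # False: total 9 is odd
```

Note that the table has V/2 + 1 entries, so the running time scales with the magnitude V of the numbers, not with their O(n log V) encoding length, which is precisely what "pseudo-polynomial" means.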

A.5.1 Pseudo-Polynomial Reduction

As we know, to show ordinary NP-completeness of Π, we start with an ordinary NP-complete Π′ and provide a polynomial reduction to Π. That is, for any instance I′ of Π′ we produce an instance I of Π in polynomial time, and given the solution S_I of I, we produce a solution S_{I′} of I′, also in polynomial time. Now, if we could solve Π in polynomial time, we would have a sequence of three polynomial steps that would solve Π′. But we know Π′ is not polynomially solvable, and so Π cannot be, either.

The same logic applies if we start with a strongly NP-complete Π′. Given a polynomial reduction, Π must also be strongly NP-complete: if Π were anything less (polynomial or ordinary NP-complete), Π′ would be, too. But now note: if either or both of the steps in the reduction were pseudo-polynomial, and if Π could be solved polynomially or pseudo-polynomially, we would still have an overall pseudo-polynomial solution to Π′, giving us the contradiction we need. This should provide the motivation for the following analogue of Definition A.4:

Definition A.10 A problem Π′ is pseudo-polynomially reducible to a problem Π if, for any instance I′ of Π′, an instance I of Π can be constructed in pseudo-polynomially bounded time, such that, given the solution S_I to I, the solution S_{I′} to I′ can be found in pseudo-polynomial time.

This definition leads to the following extension of Theorem A.2:

Theorem A.3 ∀Π, Π′ ∈ NP, if Π′ is strongly NP-complete, and Π′ is pseudo-polynomially reducible to Π, then Π is strongly NP-complete.

This is a stronger result than Theorem A.2. However, it is not to our knowledge ever used, partly because Theorem A.2 seems to be sufficient, partly because pseudo-polynomial transformations are much harder to find than polynomial ones, and finally because Theorem A.3 does not seem to be widely known.

A.6 How to Show a Problem is NP-Complete

We now summarize the process of actually proving the NP-completeness, whether ordinary or strong, of a new problem Π of interest. Recall, we are dealing only with decision problems.

1. Show that Π ∈ NP.

That is, given a solution S_Π of Π we must be able to check whether S_Π provides a "yes" or "no" answer for Π in polynomial time. This is a technical requirement. After all, as we said earlier, "all the problems we ever encounter outside the most abstract mathematics turn out to belong to either P or NPC" and hence to NP. Thus, in practice, this step is commonly assumed without mention.

2. Find a problem Π′ ∈ NPC that reduces to Π.

This, of course, is the crux of the matter. It is not easy to do, requiring technical skills born of insight and experience. If a candidate problem Π′ is to serve our purposes, then by the definition of reduction in Sect. A.3, the following must be true and verifiable:

• For any instance I′ of Π′, we must be able to construct an instance I of Π such that I′ has the solution S_{I′} = yes if and only if I has the solution S_I = yes.

• The times required to construct I from I′, and to construct S_{I′} from S_I, must be polynomial [may be polynomial or pseudo-polynomial] in the size of (i.e., the length of input data required to specify) I′, when Π′ is ordinary [strongly] NP-complete.

3. Determine whether Π is ordinary or strongly NP-complete.

The precise complexity of Π depends largely on the complexity status of the known NP-complete problem Π′.

• If Π′ is strongly NP-complete, then Π is strongly NP-complete.

• If Π′ is ordinary NP-complete, then Π is at least ordinary NP-complete. If in addition a pseudo-polynomial algorithm exists for Π, it is confirmed to be ordinary NP-complete.

Finally, we summarize, in the decision tree of Fig. A.1, the sequence of logical steps required to show the complexity of a new problem by reduction of a known problem. We have presented the steps as they are usually given, leaving out the complication that in some cases the reduction may be pseudo-polynomial.

Fig. A.1 Establishing the complexity status of a problem Π

A.7 A Sample Proof

Here is a very simple application of the reduction process outlined previously. More ingenious reductions will be found scattered through this monograph.

A.7.1 PARTITION ∝ P2||Cmax

We wish to show that the following problem is NP-complete:

P2||Cmax ≤ B?

INSTANCE: Two parallel identical processors, a set J = {J_1, J_2, ..., J_n} of jobs with a processing time p_j for each J_j, and a threshold value B.
QUESTION: Is there a nonpreemptive assignment of the n jobs to the two processors so that at any time each machine processes at most one job, and the completion time of J_j is C_j ≤ B for every j = 1, 2, ..., n?

This is the decision version of the problem P2||Cmax, replacing a minimization problem with a yes-or-no question. It can be solved repeatedly for different values of B in order to find the minimal makespan. To prove it "hard", we will show that the following problem, known to be NP-complete, can be reduced to it:

PARTITION

INSTANCE: A set of k positive integers v_i : i ∈ T = {1, 2, ..., k}.
QUESTION: Is there a subset T′ ⊂ T such that Σ_{i∈T′} v_i = Σ_{i∈T−T′} v_i?

We must first show that P2||Cmax ∈ NP. That is, given a schedule S of the n jobs, we must be able to check in polynomial time whether the associated makespan Cmax(S) ≤ B. To perform the check, we need to find the completion time of the last job processed by each of the processors. This requires no more than n additions involving the processing times of the jobs in J. Thus, Cmax(S) can be computed in O(n) time, and subsequently, whether Cmax(S) ≤ B or not can be established in O(1) time. Hence, P2||Cmax ∈ NP.

Next, we must construct an instance I of P2||Cmax corresponding to an instance I′ of Partition. Let v_1, ..., v_k be the integers in I′. Then I is simply defined by letting n = k, p_i = v_i (i = 1, ..., n), and B = (1/2)Σ_i p_i. The construction of I requires n + 2 assignments and n + 1 basic operations to compute B, so the total amount of effort is O(n).

To confirm that this is indeed a reduction, we need to show that the answer is "yes" for the instance I′ of Partition if and only if the answer is "yes" for the instance I of P2||Cmax. Indeed, given any I′ with answer "yes", let T′ be a subset of T giving Σ_{i∈T′} v_i = Σ_{i∈T−T′} v_i. We can now construct a solution for I by assigning all jobs J_j : j ∈ T′ to be processed (in any order) by M_1, and all jobs J_j : j ∈ T−T′ to be processed (in any order) by M_2. Let S be the resulting schedule for P2||Cmax. By definition of T′,

Σ_{i∈T′} p_i = Σ_{i∈T−T′} p_i = (1/2)Σ_i p_i = B.

Clearly, given T′, the schedule S is constructed in O(n) time. Similarly, given a schedule S that solves P2||Cmax, we can construct the partition T′, T−T′ in O(n) time as well.

Since Partition is an ordinary NP-complete problem, to completely determine the status of P2||Cmax, we will have to develop a pseudo-polynomial algorithm for it. Such an algorithm can in fact be developed (see Cheng and Sin, 1990), which means that P2||Cmax belongs to the class of ordinary NP-complete problems.
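The reduction is mechanical enough to express in a few lines of code. The Python sketch below is our own illustration of the transformation just described, not from the text: partition_to_p2cmax builds the P2||Cmax instance from a Partition instance, and verify checks a proposed two-machine assignment against the threshold B in O(n) time, exactly the verification step used above to show membership in NP.

```python
from fractions import Fraction

def partition_to_p2cmax(values):
    """Transform a Partition instance into a "P2||Cmax <= B?" instance.

    values -- the integers v_1, ..., v_k of the Partition instance
    Returns (processing_times, B): n = k jobs with p_i = v_i,
    and threshold B = (1/2) * sum_i p_i.
    """
    p = list(values)              # p_i = v_i, n = k
    B = Fraction(sum(p), 2)       # exact even if the total is odd
    return p, B

def verify(p, B, assignment):
    """Check a candidate schedule in O(n): is the makespan <= B?

    assignment -- a 0/1 machine index for each job
    """
    load = [0, 0]
    for pj, m in zip(p, assignment):
        load[m] += pj             # completion time of the last job on m
    return max(load) <= B         # Cmax(S) <= B?

# The Partition instance {3, 1, 1, 2, 2, 1} has answer "yes"
# (e.g., T' = {3, 2} versus {1, 1, 2, 1}), so a schedule meeting B = 5 exists.
p, B = partition_to_p2cmax([3, 1, 1, 2, 2, 1])
print(B, verify(p, B, [0, 1, 1, 0, 1, 1]))   # -> 5 True
```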

A.8 Clarification of Terminology

The language of complexity theory can be a bit confusing, with several terms being used in different branches of the literature to refer to the same thing. We have introduced the sets NPH and NPC. All NP-complete problems are NP-hard, and in practice the only NP-hard problems we ever encounter are NP-complete. Though the terms are not synonymous, they have come to be used interchangeably. We have chosen to use "NP-complete" throughout this monograph.

We say a problem is "strongly NP-complete", but we could also say it is "NP-complete in the strong sense", or "unary NP-complete" (a term used in the literature which we will not further motivate). An ordinary NP-complete problem can be simply called NP-complete, without qualification. It is also acceptable to say "NP-complete in the ordinary sense" or "binary NP-complete".

A.9 Conclusion

In this appendix we have presented an introduction to the foundations of computational complexity together with some basic techniques used in proving NP-completeness results. Following Cook's seminal paper (Cook, 1971), the first list of reductions for combinatorial problems was compiled in Karp (1972). The example described in this appendix can be found in Garey and Johnson (1979).

References

1. Cheng, T.C.E. and C.C.S. Sin (1990) A State-of-the-Art Review of Parallel-Machine Scheduling Research, European Journal of Operational Research, 47, 271–292.
2. Cook, S.A. (1971) The Complexity of Theorem Proving Procedures, Proc. 3rd Annual ACM Symposium on Theory of Computing, ACM, New York, 151–158.
3. Garey, M.R. and D.S. Johnson (1979) Computers and Intractability, W.H. Freeman, San Francisco, CA.
4. Hardy, G.H. and E.M. Wright (1979) An Introduction to the Theory of Numbers, Clarendon Press, Oxford, England.
5. Karp, R.M. (1972) Reducibility Among Combinatorial Problems, in R.E. Miller and J.W. Thatcher (eds.), Complexity of Computer Computations, Plenum Press, New York, 85–103.
6. Martello, S. and P. Toth (1990) Knapsack Problems: Algorithms and Computer Implementations, John Wiley and Sons, Chichester, England.
7. Nemhauser, G.L., A.H.G. Rinnooy Kan and M.J. Todd (1989) Handbooks in Operations Research and Management Science, Vol. 1, North Holland, Amsterdam.
8. Papadimitriou, C.H. (1994) Computational Complexity, Addison-Wesley, Reading, Mass.
